Topic: stringclasses (9 values)
News_Title: stringlengths (10–120)
Citation: stringlengths (18–4.58k)
Paper_URL: stringlengths (27–213)
News_URL: stringlengths (36–119)
Paper_Body: stringlengths (11.8k–2.03M)
News_Body: stringlengths (574–29.7k)
DOI: stringlengths (3–169)
Computer
Anonymizing personal data 'not enough to protect privacy,' shows new study
Luc Rocher et al., Estimating the success of re-identifications in incomplete datasets using generative models, Nature Communications (2019). DOI: 10.1038/s41467-019-10933-3
http://dx.doi.org/10.1038/s41467-019-10933-3
https://techxplore.com/news/2019-07-anonymizing-personal-privacy.html
Abstract

While rich medical, behavioral, and socio-demographic data are key to modern data-driven research, their collection and use raise legitimate privacy concerns. Anonymizing datasets through de-identification and sampling before sharing them has been the main tool used to address those concerns. We here propose a generative copula-based method that can accurately estimate the likelihood of a specific person being correctly re-identified, even in a heavily incomplete dataset. On 210 populations, our method obtains AUC scores for predicting individual uniqueness ranging from 0.84 to 0.97, with low false-discovery rate. Using our model, we find that 99.98% of Americans would be correctly re-identified in any dataset using 15 demographic attributes. Our results suggest that even heavily sampled anonymized datasets are unlikely to satisfy the modern standards for anonymization set forth by GDPR and seriously challenge the technical and legal adequacy of the de-identification release-and-forget model.

Introduction

In the last decade, the ability to collect and store personal data has exploded. With two-thirds of the world population having access to the Internet 1 , electronic medical records becoming the norm 2 , and the rise of the Internet of Things, this is unlikely to stop anytime soon. Collected at scale from financial or medical services, when filling in online surveys or liking pages, these data have an incredible potential for good. They drive scientific advancements in medicine 3 , social science 4 , 5 , and AI 6 and promise to revolutionize the way businesses and governments function 7 , 8 . However, the large-scale collection and use of detailed individual-level data raise legitimate privacy concerns. The recent backlashes against the sharing of NHS [UK National Health Service] medical data with DeepMind 9 and the collection and subsequent sale of Facebook data to Cambridge Analytica 10 are the latest evidence that people are concerned about the confidentiality, privacy, and ethical use of their data. In a recent survey, >72% of U.S. citizens reported being worried about sharing personal information online 11 . In the wrong hands, sensitive data can be exploited for blackmailing, mass surveillance, social engineering, or identity theft. De-identification, the process of anonymizing datasets before sharing them, has been the main paradigm used in research and elsewhere to share data while preserving people’s privacy 12 , 13 , 14 . Data protection laws worldwide no longer consider anonymous data to be personal data 15 , 16 , allowing it to be freely used, shared, and sold. Academic journals are, e.g., increasingly requiring authors to make anonymous data available to the research community 17 . While standards for anonymous data vary, modern data protection laws, such as the European General Data Protection Regulation (GDPR) and the California Consumer Privacy Act (CCPA), consider that each and every person in a dataset has to be protected for the dataset to be considered anonymous 18 , 19 , 20 . This new, higher standard for anonymization is further made clear by the introduction in GDPR of pseudonymous data: data that does not contain obvious identifiers but might be re-identifiable and is therefore within the scope of the law 16 , 18 . Yet numerous supposedly anonymous datasets have recently been released and re-identified 15 , 21 , 22 , 23 , 24 , 25 , 26 , 27 , 28 , 29 , 30 , 31 .
In 2016, journalists re-identified politicians in an anonymized browsing history dataset of 3 million German citizens, uncovering their medical information and their sexual preferences 23 . A few months before, the Australian Department of Health publicly released de-identified medical records for 10% of the population, only for researchers to re-identify them 6 weeks later 24 . Before that, studies had shown that de-identified hospital discharge data could be re-identified using basic demographic attributes 25 and that diagnostic codes, year of birth, gender, and ethnicity could uniquely identify patients in genomic studies data 26 . Finally, researchers were able to uniquely identify individuals in anonymized taxi trajectories in NYC 27 , bike sharing trips in London 28 , subway data in Riga 29 , and mobile phone and credit card datasets 30 , 31 . Statistical disclosure control researchers and some companies dispute the validity of these re-identifications: as datasets are always incomplete, journalists and researchers can never be sure they have re-identified the right person, even if they found a match 32 , 33 , 34 , 35 . They argue that this provides strong plausible deniability to participants and reduces the risks, making such de-identified datasets anonymous, including according to GDPR 36 , 37 , 38 , 39 . De-identified datasets can be intrinsically incomplete, e.g., because the dataset only covers patients of one of the hospital networks in a country or because they have been subsampled as part of the de-identification process. For example, the U.S. Census Bureau releases only 1% of its decennial census, and sampling fractions for international censuses range from 0.07% in India to 10% in South American countries 40 . Companies are adopting similar approaches with, e.g., the Netflix Prize dataset including <10% of their users 41 . Imagine a health insurance company that decides to run a contest to predict breast cancer and publishes a de-identified dataset of 1000 people, 1% of their 100,000 insureds in California, including people’s birth date, gender, ZIP code, and breast cancer diagnosis. John Doe’s employer downloads the dataset and finds one (and only one) record matching Doe’s information: male, living in Berkeley, CA (94720), born on January 2nd, 1968, and diagnosed with breast cancer (self-disclosed by John Doe). This record also contains the details of his recent (failed) stage IV treatments. When contacted, the insurance company argues that matching does not equal re-identification: the record could belong to 1 of the 99,000 other people they insure or, if the employer does not know whether Doe is insured by this company, to anyone else among the 39.5M people living in California. Our paper shows how the likelihood of a specific individual having been correctly re-identified can be estimated with high accuracy even when the anonymized dataset is heavily incomplete. We propose a generative graphical model that can be accurately and efficiently trained on incomplete data. Using socio-demographic, survey, and health datasets, we show that our model exhibits a mean absolute error (MAE) of 0.018 on average in estimating population uniqueness 42 and an MAE of 0.041 when the model is trained on only a 1% population sample.
Once trained, our model allows us to predict whether the re-identification of an individual is correct with an average false-discovery rate of <6.7% for a 95% threshold \(( {\widehat {\xi _x}\, > \,0.95} )\) and an error rate 39% lower than the best achievable population-level estimator. With population uniqueness increasing fast with the number of attributes available, our results show that the likelihood of a re-identification to be correct, even in a heavily sampled dataset, can be accurately estimated, and is often high. Our results reject the claims that, first, re-identification is not a practical risk and, second, sampling or releasing partial datasets provide plausible deniability. Moving forward, they question whether current de-identification practices satisfy the anonymization standards of modern data protection laws such as GDPR and CCPA and emphasize the need to move, from a legal and regulatory perspective, beyond the de-identification release-and-forget model. Results Using Gaussian copulas to model uniqueness We consider a dataset \({\cal{D}}\) , released by an organization, and containing a sample of \(n_{\cal{D}}\) individuals extracted at random from a population of n individuals, e.g., the US population. Each row x ( i ) is an individual record, containing d nominal or ordinal attributes (e.g., demographic variables, survey responses) taking values in a discrete sample space \({\cal{X}}\) . We consider the rows x ( i ) to be independent and identically distributed, drawn from the probability distribution X with \({\Bbb P}(X = {\boldsymbol{x}})\) , abbreviated p ( x ). Our model quantifies, for any individual x , the likelihood ξ x for this record to be unique in the complete population and therefore always successfully re-identified when matched. From ξ x , we derive the likelihood κ x for x to be correctly re-identified when matched, which we call correctness. If Doe’s record x ( d ) is unique in \({\cal{D}}\) , he will always be correctly re-identified ( \(\kappa _{{\boldsymbol{x}}^{(d)}} = 1\) and \(\xi _{{\boldsymbol{x}}^{(d)}} = 1\) ). However, if two other people share the same attribute ( \({\boldsymbol{x}}^{(d)}\) not unique, \(\xi _{{\boldsymbol{x}}^{(d)}} = 0\) ), Doe would still have one chance out of three to have been successfully re-identified \(\left( {\kappa _{{\boldsymbol{x}}^{(d)}} = 1/3} \right)\) . We model \(\xi _{\boldsymbol{x}}\) as: $$\xi _{\boldsymbol{x}} \equiv {\Bbb P}\left({\boldsymbol{x}}{\hbox{ unique in }}({\boldsymbol{x}}^{(1)}, \ldots ,{\boldsymbol{x}}^{(n)}) \;|\; \exists i,{\boldsymbol{x}}^{(i)} = {\boldsymbol{x}}\right)$$ (1) $$= \left(1 - p({\boldsymbol{x}})\right)^{n - 1}$$ (2) and κ x as: $$\kappa _{\boldsymbol{x}} \equiv {\Bbb P}\left({\boldsymbol{x}}{\hbox{ correctly matched in }} ({\boldsymbol{x}}^{(1)}, \ldots ,{\boldsymbol{x}}^{(n)}) \;|\; \exists i,{\boldsymbol{x}}^{(i)} = {\boldsymbol{x}}\right)$$ (3) $$= \frac{1}{n}\frac{{1 - \xi _{\boldsymbol{x}}^{n/(n - 1)}}}{{1 - \xi _{\boldsymbol{x}}^{1/(n - 1)}}}$$ (4) with proofs in “Methods”. We model the joint distribution of X 1 , X 2 , … X d using a latent Gaussian copula 43 . Copulas have been used to study a wide range of dependence structures in finance 44 , geology 45 , and biomedicine 46 and allow us to model the density of X by specifying separately the marginal distributions, easy to infer from limited samples, and the dependency structure. 
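To make the quantities just defined concrete, here is a minimal Python sketch (ours, not the authors' released Julia/Python code) of Eqs. (1)-(4): it maps an estimated record probability p(x) and a population size n to the uniqueness likelihood ξx and the match-correctness κx. The California population figure reuses the John Doe example from the introduction.

# Minimal sketch of Eqs. (1)-(4); not the authors' released implementation.

def uniqueness(p_x: float, n: int) -> float:
    """xi_x, Eq. (2): probability that none of the other n - 1 people
    in the population share the attribute combination x."""
    return (1.0 - p_x) ** (n - 1)

def correctness(p_x: float, n: int) -> float:
    """kappa_x, a closed form equivalent to Eq. (4): expected probability
    that a match on x points to the right person."""
    if p_x == 0.0:
        return 1.0  # x is certainly unique, so any match is correct
    return (1.0 - (1.0 - p_x) ** n) / (n * p_x)

# If x is expected to be shared by about 3 people in California
# (n = 39.5M), kappa_x is close to the deterministic one-in-three
# chance discussed in the text: (1 - exp(-3))/3, about 0.32.
n = 39_500_000
print(uniqueness(3 / n, n), correctness(3 / n, n))

Equation (4) follows from this closed form by substituting ξx^(1/(n−1)) = 1 − p(x), as shown in “Methods”.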
For a large sample space \({\cal{X}}\) and a small number \(n_{\cal{D}}\) of available records, Gaussian copulas provide a good approximation of the density using only d ( d − 1)/2 parameters for the dependency structure and no hyperparameter. The density of a Gaussian copula C Σ is expressed as: $$c_\Sigma ({\boldsymbol{u}}) = \frac{1}{{\sqrt {{\mathrm{det}}\,\Sigma } }}{\mathrm{exp}}\left( { - \frac{1}{2}{\mathrm{\Phi }}^{ - 1}({\boldsymbol{u}})^T \cdot (\Sigma ^{ - 1} - {\mathrm{I}}) \cdot {\mathrm{\Phi }}^{ - 1}({\boldsymbol{u}})} \right)$$ (5) with a covariance matrix Σ, u ∈ [0, 1] d , and Φ the cumulative distribution function (CDF) of a standard univariate normal distribution. We estimate from \({\cal{D}}\) the marginal distributions Ψ (marginal parameters) for X 1 , …, X d and the copula distribution Σ (covariance matrix), such that p ( x ) is modeled by $$q({\boldsymbol{x}}|\Sigma ,\Psi ) = {\int}_{F_1^{ - 1}(x_1 - 1|\Psi )}^{F_1^{ - 1}(x_1|\Psi )} \ldots {\int}_{F_d^{ - 1}(x_d - 1|\Psi )}^{F_d^{ - 1}(x_d|\Psi )} c_\Sigma ({\boldsymbol{u}})\,{\mathrm{d}}{\boldsymbol{u}}$$ (6) with F j the CDF of the discrete variable X j . In practice, the copula distribution is a continuous distribution on the unit cube, and p ( x ) its discrete counterpart on \({\cal{X}}\) (see Supplementary Methods). We select, using maximum likelihood estimation, the marginal distributions from categorical, logarithmic, and negative binomial count distributions (see Supplementary Methods). Sampling the complete set of covariance matrices to estimate the association structure of copulas is computationally expensive for large datasets. We rely instead on a fast two-step approximate inference method: we infer separately each pairwise correlation factor Σ ij and then project the constructed matrix Σ on the set of symmetric positive definite matrices to accurately recover the copula covariance matrix (see “Methods”). We collect five corpora from publicly available sources: population census (USA and MERNIS) as well as surveys from the UCI Machine Learning repository (ADULT, MIDUS, HDV). From each corpus, we create populations by selecting subsets of attributes (columns) uniformly. The resulting 210 populations cover a large range of uniqueness values (0–0.96), numbers of attributes (2–47), and records (7108–9M individuals). For readability purposes, we report in the main text the numerical results for all five corpora but will show figures only for USA. Figures for MERNIS, ADULT, MIDUS, and HDV are similar and available in Supplementary Information. Figure 1a shows that, when trained on the entire population, our model correctly estimates population uniqueness \(\Xi _X = \mathop {\sum}\nolimits_{{\boldsymbol{x}} \in {\cal{X}}} p({\boldsymbol{x}})\left(1 - p({\boldsymbol{x}})\right)^{n - 1}\) , i.e., the expected percentage of unique individuals in ( x (1) , x (2) , …, x ( n ) ). The MAE between the empirical uniqueness of our population Ξ X and the estimated uniqueness \(\widehat {\Xi _X}\) is 0.028 ± 0.026 [mean ± s.d.] for USA and 0.018 ± 0.019 on average across every corpus (see Table 1 ). Figure 1a and Supplementary Fig. 1 furthermore show that our model correctly estimates uniqueness across all values of uniqueness, with low within-population s.d. (Supplementary Table 3 ). Fig. 1 Estimating the population uniqueness of the USA corpus. 
a We compare, for each population, empirical and estimated population uniqueness (boxplot with median, 25th and 75th percentiles, maximum 1.5 interquartile range (IQR) for each population, with 100 independent trials per population). For example, date of birth, location (PUMA code), marital status, and gender uniquely identify 78.7% of the 3 million people in this population (empirical uniqueness) that our model estimates to be 78.2 ± 0.5% (boxplot in black). b Absolute error when estimating USA’s population uniqueness when the disclosed dataset is randomly sampled from 10% to 0.1%. The boxplots (25, 50, and 75th percentiles, 1.5 IQR) show the distribution of mean absolute error (MAE) for population uniqueness, at one subsampling fraction across all USA populations (100 trials per population and sampling fraction). The y axis shows both p , the sampling fraction, and \(n_{\cal{S}} = p \times n\) , the sample size. Our model estimates population uniqueness very well for all sampling fractions with the MAE slightly increasing when only a very small number of records are available ( p = 0.1% or 3061 records) Full size image Table 1 Mean absolute error (mean ± s.d.) when estimating population uniqueness (100 trials per population) Full size table Figure 1b shows that our model estimates population uniqueness very well even when the dataset is heavily sampled (see Supplementary Fig. 2 , for other populations). For instance, our model achieves an MAE of 0.029 ± 0.015 when the dataset only contains 1% of the USA population and an MAE of 0.041 ± 0.053 on average across every corpus. Table 1 shows that our model reaches a similarly low MAE, usually <0.050, across corpora and sampling fractions. Likelihood of successful re-identification Once trained, we can use our model to estimate the likelihood of his employer having correctly re-identified John Doe, our 50-year-old male from Berkeley with breast cancer. More specifically, given an individual record x , we can use the trained model to compute the likelihood \(\widehat {\xi _{\boldsymbol{x}}} = \left(1 - q({\boldsymbol{x}}\,|\,\Sigma ,\Psi )\right)^{n - 1}\) for this record x to be unique in the population. Our model takes into account information on both marginal prevalence (e.g., breast cancer prevalence) and global attribute association (e.g., gender and breast cancer). Since the cdf. of a Gaussian copula distribution has no close-form expression, we evaluate q ( x |Σ, Ψ) with a numerical integration of the latent continuous joint density inside the hyper-rectangle defined by the d components ( x 1 , x 2 , …, x d ) 47 , 48 . We assume no prior knowledge on the order of outcomes inside marginals for nominal attributes and randomize their order. Figure 2a shows that, when trained on 1% of the USA populations, our model predicts very well individual uniqueness, achieving a mean AUC (area under the receiver-operator characteristic curve (ROC)) of 0.89. For each population, to avoid overfitting, we train the model on a single 1% sample, then select 1000 records, independent from the training sample, to test the model. For re-identifications that the model predicts to be always correct ( \(\widehat {\xi _{\boldsymbol{x}}}\, > \, 0.95\) , estimated individual uniqueness >95%), the likelihood of them to be incorrect (false-discovery rate) is 5.26% (see bottom-right inset in Fig. 2a ). ROC curves for the other populations are available in Supplementary Fig. 
3 and have overall a mean AUC of 0.93 and mean false-discovery rate of 6.67% for \(\widehat {\xi _{\boldsymbol{x}}}\, > \, 0.95\) (see Supplementary Table 1 ). Fig. 2 The model predicts correct re-identifications with high confidence. a Receiver operating characteristic (ROC) curves for USA populations (light ROC curve for each population and a solid line for the average ROC curve). Our method accurately predicts the (binary) individual uniqueness. (Inset) False-discovery rate (FDR) for individual records classified with ξ > 0.9, ξ > 0.95, and ξ > 0.99. For re-identifications that the model predicts are likely to be correct \(( {\widehat {\xi _{\boldsymbol{x}}} \,> \, 0.95})\) , only 5.26% of them are incorrect (FDR). b Our model outperforms by 39% the best theoretically achievable prediction using population uniqueness across every corpus. A red point shows the Brier Score obtained by our model, when trained on a 1% sample. The solid line represents the lowest Brier Score achievable when using the exact population uniqueness while the dashed line represents the Brier Score of a random guess prediction (BS = 1/3) Full size image Finally, Fig. 2b shows that our model outperforms even the best theoretically achievable prediction using only population uniqueness, i.e., assigning the score \(\xi _{\boldsymbol{x}}^{{\mathrm{(pop)}}} = \Xi _X\) to every individual (ground truth population uniqueness, see Supplementary Methods). We use the Brier Score (BS) 49 to measure the calibration of probabilistic predictions: \({\mathrm{BS}} = \frac{1}{n}\mathop {\sum}\nolimits_{i = 1}^n {\left(\xi _{{\boldsymbol{x}}^{(i)}} - \widehat {\xi _{{\boldsymbol{x}}^{(i)}}}\right)^2}\) with, in our case, \(\xi _{{\boldsymbol{x}}^{(i)}}\) the actual uniqueness of the record \({\boldsymbol{x}}^{(i)}\) (1 if \({\boldsymbol{x}}^{(i)}\) is unique and 0 if not) and \(\widehat {\xi _{{\boldsymbol{x}}^{(i)}}}\) the estimated likelihood. Our model obtains scores on average 39% lower than the best theoretically achievable prediction using only population uniqueness, emphasizing the importance of modeling individuals’ characteristics. Appropriateness of the de-identification model Using our model, we revisit the (successful) re-identification of Gov. Weld 25 . We train our model on the 5% Public Use Microdata Sample (PUMS) files using ZIP code, date of birth, and gender and validate it using the last national estimate 50 . We show that, as a male born on July 31, 1945 and living in Cambridge (02138), the information used by Latanya Sweeney at the time, William Weld was unique with a 58% likelihood ( ξ x = 0.58 and κ x = 0.77), meaning that Latanya Sweeney’s re-identification had 77% chances of being correct. We show that, if his medical records had included number of children—5 for William Weld—, her re-identification would have had 99.8% chances of being correct! Figure 3a shows that the same combinations of attributes (ZIP code, date of birth, gender, and number of children) would also identify 79.4% of the population in Massachusetts with high confidence \(( {\widehat {\xi _{\boldsymbol{x}}} \,\,> \,\, 0.80} )\) . We finally evaluate the impact of specific attributes on William Weld’s uniqueness. We either change the value of one of his baseline attributes (ZIP code, date of birth, or gender) or add one extra attribute, in both cases picking the attribute at random from its distribution (see Supplementary Methods). 
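This perturbation experiment can be sketched in a few lines; here `xi_hat` stands in for the trained copula estimator \(\widehat {\xi _{\boldsymbol{x}}}\), and all names are illustrative rather than taken from the authors' code.

import numpy as np

rng = np.random.default_rng(0)

def perturb_one(record: dict, attr: str, values, probs) -> dict:
    """Copy the record with `attr` redrawn from its marginal distribution."""
    new = dict(record)
    new[attr] = rng.choice(values, p=probs)
    return new

def perturbation_distribution(record, attr, marginals, xi_hat, trials=1000):
    """Distribution of predicted uniqueness after randomly replacing (or,
    if `attr` is absent from the record, adding) a single attribute."""
    values, probs = marginals[attr]
    return np.array([xi_hat(perturb_one(record, attr, values, probs))
                     for _ in range(trials)])

Comparing each resulting distribution against the baseline prediction for the unperturbed record reproduces the kind of boxplots shown in Fig. 3c.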
Figure 3c shows, for instance, that individuals with 3 cars or no car are harder to re-identify than those with 2 cars. Similarly, it shows that it would not take much to re-identify people living in Harwich Port, MA, a city of <2000 inhabitants. Fig. 3 Average individual uniqueness increases fast with the number of collected demographic attributes. a Distribution of predicted individual uniqueness knowing ZIP code, date of birth, and gender (resp. ZIP code, date of birth, gender, and number of children) in blue (resp. orange). The dotted blue line at \(\widehat {\xi _{\boldsymbol{x}}} = 0.580\) (resp. dashed orange line at \(\widehat {\xi _{\boldsymbol{x}}} = 0.997\) ) illustrates the predicted individual uniqueness of Gov. Weld knowing the same combination of attributes. (Inset) The correctness κ x is solely determined by uniqueness ξ x and population size n (here for Massachusetts). We show individual uniqueness and correctness for William Weld with three (in blue) and four (in orange) attributes. b The boxplots (25, 50, and 75th percentiles, 1.5 IQR) show the average uniqueness 〈 ξ x 〉 knowing k demographic attributes, grouped by number of attributes. The individual uniqueness scores ξ x are estimated on the complete population in Massachusetts, based on the 5% Public Use Microdata Sample files. While few attributes might not be sufficient for a re-identification to be correct, collecting a few more attributes will quickly render the re-identification very likely to be successful. For instance, 15 demographic attributes would render 99.98% of people in Massachusetts unique. c Uniqueness varies with the specific value of attributes. For instance, a 33-year-old is less unique than a 58-year-old person. We here either ( i ) randomly replace the value of one baseline attribute (ZIP code, date of birth, or gender) or ( ii ) add one extra attribute, both by sampling from its marginal distribution, to the uniqueness of a 58-year-old male from Cambridge, MA. The dashed baseline shows his original uniqueness \(\widehat {\xi _{\boldsymbol{x}}} = 0.580\) and the boxplots the distribution of individual uniqueness obtained after randomly replacing or adding one attribute. A complete description of the attributes and method is available in Supplementary Methods Full size image Modern datasets contain a large number of points per individuals. For instance, the data broker Experian sold Alteryx access to a de-identified dataset containing 248 attributes per household for 120M Americans 51 ; Cambridge university researchers shared anonymous Facebook data for 3M users collected through the myPersonality app and containing, among other attributes, users’ age, gender, location, status updates, and results on a personality quiz 52 . These datasets do not necessarily share all the characteristics of the one studied here. Yet, our analysis of the re-identification of Gov. Weld by Latanya Sweeney shows that few attributes are often enough to render the likelihood of correct re-identification very high. For instance, Fig. 3b shows that the average individual uniqueness increases fast with the number of collected demographic attributes and that 15 demographic attributes would render 99.98% of people in Massachusetts unique. 
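The qualitative effect in Fig. 3b, uniqueness rising quickly with the number of attributes, can be illustrated with a deliberately simplified simulation that draws attributes independently (unlike the paper's copula model, which captures their associations); the population size and attribute cardinality below are arbitrary.

import numpy as np

rng = np.random.default_rng(0)

def fraction_unique(n: int, k: int, m: int = 12) -> float:
    """Share of n individuals whose k-attribute profile is unique when each
    attribute is drawn from a random skewed categorical on m outcomes."""
    cols = [rng.choice(m, size=n, p=rng.dirichlet(0.5 * np.ones(m)))
            for _ in range(k)]
    _, counts = np.unique(np.stack(cols, axis=1), axis=0, return_counts=True)
    return (counts == 1).sum() / n

for k in (3, 6, 9, 12, 15):
    print(k, round(fraction_unique(100_000, k), 3))

Even this toy model shows uniqueness climbing toward 1 as k grows; the copula model makes the same computation faithful to real attribute dependencies.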
Our results, first, show that few attributes are often sufficient to re-identify individuals with high confidence in heavily incomplete datasets and, second, reject the claim that sampling or releasing partial datasets, e.g., from one hospital network or a single online service, provides plausible deniability. Third, they show that even if population uniqueness is low (an argument often used to justify that data are sufficiently de-identified to be considered anonymous 53 ), many individuals remain at risk of being successfully re-identified by an attacker using our model. As standards for anonymization are being redefined, including by national and regional data protection authorities in the EU, it is essential for them to be robust and account for new threats like the one we present in this paper. They need to take into account the individual risk of re-identification and the lack of plausible deniability, even if the dataset is incomplete, as well as legally recognize the broad range of provable privacy-enhancing systems and security measures that would allow data to be used while effectively preserving people’s privacy 54 , 55 .

Discussion

In this paper, we proposed and validated a statistical model to quantify the likelihood of a re-identification attempt being successful, even if the disclosed dataset is heavily incomplete. Beyond refuting the claim that the incompleteness of the dataset provides plausible deniability, our method also challenges claims that a low population uniqueness is sufficient to protect people’s privacy 53 , 56 . Indeed, an attacker can, using our model, correctly re-identify an individual with high likelihood even if the population uniqueness is low (Fig. 3a ). While more advanced guarantees like k-anonymity 57 would give every individual in the dataset some protection, they have been shown to be NP-hard to achieve 58 , hard to attain in modern high-dimensional datasets 59 , and not always sufficient 60 . While developed to estimate the likelihood of a specific re-identification being successful, our model can also be used to estimate population uniqueness. We show in Supplementary Note 1 that, while not its primary goal, our model performs consistently better than existing methods for estimating population uniqueness on all five corpora (Supplementary Fig. 4 , P < 0.05 in 78 cases out of 80 using Wilcoxon’s signed-rank test) 61 , 62 , 63 , 64 , 65 , 66 and consistently better than previous attempts to estimate individual uniqueness 67 , 68 . Existing approaches, indeed, exhibit unpredictably large over- and under-estimation errors. Finally, recent work quantifies the correctness of individual re-identification in incomplete (10%) hospital data using complete population frequencies 24 . Compared with this work, our approach neither requires external data nor assumes such data to be complete. To study the stability and robustness of our estimations, we perform further experiments (Supplementary Notes 2 – 8 ). First, we analyze the impact of marginal and association parameters on the model error and show how to use exogenous information to lower it. Table 1 and Supplementary Note 7 show that, at very small sampling fractions (below 0.1%), where the error is largest, the error is mostly determined by the marginals and converges after a few hundred records when the exact marginals are known. The estimated copula covariance parameters exhibit no significant bias, and their error decreases fast as the sample size increases (Supplementary Note 8 ).
As our method separates the inference of marginals from that of the association structure, exogenous information from larger data sources could also be used to estimate marginals with higher accuracy. For instance, count distributions for attributes such as date of birth or ZIP code could be directly estimated from national surveys. We replicate our analysis on the USA corpus using a subsampled dataset to infer the association structure along with the exact counts for marginal distributions. Incorporating exogenous information reduces, e.g., the mean MAE of uniqueness across all corpora by 48.6% (P < 0.01, Mann–Whitney) for a 0.1% sample. Exogenous information becomes less useful as the sampling fraction increases (Supplementary Table 2). Second, our model assumes that \({\cal{D}}\) is either uniformly sampled from the population of interest X or, as several census bureaus do, released with post-stratification weights to match the overall population. We believe this to be a reasonable assumption, as biases in the data would greatly affect its usefulness and any application of it, including our model. To overcome an existing sampling bias, the model can be ( i ) further trained on a random sample from the population \({\cal{D}}\) (e.g., microdata census or survey data) and then applied to a non-uniform released sample (e.g., hospital data, not uniformly sampled from the population) or ( ii ) trained using better, potentially unbiased, estimates for marginals or association structure coming from other sources (see above). Third, since \({\cal{D}}\) is a sample from the population X , only the records that are unique in the sample can be unique in the population. Hence, we further evaluate the performance of our model only on records that are sample unique and show that this only marginally decreases the AUC (Supplementary Note 5). We nevertheless prefer not to restrict our predictions to sample-unique records, (a) because our model needs to perform well on non-sample-unique records for us to estimate correctness and (b) because this keeps the method robust if oversampling or sampling with replacement were used.

Methods

Inferring marginal distributions

Marginals are either (i) unknown and estimated from the marginals of the population sample \(X_{\cal{S}}\) (the assumption used in the main text) or (ii) known, with their exact distribution and cumulative density function directly available. In the first case, we fit marginal counts to categorical (naive plug-in estimator), negative binomial, and logarithmic distributions using maximum log-likelihood. We compare the obtained distributions and select the best fit according to its Bayesian information criterion (BIC): $${\mathrm{BIC}} = - 2\log \widehat L + k \log n_{\cal{D}}$$ (7) where \(\widehat L\) is the maximized value of the likelihood function, \(n_{\cal{D}}\) the number of individuals in the sample \({\cal{D}}\) , and k the number of parameters in the fitted marginal distribution.

Inferring the parameters of the latent copula

Each cell Σ ij of the Σ covariance matrix of a multivariate copula distribution is the correlation parameter of a pairwise copula distribution. Hence, instead of inferring Σ from the set of all covariance matrices, we separately infer every cell Σ ij ∈ [0, 1] from the joint sample of \({\cal{D}}_i\) and \({\cal{D}}_j\) .
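Before the copula step is detailed below, here is a minimal sketch of the marginal-selection step above (Eq. 7): fit the three candidate count distributions by maximum likelihood and keep the lowest BIC. scipy's discrete distributions expose no .fit(), so the likelihood is optimized directly; shifting the negative binomial support to start at 1 is our assumption about how attribute values are coded.

import numpy as np
from scipy import optimize, stats

def bic(loglik: float, k_params: int, n: int) -> float:
    """Eq. (7): BIC = -2 log L + k log n."""
    return -2.0 * loglik + k_params * np.log(n)

def select_marginal(x: np.ndarray) -> str:
    """Pick the best-fitting marginal for positive integer codes x."""
    n = len(x)
    scores = {}

    # Categorical: naive plug-in estimator, one free parameter per
    # observed outcome minus one.
    _, freq = np.unique(x, return_counts=True)
    scores["categorical"] = bic(float((freq * np.log(freq / n)).sum()),
                                len(freq) - 1, n)

    # Negative binomial over x - 1 (support shifted to start at 1).
    nb_nll = lambda t: -stats.nbinom.logpmf(x - 1, t[0], t[1]).sum()
    res = optimize.minimize(nb_nll, x0=[1.0, 0.5],
                            bounds=[(1e-6, None), (1e-6, 1 - 1e-6)])
    scores["negative binomial"] = bic(-res.fun, 2, n)

    # Logarithmic (log-series) distribution on {1, 2, ...}.
    ls_nll = lambda t: -stats.logser.logpmf(x, t[0]).sum()
    res = optimize.minimize(ls_nll, x0=[0.5], bounds=[(1e-6, 1 - 1e-6)])
    scores["logarithmic"] = bic(-res.fun, 1, n)

    return min(scores, key=scores.get)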
We first measure the mutual information \(I({\cal{D}}_i;{\cal{D}}_j)\) between the two attributes and select \(\sigma = \widehat {\Sigma _{ij}}\) minimizing the Euclidean distance between the empirical mutual information and the mutual information of the inferred joint distribution. In practice, since the cdf. of a Gaussian copula is not tractable, we use a bounded Nelder–Mead minimization algorithm. For a given ( σ , (Ψ i , Ψ j )), we sample from the distribution q ( ⋅ | σ , (Ψ i , Ψ j )) and generate a discrete bivariate sample Y from which we measure the objective: $$f(\sigma ) = \left\{ {\begin{array}{*{20}{l}} {\left\| {I({\cal{D}}_i;{\cal{D}}_j) - I(Y_1;Y_2)} \right\|_2} \hfill & {{\mathrm{for}}\,\sigma \in [0,1]} \hfill \\ { + \infty } \hfill & {{\mathrm{otherwise}}} \hfill \end{array}} \right.$$ (8) We then project the obtained \(\widehat \Sigma\) matrix on the set of SDP matrices by solving the following optimization problem: $$\begin{array}{*{20}{c}} {\min\limits_A } & {\left\| {A - \widehat \Sigma } \right\|_2} \\ {{\mathrm{s}}.{\mathrm{t}}.} & {A\succcurlyeq 0} \end{array}$$ (9) Modeling the association structure using mutual information We use the pairwise mutual information to measure the strength of association between attributes. For a dataset \({\cal{D}}\) , we denote by \(I_{\cal{D}}\) the mutual information matrix where each cell \(I({\cal{D}}_i;{\cal{D}}_j)\) is the mutual information between attributes \({\cal{D}}_i\) and \({\cal{D}}_j\) . When evaluating mutual information from small samples, obtained scores are often overestimating the strength of association. We apply a correction for randomness using a permutation model 69 : $$AI({\cal{D}}_i;{\cal{D}}_j) = \frac{{I({\cal{D}}_i;{\cal{D}}_j) - {\Bbb E}(I({\cal{D}}_i;{\cal{D}}_j))}}{{{\max}\{ {\Bbb H}({\cal{D}}_i),{\Bbb H}({\cal{D}}_j)\} - {\Bbb E}(I({\cal{D}}_i;{\cal{D}}_j))}}$$ (10) In practice, we estimate the expected mutual information between \({\cal{D}}_i\) and \({\cal{D}}_j\) with successive permutations of \({\cal{D}}_j\) . We found that the adjusted mutual information provides significant improvement for small samples and large support size \(|{\cal{X}}|\) compared to the naive estimator. Theoretical and empirical population uniqueness For n individuals x (1) , x (2) , …, x ( n ) drawn from X , the uniqueness Ξ X is the expected percentage of unique individuals. It can be estimated either (i) by computing the mean of individual uniqueness or (ii) by sampling a synthetic population of n individuals from the copula distribution. In the former case, we have $$\Xi _X \equiv \frac{1}{n}\,{\Bbb E}\left[ {\mathop {\sum}\limits_{i = 1}^n \left[{\boldsymbol{x}}^{(i)}{\mathrm{unique}} \,\,{\mathrm{in}}({\boldsymbol{x}}^{(1)}, \ldots ,{\boldsymbol{x}}^{(n)})\right]} \right]$$ (11) $$= \frac{1}{n}\,{\Bbb E}\left[ {\mathop {\sum}\limits_{{\boldsymbol{x}} \in {\cal{X}}} T_{\boldsymbol{x}}} \right]$$ (12) $$= \frac{1}{n}\mathop {\sum}\limits_{{\boldsymbol{x}} \in {\cal{X}}} {\Bbb E} [T_{\boldsymbol{x}}]$$ (13) where T x = [ ∃ ! i , x ( i ) = x ] equals one if there exists a single individual i such as x ( i ) = x and zero otherwise. T x follows a binomial distribution B ( p ( x ), n ). 
Therefore $${\Bbb E}[T_{\boldsymbol{x}}] = n{\kern 1pt} p({\boldsymbol{x}}){\kern 1pt} \left(1 - p({\boldsymbol{x}})\right)^{n - 1}$$ (14) and $$\Xi _X = \mathop {\sum}\limits_{{\boldsymbol{x}} \in {\cal{X}}} p ({\boldsymbol{x}})\left(1 - p({\boldsymbol{x}})\right)^{n - 1}$$ (15) This requires iterating over all combinations of attributes, whose number grows exponentially as the number of attributes increases, and quickly becomes computationally intractable. The second method is therefore often more tractable and we use it to estimate population uniqueness in the paper. For cumulative marginal distributions F 1 , F 2 , …, F d and copula correlation matrix Σ, the algorithm 1 (Supplementary Methods) samples n individuals from q ( ⋅ |Σ,Ψ) using the latent copula distribution. From the n generated records ( y (1) , y (2) , …, y ( n ) ), we compute the empirical uniqueness $$\Xi _X = \frac{1}{n}\left| {\left\{ i \in [1,n] \;/\; \forall j \ne i,{\boldsymbol{y}}^{(i)} \ne {\boldsymbol{y}}^{(j)}\right\} } \right|$$ (16) Individual likelihood of uniqueness and correctness The probability distribution \(q( \cdot \,|\,\Sigma ,\Psi )\) can be computed by integrating over the latent copula density. Note that the marginal distributions X 1 to X d are discrete, causing the inverses \(F_1^{ - 1}\) to \(F_d^{ - 1}\) to have plateaus. When estimating p ( x ), we integrate over the latent copula distribution inside the hypercube \([x_1 - 1,x_1] \times [x_2 - 1,x_2] \times \ldots \times [x_d - 1,x_d]\) : $$q({\boldsymbol{x}}\,|\Sigma ,\Psi ) = {\Bbb P}(x_1 - 1 \, < \, X_1 \le x_1, \ldots ,x_d - 1\, < \, X_{d} \le x_{d} |\Sigma , \Psi )$$ (17) $$= {\int}_{F_1^{ - 1}(x_1 - 1|\Psi )}^{F_1^{ - 1}(x_1|\Psi )} \ldots {\int}_{F_d^{ - 1}(x_d - 1|\Psi )}^{F_d^{ - 1}(x_d|\Psi )} c_\Sigma ({\boldsymbol{u}})\,{\mathrm{d}}{\boldsymbol{u}}$$ (18) $$= {\int}_{\phi ^{ - 1}(F_1^{ - 1}(x_1 - 1|\Psi ))}^{\phi ^{ - 1}(F_1^{ - 1}(x_1|\Psi ))} \ldots {\int}_{\phi ^{ - 1}(F_d^{ - 1}(x_d - 1|\Psi ))}^{\phi ^{ - 1}(F_d^{ - 1}(x_d|\Psi ))} \phi _\Sigma ({\boldsymbol{z}})\,{\mathrm{d}}{\boldsymbol{z}}$$ (19) with ϕ Σ the density of a zero-mean multivariate normal (MVN) of correlation matrix Σ. Several methods have been proposed in the literature to estimate MVN rectangle probabilities. Genz and Bretz 47 , 48 proposed a randomized quasi Monte Carlo method which we use to estimate the discrete copula density. The likelihood ξ x for an individual’s record x to be unique in a population of n individuals can be derived from p X ( X = x ): $$\xi _{\boldsymbol{x}} \equiv p_X({\boldsymbol{x}}\,{\hbox{ unique in }}({\boldsymbol{x}}^{(1)}, \ldots ,{\boldsymbol{x}}^{(n)})\;|\;\exists i,{\boldsymbol{x}}^{(i)} = {\boldsymbol{x}})$$ (20) $$= p_X({\boldsymbol{x}}{\hbox{ unique in }}({\boldsymbol{x}}^{(1)}, \ldots ,{\boldsymbol{x}}^{(n)})\;|\;{\boldsymbol{x}}^{(1)} = {\boldsymbol{x}})$$ (21) $$= p_X(\forall i \in [2,n],{\boldsymbol{x}}^{(i)} \ne {\boldsymbol{x}})$$ (22) $$= \left(1 - p({\boldsymbol{x}})\right)^{n - 1}$$ (23) $$\widehat {\xi _{\boldsymbol{x}}} = \left(1 - q({\boldsymbol{x}}\,|\,\Sigma ,\Psi )\right)^{n - 1}$$ Similarly, the likelihood \(\kappa _{\boldsymbol{x}}\) for an individual’s record x to be correctly matched in a population of n individuals can be derived from \(p_X(X = {\boldsymbol{x}})\) . 
With \(T \equiv \mathop {\sum}\nolimits_{i = 1}^n {\left[ {{\boldsymbol{x}}^{(i)} = {\boldsymbol{x}}} \right]} - 1\) , the number of potential false positives in the population, we have: $$\kappa _{\boldsymbol{x}} \equiv {\Bbb P}({\boldsymbol{x}}{\hbox{ correctly matched in }}({\boldsymbol{x}}^{(1)}, \ldots ,{\boldsymbol{x}}^{(n)})\;|\;\exists i,{\boldsymbol{x}}^{(i)} = {\boldsymbol{x}})$$ (24) $$= \mathop {\sum}\limits_{k = 0}^{n - 1} {\frac{1}{{k + 1}}} {\Bbb P}(T = k)$$ (25) $$= \mathop {\sum}\limits_{k = 0}^{n - 1} {\frac{1}{{k + 1}}} \left( {\begin{array}{*{20}{c}} {n - 1} \\ k \end{array}} \right)p({\boldsymbol{x}})^k(1 - p({\boldsymbol{x}}))^{(n - 1 - k)}$$ (26) $$= \frac{1}{{n\,p({\boldsymbol{x}})}}\left( {1 - \left( {1 - p({\boldsymbol{x}})} \right)^n} \right)$$ (27) Note that, since records are independent, T follows a binomial distribution B ( n − 1, p ( x )). We substitute the expression for ξ x in the last formula and obtain: $$\kappa _{\boldsymbol{x}} = \frac{1}{{n\,p({\boldsymbol{x}})}}\left( {1 - \left( {1 - p({\boldsymbol{x}})} \right)^n} \right)$$ (28) $$= \frac{1}{n}\frac{{1 - \xi _{\boldsymbol{x}}^{n/(n - 1)}}}{{1 - \xi _{\boldsymbol{x}}^{1/(n - 1)}}}$$ (29) Data availability The USA corpus, extracted from the 1-Percent Public Use Microdata Sample (PUMS) files, is available at . The 5% PUMS files used to estimate the correctness of Governor Weld’s re-identification are also available at the same address. The ADULT corpus, extracted from the Adult Income dataset, is available at . The HDV corpus, extracted from the Histoire de vie survey, is available at . The MIDUS corpus, extracted from the Midlife in the United States survey, is available at . The MERNIS corpus is extracted from a complete population database of virtually all 48 million individuals born before early 1991 in Turkey that was made available online in April 2016 after a data leak from Turkey’s Central Civil Registration System. Our use of this data was approved by Imperial College as it provides a unique opportunity to perform uniqueness estimation on a complete census survey. Owing to the sensitivity of the data, we have only analyzed a copy of the dataset where every distinct value was replaced by a unique integer to obfuscate records, without loss of precision for uniqueness modeling. A complete description of each corpus is available in the Supplementary Information. Code availability All simulations were implemented in Julia and Python. The source code to reproduce the experiments is available at , along with documentation, tests, and examples.
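To tie the Methods together, here is a compact sketch (again ours, not the released Julia/Python code) of the covariance projection in Eq. (9), solved here by simple eigenvalue clipping, followed by the synthetic-population route to empirical uniqueness in Eq. (16); the toy marginals and correlation matrix are illustrative.

import numpy as np
from scipy.stats import norm

def project_psd(sigma: np.ndarray, eps: float = 1e-9) -> np.ndarray:
    """Nearest symmetric PSD matrix in Frobenius norm, via eigenvalue
    clipping; one simple way to solve the projection of Eq. (9)."""
    sym = (sigma + sigma.T) / 2.0
    w, v = np.linalg.eigh(sym)
    return (v * np.clip(w, eps, None)) @ v.T

def sample_population(sigma, marginal_cdfs, n, rng):
    """Latent MVN draw -> uniforms -> inverse marginal CDFs, in the
    spirit of algorithm 1 (Supplementary Methods)."""
    z = rng.multivariate_normal(np.zeros(sigma.shape[0]), sigma, size=n)
    u = norm.cdf(z)
    cols = [np.minimum(np.searchsorted(cdf, u[:, j]), len(cdf) - 1) + 1
            for j, cdf in enumerate(marginal_cdfs)]
    return np.stack(cols, axis=1)

def empirical_uniqueness(records: np.ndarray) -> float:
    """Eq. (16): fraction of records shared with no other individual."""
    _, counts = np.unique(records, axis=0, return_counts=True)
    return float((counts == 1).sum() / len(records))

rng = np.random.default_rng(0)
sigma = project_psd(np.array([[1.0, 0.6, 0.3],
                              [0.6, 1.0, 0.5],
                              [0.3, 0.5, 1.0]]))
cdfs = [np.cumsum(rng.dirichlet(np.ones(10))) for _ in range(3)]  # toy CDFs
print(empirical_uniqueness(sample_population(sigma, cdfs, 100_000, rng)))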
With the first large fines for breaching the EU General Data Protection Regulation (GDPR) upon us, and the UK government about to review GDPR guidelines, researchers have shown how even anonymised datasets can be traced back to individuals using machine learning.

The researchers say their paper, published today in Nature Communications, demonstrates that allowing data to be used (to train AI algorithms, for example) while preserving people's privacy requires much more than simply adding noise, sampling datasets, and applying other de-identification techniques. They have also published a demonstration tool that allows people to understand just how likely they are to be traced, even if the dataset they are in is anonymised and just a small fraction of it is shared. They say their findings should be a wake-up call for policymakers on the need to tighten the rules for what constitutes truly anonymous data.

Companies and governments both routinely collect and use our personal data. Our data, and the way it is used, is protected under laws like GDPR in the EU and the California Consumer Privacy Act (CCPA) in the US. Data is 'sampled' and anonymised, which includes stripping the data of identifying characteristics like names and email addresses, so that individuals cannot, in theory, be identified. After this process, the data is no longer subject to data protection regulations, so it can be freely used and sold to third parties like advertising companies and data brokers.

The new research shows that, once bought, the data can often be reverse engineered using machine learning to re-identify individuals, despite the anonymisation techniques. This could expose sensitive information about re-identified individuals and allow buyers to build increasingly comprehensive personal profiles of them. The research demonstrates for the first time how easily and accurately this can be done, even with incomplete datasets: 99.98 per cent of Americans could be correctly re-identified in any available 'anonymised' dataset using just 15 characteristics, including age, gender, and marital status.

First author Dr. Luc Rocher of UCLouvain said: "While there might be a lot of people who are in their thirties, male, and living in New York City, far fewer of them were also born on 5 January, are driving a red sports car, and live with two kids (both girls) and one dog."

To demonstrate this, the researchers developed a machine learning model to evaluate the likelihood that an individual's characteristics are precise enough to describe only one person in a population of billions. They also developed an online tool, which does not save data and is for demonstration purposes only, to help people see which characteristics make them unique in datasets. The tool first asks you to enter the first part of your postcode (UK) or ZIP code (US), your gender, and your date of birth, before giving you a probability that your profile could be re-identified in any anonymised dataset. It then asks for your marital status, number of vehicles, home ownership status, and employment status, before recalculating. With each added characteristic, the likelihood that a match is correct increases dramatically.

Senior author Dr. Yves-Alexandre de Montjoye, of Imperial's Department of Computing and Data Science Institute, said: "This is pretty standard information for companies to ask for. Although they are bound by GDPR guidelines, they're free to sell the data to anyone once it's anonymised. Our research shows just how easily, and how accurately, individuals can be traced once this happens."

He added: "Companies and governments have downplayed the risk of re-identification by arguing that the datasets they sell are always incomplete. Our findings contradict this and demonstrate that an attacker could easily and accurately estimate the likelihood that the record they found belongs to the person they are looking for."

Re-identifying anonymised data is how journalists exposed Donald Trump's 1985–94 tax returns in May 2019.

Co-author Dr. Julien Hendrickx from UCLouvain said: "We're often assured that anonymisation will keep our personal information safe. Our paper shows that de-identification is nowhere near enough to protect the privacy of people's data."

The researchers say policymakers must do more to protect individuals from such attacks, which could have serious ramifications for careers as well as personal and financial lives. Dr. Hendrickx added: "It is essential for anonymisation standards to be robust and account for new threats like the one demonstrated in this paper."

Dr. de Montjoye said: "The goal of anonymisation is so we can use data to benefit society. This is extremely important but should not and does not have to happen at the expense of people's privacy."
10.1038/s41467-019-10933-3
Medicine
Microglia pruning brain synapses captured on film for the first time
Nature Communications (2018). DOI: 10.1038/s41467-018-03566-5
http://dx.doi.org/10.1038/s41467-018-03566-5
https://medicalxpress.com/news/2018-03-microglia-pruning-brain-synapses-captured.html
Abstract Microglia are highly motile glial cells that are proposed to mediate synaptic pruning during neuronal circuit formation. Disruption of signaling between microglia and neurons leads to an excess of immature synaptic connections, thought to be the result of impaired phagocytosis of synapses by microglia. However, until now the direct phagocytosis of synapses by microglia has not been reported and fundamental questions remain about the precise synaptic structures and phagocytic mechanisms involved. Here we used light sheet fluorescence microscopy to follow microglia–synapse interactions in developing organotypic hippocampal cultures, complemented by a 3D ultrastructural characterization using correlative light and electron microscopy (CLEM). Our findings define a set of dynamic microglia–synapse interactions, including the selective partial phagocytosis, or trogocytosis ( trogo -: nibble), of presynaptic structures and the induction of postsynaptic spine head filopodia by microglia. These findings allow us to propose a mechanism for the facilitatory role of microglia in synaptic circuit remodeling and maturation. Introduction Microglia are glial cells that derive from the myeloid hematopoietic lineage and take up long-lived residence in the developing brain. In response to brain injury, microglia migrate to the site of damage and participate in the phagocytic removal of cellular debris 1 . The recent discovery that microglia are also highly motile in the uninjured brain 2 , 3 , continuously extending and retracting processes through the extracellular space, suggests that they may monitor and contribute to synaptic maturation and function. Clues to this activity come from observations that during early postnatal development microglia undergo morphological maturation that matches synaptic maturation 4 , and that they express receptors for neuronal signaling factors that are upregulated during this period 5 . These data, combined with the known phagocytic capacity of myeloid cells, led to the hypothesis that microglia may have a role in the phagocytic elimination of synapses as part of the widespread pruning of exuberant synaptic connections during development 6 , 7 . This hypothesis was supported by two studies that reported the selective engulfment of synaptic structures by microglia and the appearance of excess immature synapses in mice lacking either the fractalkine 8 (Cx3cl1/Cx3cr1) or complement component 9 (C1q/C3/CR3) microglia signaling pathways. Numerous studies have confirmed an important role for microglia in promoting synapse and circuit maturation. Knockout (KO) mice lacking complement factors show cortical excitatory hyperconnectivity 10 , supporting a role for microglia in the elimination of excess synapses in the mammalian neocortex. KO mice lacking fractalkine receptor show transient excitatory hyperconnectivity followed by weak synaptic multiplicity and reduced functional brain connectivity in adulthood 11 , 12 , suggesting that a failure to eliminate synapses at the correct developmental time prevents the normal strengthening of synaptic connections. Notably, inhibitory synapses in hippocampus appear to be unaffected by disruptions of neuron–microglia signaling 11 (but see ref. 13 ). Microglia are also likely to be required for environment-induced brain plasticity as mice lacking microglia P2Y 12 receptors show deficits in early monocular deprivation-associated visual cortical plasticity 14 . 
At the same time, studies have pointed to a role for microglia in synapse formation in the adult brain, showing that they can elicit calcium transients and the formation of filopodia from dendritic branches 15 , and that they are required for learning-induced synapse formation 16 . Together, these studies suggest that microglia have a complex role in shaping maturing circuits. An important question raised by these studies is whether synaptic phagocytosis by microglia underlies some or all of these phenotypes. Unfortunately, support for a role of microglia in the phagocytic elimination of synapses is based entirely on indirect evidence—localization of synaptic material within microglia in fixed specimens and increased synapse density following the disruption of microglia function. In this study, we set out to test the hypothesis that microglia engulf and eliminate synapses during mouse hippocampal development. First, we carried out quantitative confocal microscopy analysis of microglia–synapse interactions in fixed hippocampal tissue. Microglia were found to contact dendritic spines, but contrary to previous reports we found no evidence for the elimination of postsynaptic material. Second, we confirmed these data at the ultrastructural level by correlative light and electron microscopy (CLEM) using focused ion beam scanning electron microscopy (FIB-SEM) and discovered evidence for the partial elimination, or trogocytosis, of presynaptic boutons and axons by microglia. Trogocytosis has been described in the immune system as a non-apoptotic mechanism for the rapid capture of membrane components and differs from phagocytosis, which involves the engulfment and elimination of larger cellular structures (> 1 µm) 17 , 18 , 19 . Third, we carried out time-lapse light sheet microscopy of microglia–synapse interactions in organotypic hippocampal cultures and observed the trogocytosis of exclusively presynaptic material. Time-lapse microscopy also revealed the frequent induction of spine head filopodia at sites of microglia–synapse contacts and these were confirmed at the ultrastructural level. Our findings provide the first direct evidence for the elimination of synaptic material by microglia in living brain tissue and suggest that microglia facilitate circuit maturation by a combination of trogocytosis of the axonal compartment and the remodeling of postsynaptic sites. Results No evidence for phagocytosis of spines by microglia To identify the period of maximal microglia phagocytic activity in postnatal hippocampal development, we performed immunocolocalization analysis of Iba1-labelled microglia with CD68, a phagosomal marker, in fixed sections at postnatal day 8 (P8), P15, P28, and P40 (Supplementary Fig. 1a, b ). CD68 immunoreactivity was high at P8–P15 with a peak at P15 and a gradual decrease in the following weeks. The CD68 immunoreactivity pattern at P15 was confirmed in genetically labeled microglia (Supplementary Fig. 1c, d ). These findings indicate that the second postnatal week of mouse hippocampal development is likely to be a period of active microglia phagocytosis and suggested that this period may be most relevant to search for evidence of phagocytic elimination of synapses by microglia. Previous studies have presented indirect evidence for the phagocytic engulfment of dendritic spines in hippocampus and visual cortex 8 , 14 , 20 . 
For example, immunoreactivity for postsynaptic density 95 (PSD95) protein was found to be localized inside microglia by confocal, super-resolution, and electron microscopy 8 . We attempted to confirm these findings using cytoplasmic neuronal and microglia markers, in triple transgenic mice expressing green fluorescent protein (GFP) in sparse excitatory neurons ( Thy1 ::EGFP 21 ) and tdTomato in microglia ( Cx3cr1 ::CreER 16 ; RC ::LSL-tdTomato 22 ). We analyzed over 8900 spines from secondary dendrites of GFP + neurons in the CA1 stratum radiatum of fixed hippocampal tissue at P15 and found that about 3% of spines were contacted by microglia ( n = 294). The majority of these contacts presented a relatively minor colocalization of spine and microglia fluorescence (apposition, Fig. 1a, c ; n = 171, 1.9% of spines), whereas others were characterized by more extensive colocalization as defined by > 70% of the spine surface contacted by microglia (encapsulation, Fig. 1b, c ; n = 123, 1.4% of spines). For each encapsulation event we carefully examined the integrity of the spine head, neck, and dendritic shaft, and found that in all cases, even in those where the entire spine surface appeared to be contacted by microglia, an intact GFP + spine neck remained visible (Fig. 1b–d ). Thus, using cytoplasmic markers to visualize neuronal and microglia material, we were not able to confirm earlier evidence suggesting the phagocytic engulfment of dendritic spines. To explore whether phagocytic activity might nevertheless be associated with microglia–spine contacts, we performed immunofluorescence colocalization of the phagosomal marker CD68 and microglia–spine contacts. Approximately 15% of microglia–spine contacts (apposition or encapsulation) showed apposed CD68 immunoreactivity (Fig. 1d ), suggesting an involvement of local phagocytic activity in the microglia–neuron contact event. Fig. 1 Microglia do not phagocytose dendritic spines. Representative images of microglia (red, Cx3cr1 ::CreER; RC ::LSL-tdTomato) a apposing or b encapsulating a dendritic spine (green, Thy1 ::EGFP; white insert: projection of the undeconvolved z-stack containing the contacted spine). Note that the neck of the contacted spine is intact (white arrow head). c Quantification of microglia–spine contacts. No spine was found phagocytosed, as contacted spines were always attached to their dendrite ( n = 25 cells from 5 animals for 8944 spines analyzed, error bars are mean + SEM). d Encapsulated spine localizing next to a phagocytic compartment (blue, CD68 immunostaining). Scale bars: 0.5 µm Full size image To test this hypothesis we developed a CLEM approach based on previously published methods 24 , 25 23 , to first identify rare microglia–spine contact events by fluorescence microscopy and then reconstruct in three dimensions the surrounding ultrastructure by electron microscopy (Fig. 2a and Supplementary Movie 1 ). Following fixation of hippocampal tissue in a manner compatible with electron microscopy, interaction events were identified by confocal microscopy. Once a region of interest (ROI) was identified, concentric brands were etched into the surface of the fixed tissue with a UV-laser microdissector microscope and the tissue was embedded and prepared for electron microscopy. The resulting trimmed block of tissue was then subjected to X-ray imaging to identify vasculature and nuclear landmarks, and align these with the branding. 
The block was then subjected to FIB-SEM at 5 nm lateral pixel size and 8 nm section thickness using the landmarks as guides to capture the ROI. Four ultrastructural image stacks (up to 35 × 20 µm area; 2300 images) were obtained containing a total of 8 appositions and 5 encapsulations. Three-dimensional ultrastructural reconstruction confirmed microglia–spine apposition events where microglia and spine membranes were in contact or juxtaposed (7/8 apposed; Fig. 2f ). However, most encapsulations appeared as simple appositions (3/5) with only one ROI showing > 50% of the spine surface contacted by microglia (Fig. 2b–f ). These findings show that caution must be exercised when interpreting colocalization of synaptic material with microglia using light microscopy. Moreover, these data do not support the hypothesis that microglia phagocytose dendritic spine material. Fig. 2 CLEM analysis of microglia-spine interactions. a Schematic of correlative light and electron microscopy (CLEM) workflow. b Confocal orthogonal view of a region of interest (ROI, dotted line in a ) containing a spine encapsulated by microglia. c Segmentation of the ROI containing the encapsulation from the corresponding electron microscopy dataset, side view. d Top view of the ROI revealed that the spine was not encapsulated, with e no sign of elimination. f Quantification showed that the majority of the encapsulations observed by confocal microscopy were simple appositions by electron microscopy ( n = 13 contacts analyzed from two animals). Scale bars: 5 µm for a , 0.5 µm for b – e Full size image Microglia trogocytose presynaptic elements Previous studies have argued for the phagocytosis of presynaptic material by microglia 8 , 9 , 14 , 20 and we re-examined our electron microscopy datasets for evidence of microglia engulfment of identified presynaptic structures. We examined over 56 µm 3 of microglial material for a total surface of 560 µm 2 from eight reconstructed cells. We found 17 confirmed double-membrane inclusion bodies (Fig. 3c ). Two of these contained putative presynaptic vesicles as indicated by their 40 nm diameter (Fig. 3a, c and Supplementary Movie 2 ), suggesting that presynaptic material is a substrate for elimination by microglia. In addition, we found 20 double-membrane structures that appeared to be in the process of engulfment. Many involved axonal shafts (8/20; Fig. 3b, c and Supplementary Movie 3 ) and a smaller number involved presynaptic boutons (3/20). We also frequently observed microglia self-engulfment in which a microglial process was enwrapped and pinched by another process from the same cell (9/20; Fig. 3c and Supplementary Fig. 2 ). Analysis of the size distribution of inclusions revealed that the material being engulfed from boutons and axons typically ranged between 0.01 and 0.05 µm 3 (Fig. 3d ) with an average diameter of 253 ± 24 nm. This shows that presynaptic structures are not entirely phagocytosed by microglia but rather “trogocytosed,” a term originally coined to describe membrane transfer in immune cells 19 and later extended to refer to partial phagocytosis by various cell types, including macrophages 18 , 26 , 27 . We also observed numerous invaginations of microglia facing boutons or axons that allowed us to reconstruct the putative sequence of events leading to the microglial digestion of these structures (Fig. 3e ). Engulfment did not appear to be mediated by the formation of phagocytic cups, as no microglial pseudopodia were observed at the contact site. 
Instead, boutons or axonal pinches appear to sink into microglial cytoplasm before closure of the membrane and subsequent trafficking. These findings argue for the specific trogocytosis of presynaptic structures by microglia and suggest that this activity may be oriented indiscriminately toward axons and synaptic boutons rather than selectively targeting the presynaptic active zone. Fig. 3 Microglia trogocytosis of presynaptic boutons and axons. Representative FIB-SEM image sequences of a complete presynaptic bouton inclusion (dark purple), as identified by its 40 nm vesicles content, inside microglia (red), b partial inclusion containing axonal material (clear blue) inside a microglia. c Quantification of microglial partial and complete inclusions (n = 37 inclusions from 8 cells, 4 animals). d Distribution of the volume of microglial inclusions. e Putative sequence of events leading to presynaptic bouton or axon material digestion by microglia, represented by a schematic and a collection of three examples for each step (gray: undetermined origin, yellow: lysosomes). Scale bars: 200 nm Full size image To explore the dynamics of interaction between synapses and microglia, we developed a time-lapse fluorescence imaging method in brain explant cultures (Supplementary Fig. 3a ). Organotypic hippocampal slice cultures are known to undergo key developmental steps similar to those observed in vivo, including synapse maturation 28 , 29 , 30 , 31 , and have been previously used to study ramified microglia function 32 . As shown previously, microglia initially respond to culturing by retracting their processes and assuming an activated phenotype 33 (Supplementary Fig. 3b ). Following 1 week in culture, however, microglia morphology resembles that found in vivo 34 (Supplementary Fig. 3b, c ). Time-lapse imaging of hippocampal cultures was performed using light sheet fluorescence microscopy, in order to minimize light toxicity common to point-source beam scanning microscopes and to allow for the visualization of multiple fluorophores across very large fields of view (up to 0.5 × 0.5 × 0.2 mm) at relatively high frame rates (up to 1 frame/45 s) for protracted periods (up to 3 h). To validate the technique and determine whether microglia in hippocampal explants showed in vivo-like physiology, we quantified the number and speed of process extension and retraction events (Supplementary Fig. 3d, e and Supplementary Movie 4 ). No significant difference was observed between extension and retraction over time, with an average of 26 extension and 21 retraction events per cell per minute (Supplementary Fig. 3d , two-way analysis of variance (ANOVA), main effect of time: F 2, 12 = 0.22, p = 0.81; main effect of direction: F 1, 12 = 0.96, p = 0.37, n = 4 cells). The speed of extension and retraction events was similar and stable over time (1.9 and 1.8 µm/min, respectively; Supplementary Fig. 3e , two-way ANOVA, main effect of time: F 2, 12 = 3.48, p = 0.06; main effect of direction: F 1, 12 = 0.10, p = 0.77, n = 4 cells) and consistent with previous in vivo imaging studies 3 . Next, we labeled presynaptic CA3 to CA1 Schaffer collateral projections with cytoplasmic near infra-red fluorescent protein (iRFP) following local adeno-associated viral infection (AAV- Syn ::iRFP, Fig. 4a ) of the CA3 region of organotypic slices from Thy1 ::EGFP; Cx3cr1 ::CreER; RC ::LSL-tdTomato triple transgenic mice shortly after culturing. 
Importantly, microglia at the imaging site did not show any detectable morphological changes following viral infection, as they were imaged 2 weeks later and 500 µm distant from the infection site (Supplementary Fig. 4 ). Consistent with our fixed electron microscopy data, we found clear evidence for the engulfment of presynaptic material (Fig. 4b, d and Supplementary Movie 5 ). Surprisingly, presynaptic engulfment events ( n = 11 from 8 microglia analyzed) were rapid, frequently occurring in < 3 min (Fig. 4c ), raising the possibility that some events shorter than our frame interval (1 frame/90 s) went unnoticed. Notably, even in those cases where microglia were seen to eliminate most of the synaptic bouton (2/11 events), presynaptic material remained at the initial site, confirming that microglia engage in partial elimination, or trogocytosis, of presynaptic boutons. Fig. 4 Rapid trogocytosis of presynaptic material by microglia. a Low-magnification image of a stack projection of 35 consecutive optical sections (Δ z = 0.48 µm) showing microglia (red, Cx3cr1 ::CreER; RC ::LSL-tdTomato) surrounded by iRFP + presynaptic boutons from Schaffer collaterals (blue, AAV- Syn ::iRFP) in the CA1 region of organotypic hippocampal cultures. b Time-lapse imaging revealed engulfment of a presynaptic bouton (single optical planes series from a , dotted box). The corresponding optical plane containing the presynaptic bouton (star) is shown in the plain white insert. Although most of the bouton has been internalized by the microglia and trafficked toward the soma (arrowhead), presynaptic material remains at the original site (star), indicating partial elimination. c Distribution of the latency to engulfment (n = 11 events from 8 cells originating from 3 organotypic slice cultures). d Representative image of an iRFP + inclusion in a microglia soma (arrowhead) showing slow degradation. Scale bars: 2 µm Full size image Trogocytosis does not require CR3 signaling The complement system has been shown to be required for the engulfment of apoptotic cells by microglia in the developing hippocampus 35 , for the efficient pruning of synapses during retinothalamic development 6 , 9 , 36 , and for the loss of synaptic structures during neurodegeneration and aging 37 , 38 , 39 , 40 . We therefore tested whether microglia trogocytosis of presynaptic elements was compromised in mice lacking the complement receptor CR3, an essential component of the complement signaling pathway expressed on microglia. Using the time-lapse imaging setup previously described, we analyzed microglia–synapse interactions in slices from CR3 -KO; Thy1 ::EGFP; Cx3cr1 ::CreER; RC ::LSL-tdTomato quadruple transgenic mice (Fig. 5a ). Contrary to our hypothesis, we found no evidence for a deficit in microglia trogocytosis in KO when compared with wild-type (WT) slices (2.3 ± 0.7 vs 1.5 ± 0.6 trogocytosis events/cell over 3 h, respectively; p = 0.37, t -test; six and eight cells analyzed from three cultures, Fig. 5b, c and Supplementary Movie 6 ). There was also no difference in the latency of elimination (WT: 6.0 ± 1.6 min, KO: 3.9 ± 1.1 min, p = 0.28, t -test; Fig. 5d ) or in the number of iRFP inclusions found in microglia soma (WT: 2.6 ± 0.4, KO: 2.2 ± 0.6, p = 0.51, t -test; Fig. 5e ). These data suggest that the complement signaling pathway is not required for microglial trogocytosis of presynaptic elements. Fig. 5 CR3 is not necessary for microglia trogocytosis.
a Low-magnification image of a stack-projection of 33 consecutive optical sections (Δ z = 0.48 µm) showing a microglia (red, Cx3cr1 ::CreER; RC ::LSL-tdTomato) surrounded by presynaptic boutons from Schaffer collaterals (blue, AAV- Syn ::iRFP) in organotypic hippocampal slices from CR3 KO mice. b Time-lapse imaging of CR3 KO microglia–bouton interactions revealed engulfment of presynaptic material (single optical planes series from a , dotted box). No difference was found in the c number or d latency of microglia engulfment events, or e the number of iRFP inclusions per cell between WT and CR3 KO slices (two-sided unpaired t -test, n = 8 and 6 cells from 3 organotypic slice cultures, error bars are mean + SEM). Scale bars: 2 µm Full size image Microglia induce spine head filopodia formation To explore whether microglia might indirectly induce the elimination of spines as a consequence of non-phagocytic contact or presynaptic trogocytosis, we investigated microglia interactions with the postsynaptic compartment by time-lapse imaging in organotypic cultures. Putative contacts between microglia processes and spines were identified and analyzed over time (Fig. 6a ). Microglia–spine contacts were brief (4.2 ± 0.85 min) and microglia frequently re-contacted the same spine, suggesting that the contacts were non-random. Ten percent of microglia-contacted spines (3/31) disappeared during the imaging session (Fig. 6b ), and 13% both appeared and disappeared (4/31), and were classified as transient spines. However, none of these spines were in contact with microglia at the time of disappearance, arguing for a microglia-independent spine elimination process. Importantly, the frequency of disappearance of spines that had been contacted during the imaging session by microglia was not different from that of nearby (< 4 μm away), non-contacted spines (10% vs 11%, respectively, n = 28; Fig. 6c ). Transient spines, on the other hand, were found exclusively among contacted spines when compared with nearby, non-contacted spines (13% vs. 0%). Closer, high-resolution inspection of these microglia-transient spine contact events revealed that these spines formed from filopodia that appeared at the microglia contact point or in proximity to the dendritic shaft (4/31; Fig. 6a, d, i and Supplementary Movie 7 ) similar to a phenomenon recently observed in the mouse cortex using in vivo imaging 15 . Intriguingly, 39% (13/31) of the persistent, mature spines contacted by microglia formed a filopodium protruding from the head (spine head filopodia) and extending toward the microglia process (Fig. 6a, e, i and Supplementary movie 8 ), thus making spines a preferential substrate for microglia-induced filopodia compared to dendritic shaft (13 spine head filopodia vs. 4 shaft filopodia). Occasionally, we noted stretching of the entire spine during microglial contact, suggesting that the microglia process was able to induce profound changes in spine morphology (2/31; Fig. 6a, f, i ). Systematic, morphometric analysis of microglia–spine contact events across the imaging session was used to perform a cross-correlation analysis that revealed a significant increase in spine length that peaked just after microglia contact ( n = 13 spines analyzed, Fig. 6g ). Vectorial correlation of spine head filopodia and microglia process movement direction confirmed that filopodia extended toward the microglia process, and varied in length from 0.4 to 3.1 µm with an average of 1.5 µm ( n = 21 spine head filopodia formations analyzed, Fig. 
6h ). Notably, spine head filopodia were rarely found on nearby, non-contacted spines (7% on non-contacted spines vs 39% on contacted spines, n = 28 and 31 spines analyzed, respectively; Fig. 6i ). Fig. 6 Microglia induce spine head filopodia formation. a Quantification of microglia-spine contact duration (red) over the imaging session (gray). Each line represents a spine selected to have been contacted at least once by microglia and annotated for spine appearance (gray arrowhead), disappearance (black bar), and spine head filopodia formation (SHF, black arrowhead). Representative time sequence images of b spine disappearance, d filopodia formation, e SHF, and f spine stretching. c Quantification of spine retraction rate of contacted versus non-contacted neighboring spines. g Cross-correlation analysis revealed a significant increase in spine length during microglia-spine contact. h Vectorial analysis showed a significant correlation of microglial process direction (red arrow) with filopodia direction (Anderson–Darling test, p = 0.00047, n = 21 spine head filopodia analyzed, black arrows indicating filopodia length and direction, indentation = 1 µm). i Quantification of spine stretching, filopodia formation, and spine head filopodia formation events in contacted versus non-contacted neighboring spines ( n = 31 and 28, respectively, from 4 organotypic slice cultures). Scale bars: 2 µm Full size image Spine head filopodia have been shown to contribute to the formation of new spine-bouton contacts 41 and proposed to be a mechanism for the movement, or “switching,” of spines from one bouton to another, possibly in response to changing synaptic activity or plasticity 41 , 42 . Although our approach did not allow for a systematic assessment of such switching events, we did find that 24% (5/21) of microglia-induced spine head filopodia were associated with spine head relocation, as identified by the displacement of the head from its original site to the tip of the filopodium (Fig. 7a ). Interestingly, spine head filopodia that underwent relocation showed a tendency for longer lifetimes than those that did not (27 vs 12 min, n = 5 and 16, respectively; Fig. 7b ), suggesting that this relocation might be associated with stabilizing synapse formation. This hypothesis was supported by two cases in which we were able to simultaneously image GFP + spines and iRFP + presynaptic boutons, and we could confirm that the induced spine head filopodia made stable contact with a different, neighboring bouton (Fig. 7c, d ). Fig. 7 Spine head filopodia-associated synapse remodeling. a Representative time sequence images of a spine head relocating to the tip of the SHF following its induction by microglia. b Quantification of SHF lifetime revealed more stable filopodia following relocation ( n = 5 relocating SHF vs 16 non-relocating, analyzed from 3 organotypic slice cultures, error bars are mean + SEM). c , d SHF making a stable contact with a neighboring bouton (arrowhead). In c the spine persists at its original location (star), whereas in d the spine relocates to the newly contacted bouton (star). e Example of a microglial process in contact with a spine extending an SHF (dotted box) as identified and visualized by CLEM. Further examination of the fully segmented EM dataset revealed f multiple filopodia extending toward the microglial process, of which g a few were simple filopodia and h the majority originated from mature spines bearing postsynaptic densities (PSDs).
It is noteworthy that the microglial process is in intimate contact with a presynaptic bouton and i several of the SHFs contact the same bouton, j one of which has formed an immature PSD (arrowhead, Supplementary Fig. 5 ) resulting in the formation of a multiple-synapse bouton. Scale bars: 1 µm Full size image A systematic analysis of our FIB-SEM datasets allowed us to confirm the frequent presence of microglia-associated spine head filopodia in fixed brain tissue and rule out the possibility that this phenomenon was an artifact of the organotypic culture. Remarkably, 28% (5/18) of mature, PSD-containing spines found in contact with microglia processes presented a filopodium extending toward microglia. In one particularly striking case (Fig. 7e ), a microglial end process contacting a presynaptic bouton was found surrounded by converging filopodia (15 filopodia originating from 9 dendrites; Fig. 7f ). Consistent with our live-imaging data, the majority (9/15) were spine head filopodia originating from mature, PSD-containing spines (Fig. 7h ), whereas the rest were filopodia extending from dendritic shaft structures (6/15; Fig. 7g ). Intriguingly, a few of these spine head filopodia extended alongside the microglia-contacted presynaptic bouton (Fig. 7i ) and one appeared to initiate a synapse as indicated by the presence of a PSD but no clustering of presynaptic vesicles, resulting in the formation of a multiple-synapse bouton (MSB, Fig. 7j and Supplementary Fig. 5 ). We also noted that 12/15 of the converging filopodia shared their dendrite of origin, suggesting that they could facilitate the formation of class I MSBs in which one bouton synapses with several spines originating from a common neuron, a form of synaptic contact that conveys increased efficacy 43 . Class I MSBs have previously been shown to depend on microglia–neuron signaling and contribute to the strengthening of developing hippocampal circuits and the establishment of normal brain connectivity 11 . Together, these data suggest that microglia are broadly involved in structural synaptic plasticity and circuit maturation, both by the trogocytosis of presynaptic boutons and axons and by the induction of filopodia from postsynaptic sites (Supplementary Fig. 6 ). Discussion Our findings confirm the hypothesis that microglia directly engulf and eliminate synaptic material. However, contrary to previous assumptions, we found no evidence for the phagocytosis of entire synapses. Instead, we observed microglia trogocytosis—or nibbling—of synaptic structures. Importantly, microglia trogocytosis was restricted to presynaptic boutons and axons, with no evidence for elimination of postsynaptic material. Intriguingly, microglia contacts at postsynaptic sites frequently elicited transient filopodia, most of which originated from mature spines. These data support the current hypothesis that microglia can “eat” synaptic material, but point to a more nuanced role for microglia in synapse remodeling that may explain the diverse synaptic alterations observed following the disruption of microglial function. Our observation of microglia engulfment of presynaptic material is consistent with published reports showing the localization of material deriving from axonal projections inside microglia 9 . To the best of our knowledge, our data are the first time-lapse images to directly demonstrate the active engulfment of synaptic material by microglia.
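A quick sanity check on the size claims made here and in the following paragraphs is to convert the inclusion volumes of Fig. 3d into sphere-equivalent diameters. The back-of-the-envelope sketch below uses the values from the text; the spherical approximation is ours, not the authors':

```python
import math

def sphere_equivalent_diameter_nm(volume_um3):
    """Diameter (nm) of a sphere with the given volume: V = (pi/6) * d**3."""
    return (6.0 * volume_um3 / math.pi) ** (1.0 / 3.0) * 1000.0

# Typical inclusion volumes reported in Fig. 3d (µm^3)
for v in (0.01, 0.05):
    print(f"{v} µm^3 -> {sphere_equivalent_diameter_nm(v):.0f} nm")
# 0.01 µm^3 -> 267 nm; 0.05 µm^3 -> 457 nm, consistent with the reported
# 253 ± 24 nm average diameter and below the ~500 nm particle size
# conventionally used to define phagocytosis (see below).
```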
Our extensive characterization of microglial content from three-dimensional (3D) FIB-SEM reconstructions validates previous data from single-section electron microscopy that showed putative double-membrane inclusions of presynaptic material in microglia 9 . Moreover, our observation of intermediates such as invaginations, pinching of presynaptic boutons and axons, and complete inclusions sheds light on the cellular mechanism involved. Synapses were previously proposed to be eliminated by microglia through phagocytosis, a process traditionally defined as the cellular uptake of particles over 0.5 µm in size 44 . Instead, our data show that only small fragments (250 nm average diameter, Fig. 3 ) of the presynaptic compartment are engulfed by microglia. This partial elimination, or trogocytosis (from the Greek trogo: to nibble), has been previously described in immune and amoeboid cells 18 , 26 , 27 that ingest small parts (< 1 µm) of their targets within a few minutes, a timeframe compatible with our observations (Fig. 4 ). Although phagocytosis and trogocytosis likely share common endocytic machinery, they potentially differ in their uptake pathways. In fact, we found that CR3, a microglia-expressed complement receptor involved in phagocytosis and previously proposed to mediate synapse elimination in the retinogeniculate pathway 9 , is not necessary for the trogocytosis of presynaptic structures. However, other components of the complement pathway could be involved and it is possible that different brain regions recruit different pruning pathways. One possible candidate “eat me” signal is phosphatidylserine (PS), a phospholipid known to elicit engulfment by macrophages following its exposure on the outer leaflet of the cell membrane 45 and recently suggested to mediate trogocytosis 27 . The known capacity of PS to laterally diffuse within the membrane could explain our observation that microglia indiscriminately trogocytose presynaptic boutons and axonal shafts. It should also be noted that the elimination of presynaptic material does not necessarily imply elimination of functional synaptic structures, and our observation that microglia trogocytosed primarily axonal shafts rather than boutons suggests that this process may be relatively nonspecific. Presynaptic trogocytosis may be aimed at an overall reduction in axonal processes or, alternatively, it may mediate a remodeling of axons by eliminating, e.g., specific surface-associated factors that inhibit presynaptic site formation. Second, several lines of evidence argue against a major role for microglia in the elimination of postsynaptic material in the developing hippocampus, including the following: (1) the absence of phagocytosis/trogocytosis of postsynaptic material by microglia, as assessed by confocal microscopy in fixed brain sections (Figs. 1 – 2) , (2) the presence of frequent and intimate partial inclusion events (“pinched” material) of exclusively presynaptic boutons and axons as assessed by 3D volume EM (Fig. 3) , and (3) the intake of presynaptic (Fig. 4 ) but not postsynaptic (Fig. 6 ) material by microglia as assessed by time-lapse fluorescence imaging. The fact that microglia do not eliminate postsynaptic structures in the developing hippocampus has implications for the interpretation of previous data in the field. For example, we and others have published data showing localization of postsynaptic proteins inside microglia 8 , 14 , 20 , 46 .
In our current confocal microscopy data, we observed the intimate encapsulation of spines by microglia, a phenomenon that likely explains the previously described colocalization of immunolabeled postsynaptic proteins with microglia. However, using cytoplasmic labeling we noted that all contacted spines were still attached to the dendritic shaft via a spine neck and our 3D FIB-SEM reconstructions did not support the phagocytic engulfment of postsynaptic material by microglia. Previous evidence for the localization of postsynaptic material inside microglia by single-section electron microscopy, either by contrast agent-enhanced visualization of PSD-like material 20 or its immunodetection 8 , is potentially more difficult to counter, but might reflect the limitations inherent to two-dimensional electron microscopy. In addition, it should be noted that PSD95 has been found to be expressed by microglia 47 (but see ref. 48 ), and that immunocolocalization of synaptic proteins with microglia might derive from non-phagocytic membrane exchange 49 . Although it remains possible that elimination of postsynaptic material by microglia might occur in other brain regions, at other developmental stages, or alternatively at such a low frequency that it went unnoticed in our experiments, our data argue that immunofluorescent colocalization of postsynaptic proteins and microglia, even by super-resolution methods (e.g., stimulated-emission depletion), should be interpreted with caution. Lastly, a major observation from our time-lapse imaging data was the induction of transient filopodia following microglia contacts. This observation is in line with a recent in vivo imaging study reporting that microglia contacts can induce local calcium transients in dendritic shafts followed by filopodia formation 15 . It is also consistent with a report showing that microglia participate in the learning-dependent formation of functional synapses via brain-derived neurotrophic factor (BDNF) and its receptor TrkB 16 . Together, these observations argue that filopodia induction by microglia may be a widespread phenomenon and a possible trigger mechanism for the formation of functional synapses. Intriguingly, we observed that the majority of filopodia induced by microglia originate from mature spine heads (Fig. 6 ), a finding confirmed in our electron microscopy datasets. Protrusions from spine heads have been previously described in the hippocampus 41 , 50 , 51 , 52 , 53 , 54 , 55 , in the visual cortex 56 , and in the olfactory bulb 42 , where they have been referred to as spinules, spine head protrusions, or spine head filopodia, depending on the length of the extension measured. Spine-emerging spinules 54 were considered trans-endocytic and surrounded either by axonal material or astrocytes, the latter of which might relate to astrocyte-mediated synaptic pruning described in the thalamic system 57 . On the other hand, spine head filopodia or protrusions were found in the extracellular space between cells, and associated with microtubule invasion of the spine 58 and synaptopodin-dependent actin bundling 51 . The mechanism for spine head filopodia induction by microglia in our preparations is not clear, but might involve the application of tension to “pull” the filopodia, a loosening of the extracellular matrix by microglia invasion, or a release of chemotactic factors.
Intriguingly, BDNF has been shown to induce spine head filopodia 42 , as well as synapse formation upon expression by microglia 16 , suggesting a role for this factor in microglia-dependent circuit remodeling during development. Our time-lapse imaging study revealed that microglia-induced spine head filopodia formation was frequently followed by a relocation of the spine head to the tip of the filopodium and occasionally resulted in a new bouton contact (Fig. 7 ). These observations suggest that spine head filopodia induction might trigger spine switching and potentially the replacement of inefficient synapses with more efficient ones 41 , 42 , 53 . Spine head filopodia can be induced by neurotransmitters such as acetylcholine 53 and glutamate via the activation of AMPA receptors 41 , 42 , 55 . Moderate, but not high, levels of glutamate induce spine head filopodia formation 41 , 52 , opening the possibility that microglia-mediated spine head switching may be modulated by glutamatergic neurotransmission and presynaptic release probability, a hypothesis supported by the observation that blockade of evoked neurotransmitter release by the application of tetrodotoxin increased the incidence of spine head filopodia 41 . Spine switching has the potential to transform single-synapse boutons into MSBs 41 , 59 and, if these events occur on spines emerging from the same dendrite, they might generate so-called class I MSBs, a subclass of excitatory connections that increase during postnatal development and mediate the strengthening of excitatory connectivity 11 , 43 , 50 . Further evidence to support a role for microglia-induced spine head filopodia in MSB formation comes from our anecdotal observation of a dozen spine head filopodia converging toward a single microglia process in intimate contact with a presynaptic bouton on which they formed a nascent MSB (Fig. 7 and Supplementary Fig. 5 ). Although caution must be exercised when extrapolating such anecdotal electron microscopy data, the architecture of this particular case, where most of the filopodia shared their dendrite of origin, supports the hypothesis that microglia might promote the formation of class I MSBs. Such a hypothesis is consistent with previous observations that mice lacking the microglia–neuron signaling factor fractalkine (Cx3cl1) show a deficiency in class I MSBs and impaired maturation of functional circuit connectivity 11 . Overall, our data argue that the prevailing view of microglia as phagocytic cells eliminating synapses during neural circuit development may be overly simplified. Instead, they suggest a broad role for microglia in synaptic remodeling via the trogocytosis of axonal structures, and the induction and reorganization of postsynaptic sites, so as to achieve an appropriate maturation of circuits. Materials and methods Animals C57BL/6J mice were obtained from local EMBL colonies. Thy1 ::EGFP; Cx3cr1 ::CreER; RC ::LSL-tdTomato triple transgenic mice were obtained by crossing Thy1 ::EGFP-M 21 (Jackson Laboratory stock 007788) with Cx3cr1:: creER-YFP 16 (Jackson Laboratory stock 021160) and Rosa26-CAG ::loxP-STOP-loxP-tdTomato-WPRE 22 (Jackson Laboratory stock 007905). Mice were used in the homozygous state for Thy1 ::EGFP and in the heterozygous state for Cx3cr1 ::CreER and RC ::LSL-tdTomato. Cre-mediated recombination was induced by a single injection of 98% Z-isomers hydroxy-tamoxifen diluted in corn oil at 10 mg/mL (Sigma, 1 mg injected per 20 g of mouse weight) at P10.
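As a small worked example of this dosing scheme (1 mg of hydroxy-tamoxifen per 20 g of body weight from a 10 mg/mL stock), the injection volume scales linearly with weight; the pup weight below is hypothetical, chosen only for illustration:

```python
DOSE_MG_PER_G_BW = 1.0 / 20.0    # 1 mg hydroxy-tamoxifen per 20 g body weight
STOCK_MG_PER_ML = 10.0           # stock concentration in corn oil

def injection_volume_ul(body_weight_g):
    """Volume (µL) of 10 mg/mL stock delivering 1 mg per 20 g body weight."""
    dose_mg = body_weight_g * DOSE_MG_PER_G_BW
    return dose_mg / STOCK_MG_PER_ML * 1000.0

# A P10 pup of ~6 g (hypothetical weight) would receive 0.3 mg, i.e. 30 µL.
print(injection_volume_ul(6.0))  # 30.0
```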
Residual yellow fluorescent protein (YFP) expression in microglia yielded a faint signal in the GFP channel that was thresholded out in all analyses. For CR3-KO experiments, triple transgenic mice were additionally crossed with CD11b-deficient mice 60 (Jackson Laboratory stock 003991) and used in the homozygous state. Cx3cr1:: GFP mice 61 (Jackson Laboratory stock 005582) were used in the heterozygous state. All mice were on a C57BL/6J congenic background. Mice were bred, genotyped, and tested at EMBL following protocols approved by the Italian Ministry of Health. Analysis of microglia phagocytic capacity C57BL/6J WT mice were anesthetized intraperitoneally with 2.5% Avertin (Sigma-Aldrich, St Louis) and perfused transcardially with 4% paraformaldehyde (PFA) at P8, P15, P28, and P40. Brains were removed and post-fixed in 4% PFA overnight (ON) at 4 °C. Coronal 50 µm sections were cut on a vibratome (Leica Microsystems, Wetzlar, Germany) and blocked in 20% normal goat serum and 0.4% Triton X-100 in phosphate-buffered saline (PBS) for 2 h at room temperature. CD68 and Iba1 were immunodetected by ON incubation at 4 °C with primary antibodies (rat anti-CD68 1:500, Serotec; rabbit anti-Iba1 1:200, Wako) followed by secondary antibodies (goat anti-rabbit A647 and goat anti-rat A546, 1:400, Life Technologies) incubation in PBS with 0.3% Triton X-100 and 5% goat serum for 2 h at room temperature. Sections were imaged on a TCS SP5 resonant scanner confocal microscope (Leica Microsystems, Mannheim) with a × 63/1.4 oil-immersion objective at 48 nm lateral pixel size with an axial step of 130 nm. Iba1-positive microglia were 3D reconstructed using local contrast on Imaris software and CD68 signal intensity was measured in each individual reconstructed cell. The CD68 expression pattern observed in Iba1-labeled microglia was confirmed in genetically labeled microglia ( Cx3cr1 ::CreER; RC ::LSL-tdTomato). Characterization of microglia–spine interactions Brain tissue was collected at P15 as previously described. Sections were permeabilized with PBS and 0.5% Triton X-100 for 30 min, and blocked with PBS, 0.3% Triton X-100, and 5% goat serum for 30 min at room temperature. CD68 was immunodetected by ON incubation at 4 °C with primary antibodies (rat anti-CD68 1:500, Serotec) followed by secondary antibodies (goat anti-rat A647, 1:600, Life Technologies) incubation in PBS with 0.3% Triton X-100 and 10% goat serum at 4 °C ON. Secondary dendrites of bright GFP + neurons were imaged in the medial stratum radiatum of CA1 using a Leica SP5 confocal resonant scanner microscope with a × 63/1.4 oil-immersion objective, at a lateral pixel size of 40 nm and an axial step of 130 nm. Images were deconvolved using Huygens software (40 iterations, 0.1 of quality change, theoretical point spread function) and sharpened using ImageJ software (NIH). Interactions were determined after 3D visualization in Imaris as follows: appositions were considered when 20–50% of the spine head surface was covered by microglia, encapsulation when > 70% was covered. Imaging microglia–spine interactions for CLEM Mice were perfused transcardially with PBS and fixed with 2% (w/v) PFA, 2.5% (w/v) glutaraldehyde (TAAB) in 0.1 M phosphate buffer (PB) at P15. After perfusion, brains were dissected and postfixed in 4% PFA in PB 0.1 M ON at 4 °C. Subsequently, 60 µm-thick vibratome (Leica Microsystems) coronal sections were cut and 4',6-diamidino-2-phenylindole (DAPI) stained.
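The apposition/encapsulation criteria above (20–50% versus >70% coverage of the spine-head surface) amount to thresholding a mask-overlap fraction. A minimal sketch with toy boolean masks follows; this illustrates the rule itself, not the Imaris surface-reconstruction pipeline actually used:

```python
import numpy as np

def classify_contact(spine_surface, microglia,
                     apposition=(0.20, 0.50), encapsulation=0.70):
    """Classify a microglia-spine contact from two boolean 3D masks, based
    on the fraction of spine-surface voxels overlapping the microglia mask."""
    n_surface = max(np.count_nonzero(spine_surface), 1)
    frac = np.count_nonzero(spine_surface & microglia) / n_surface
    if frac > encapsulation:
        return "encapsulation", frac
    if apposition[0] <= frac <= apposition[1]:
        return "apposition", frac
    return "unclassified", frac

# Toy example: a 9-voxel spine-head surface, 8 voxels touched by microglia.
spine = np.zeros((5, 5, 5), dtype=bool)
spine[2, 1:4, 1:4] = True
microglia = spine.copy()
microglia[2, 1, 1] = False
print(classify_contact(spine, microglia))  # ('encapsulation', ~0.89)
```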
Hippocampal areas were trimmed and mounted with 1% Low Melting Agarose (Sigma) in PB 0.1 M on glass-bottom dishes with an alphanumeric grid (Ibidi). ROIs containing microglia–spine interactions were imaged at high magnification with a TCS SP5 resonant scanner confocal microscope with a × 63/1.2 water-immersion objective, at a pixel size of 48 nm and a step size of 300 nm. Low-magnification stacks containing the ROI were acquired in bright field, GFP, RFP, and DAPI channels to visualize neurons and microglia together with fiducial capillaries and cell nuclei. A UV-diode laser operating at 405 nm, an Argon laser at 488 nm, and a diode-pumped solid-state laser at 561 nm were used as excitation sources. Subsequent to confocal imaging, the grid-glass bottom was separated from the plastic dish and placed onto a laser microdissection microscope (Leica LMD7000) for laser etching of the ROI. The etched sections were retrieved and stored in 4% PFA in PB 0.1 M at 4 °C. Apposition/encapsulation events in Fig. 2 derive from four ROIs containing four neurons from a total of two animals. Microglial content analysis in Fig. 3 , on the other hand, derives from seven ROIs containing end processes from eight microglia from a total of four animals. Sample preparation for FIB-SEM The sections were processed as described in Maco et al. 24 . Briefly, the sections previously imaged on the confocal were washed in cold sodium cacodylate buffer 0.1 M pH 7.4, postfixed with 1% OsO 4 /1.5% potassium ferrocyanide for 1 h on ice, followed by a second step of 1 h in 1% OsO 4 in sodium cacodylate buffer 0.1 M pH 7.4 on ice. Samples were then rinsed carefully in water and stained “en bloc” with a 1% aqueous solution of uranyl acetate ON at 4 °C, dehydrated with increasing concentrations of ethanol, and infiltrated in a propylene oxide/Durcupan mixture with increasing concentrations of resin. Durcupan embedding was carried out in a flat orientation within a sandwich of ACLAR ® 33 C Films (Electron Microscopy Science) for 72 h at 60 °C. Alternatively, sections were washed in cold sodium cacodylate buffer 0.1 M pH 7.4, postfixed with 2% OsO 4 /1.5% potassium ferrocyanide for 1 h on ice, followed by a step with thiocarbohydrazide for 20 min at room temperature and then a second step of 30 min in 2% aqueous OsO 4 on ice. Samples were then rinsed carefully in water and stained “en bloc” first with a 1% aqueous solution of uranyl acetate ON at 4 °C and then with lead aspartate at 60 °C for 30 min. Subsequently, sections were dehydrated with increasing concentrations of acetone and infiltrated in Durcupan resin ON, followed by a 2 h embedding step with fresh resin. As a pilot experiment, one of the samples (Zeiss, Oberkochen, Fig. 2 ) was processed using a slightly modified protocol based on the application of heavy metal fixatives, stains, and mordanting agents 62 , producing slightly more contrast compared with the other samples. Flat embedded samples were then trimmed to about 1 mm width to fit on the pin for microscopic X-ray computed tomography (MicroCT). Samples were attached to the pin with either double-sided tape or dental wax and mounted into the Bruker SkyScan 1272 for MicroCT imaging. Data were acquired over 180° at a pixel resolution of 1.5–2 µm. Karreman et al. 25 describe in detail how MicroCT data enable the correlation of fluorescence imaging with 3D electron microscopy of voluminous samples. In this experiment, MicroCT revealed the laser-etched markings made at the microdissector microscope, as well as the vasculature.
This vasculature, which could also be seen by negative contrast in the confocal datasets, provided fiducial features to register the various microscopy modalities (MicroCT, and low- and high-magnification confocal data). Using Amira software (FEI Company), 3D models were generated from these microscopy modalities by thresholding and manual segmentation. These volumes could then be registered together by a manual fit to reveal the position of the event visualized by fluorescence confocal microscopy, despite the loss of fluorescence during processing for EM. The registered volumes also allowed precise trimming of the sample for FIB-SEM, where it is necessary for the ROI to be at the surface of the sample or within 5 µm of it (for the trimming procedure, see Karreman et al. 25 ). Each sample was trimmed according to the available features that would assist with later steps of FIB-SEM acquisition, such as positioning the platinum deposition on the trimmed sample surface over the ROI and positioning the imaging area on the cross-section face. For example, the laser markings that were made on one surface of the brain slice gave us only the lateral ROI position, not its depth within the 60-µm brain slice. For this axis, patterns made by the distribution of the vasculature were necessary to pinpoint the position of the protective platinum coat over the ROI. FIB-SEM imaging Registered and trimmed samples were then mounted onto the edge of an SEM stub (Agar Scientific) with silver conductive epoxy (CircuitWorks), with the trimmed surface facing up so that it would be perpendicular to the focused ion beam (FIB). The sample was then sputter coated with gold (180 s at 30 mA) in a Quorum Q150RS coater before being placed in the Zeiss Crossbeam 540 focused ion beam scanning electron microscope (FIB-SEM). Once the ROI was located in the sample, Atlas3D software (Fibics Inc. and Zeiss) was used to perform sample preparation and 3D acquisitions. First, a platinum protective coat of 20 × 20 µm was deposited with 1.5 nA FIB current. The rough trench was then milled to expose the imaging cross-section with 15 nA FIB current, followed by a polish at 7 nA. Once the imaging cross-section was exposed, the features visible there, including vessels and nuclei, were used to correlate with the registered 3D volumes in Amira and confirm the current position relative to the ROI. During the acquisition, lower-resolution keyframes with a large field of view (FOV), from 40 × 40 to 70 × 70 µm, were acquired in order to retain this broader context of the sample. Provided there were enough features close to the ROI, this information helped to position the high-resolution imaging FOV (typically 10 × 10 µm). The 3D acquisition milling was done with 3 nA FIB current. For SEM imaging, the beam was operated at 1.5 kV/700 pA in analytic mode using the EsB detector (1.1 kV collector voltage) at a dwell time of 6 to 8 µs with no line averaging, over a pixel size of 5 × 5 nm and a slice thickness of 8 nm. For the pilot acquisition run at Zeiss, Oberkochen, a large volume was first acquired at low magnification without prior MicroCT, while correlating with the confocal dataset to detect the ROI, which was subsequently imaged at 5 nm isotropic pixel size. FIB-SEM stack segmentation and correlation A single stack file containing the individual FIB images was aligned in ImageJ with the help of the Linear Stack Alignment with SIFT plugin. The grayscale look-up table was inverted and the stack was binned 2 × in both lateral and axial planes.
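The 2× binning step just mentioned is a simple block average; a minimal numpy sketch (assuming an aligned stack in memory, cropped to even dimensions) is shown below. At the acquisition settings given above, binning doubles the effective voxel size from 5 × 5 × 8 nm to 10 × 10 × 16 nm:

```python
import numpy as np

def bin2x(stack):
    """Block-average a (z, y, x) stack by a factor of 2 along all axes,
    cropping odd dimensions first."""
    z, y, x = (d - d % 2 for d in stack.shape)
    s = stack[:z, :y, :x].astype(np.float32)
    return s.reshape(z // 2, 2, y // 2, 2, x // 2, 2).mean(axis=(1, 3, 5))

# A small random stack stands in for the real data here; the full dataset
# (up to ~2300 sections) shrinks 8-fold in voxel count after binning.
stack = np.random.randint(0, 256, size=(10, 64, 64)).astype(np.uint8)
print(bin2x(stack).shape)  # (5, 32, 32)
```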
Microglia and dendrites of interest were located based on their xyz coordinates within the ROI and from fiducials correlated between the electron and confocal microscopy datasets. The segmentation was carried out manually using IMOD software, and a 3D model was generated and matched to the confocal dataset to confirm the correlation. The Thy1 ::EGFP neuron and microglia were identified upon correlation of the light and electron microscopy datasets. Complete presynaptic bouton inclusions were identified by the presence of 40 nm vesicles typical of presynaptic machinery. Partial presynaptic bouton and axonal material inclusions were identified after 3D reconstruction of the parent axon, based on the presence of presynaptic machinery: clusters of 40 nm presynaptic vesicles and apposition to a PSD. Blender software was used to generate animations of the FIB-SEM dataset segmentation and 3D reconstruction. Preparation of hippocampal slice culture Organotypic hippocampal slice cultures were prepared using the air/medium interface method 63 . Briefly, mice were decapitated at P4 and hippocampi were dissected out in cold dissecting medium (Hanks' balanced salt solution 1 × (Gibco), penicillin/streptomycin 1 ×, HEPES (Gibco) 15 mM, glucose (Sigma) 0.5%). Transverse sections of 300 µm thickness were cut using a tissue chopper. Slices were laid on culture-inserts (Millipore) in pre-warmed six-well plates containing 1.2 mL of maintaining medium (minimal essential medium (MEM) 0.5 × (Gibco), Basal Medium Eagle 25%, horse serum 25%, penicillin/streptomycin 1 × , GlutaMAX 2 mM, glucose 0.65%, sodium bicarbonate 7.5%, ddH 2 O qsp). Medium was replaced after 24 h and then every 2 days. For culture of Thy1 ::EGFP; Cx3cr1:: CreER; RC ::LSL-tdTomato brain slices, 98% Z-isomers hydroxy-tamoxifen (OHT) was added to the maintaining medium at 0.1 µM during the first 24 h. After preparation, cultures were maintained for up to 21 days in vitro (DIV21) in an incubator at 35 °C and 5% CO 2 . The morphology of microglia was inspected in Cx3cr1:: GFP slices at the indicated time-points upon fixation in PFA 4% for 1 h at room temperature, PBS-washed, mounted with mowiol, and imaged on a Leica SP5 confocal resonant scanner microscope with a 63 × /1.4 oil-immersion objective. Light sheet live imaging Live imaging of microglia and synapses in hippocampal slice cultures (DIV10–19) was performed using a Z1 light sheet microscope (Zeiss). The imaging chamber was set at 35 °C and 5% CO 2 , and filled with imaging medium (MEM without phenol red 0.5 × , horse serum 25%, penicillin/streptomycin 1 × , GlutaMAX 2 mM, glucose 0.65%, sodium bicarbonate 7.5%, ddH 2 O qsp) 2 h before the imaging session, to allow the system to equilibrate and the medium to reach pH 7. Low melting point agarose was prepared at 2% with imaging medium and incubated at 35 °C and 5% CO 2 for 30 min to reach the proper pH. The membrane of the Millipore insert containing the slice of interest was cut around the slice and laid onto the equilibrated liquid agarose before polymerization at 4 °C for 1 min. Although the tissue is unlikely to have dropped to this temperature, it has been reported that following incubation at 4 °C for longer periods of time (> 30 min) microglia soma enlarged, but recovered normal morphology within 2 h 64 . Therefore, we waited 2 h for the slice to recover before imaging and carried out control experiments showing that microglia motility (Supplementary Fig. 3 ) was similar to that found by two-photon imaging in vivo.
The slice mounted on agarose was then placed in incubation at 35 °C and 5% CO 2 for a further 30 min. The polymerized agarose containing the slice was then inserted into an FEP tube fitted onto a glass capillary, and the slice was gently pushed so that it was exposed 1 mm outside the FEP tube. The capillary was then placed on the microscope holder and the slice was immersed in the imaging medium of the chamber 1 h before imaging for stabilization. For all imaging sessions, microglia and neurons were selected for their brightness and position in the stratum radiatum of CA1, 2 to 30 µm from the slice surface. Imaging was performed for 2–3 h using a 60 × /NA 1.0 water-immersion objective, with a lateral pixel size of 130 nm and an axial step of 480 nm. For microglia–postsynaptic structure interactions (two-color imaging), the 488 and 561 nm lasers were used for simultaneous acquisition of the GFP and tdTomato signals using 505–545 and 575–615 nm band-pass filters on two cameras, at a rate of one frame/45–60 s. For microglia–pre/postsynaptic structure interactions (three-color imaging), two channel configurations were switched rapidly between frames: the 488 and 561 nm lasers were used for simultaneous acquisition of the GFP and tdTomato signals using a 505–545 nm band-pass filter and a 585 nm long-pass filter, and iRFP was imaged using a 638 nm laser line and the 585 nm long-pass filter, at a rate of 1 frame/90 s. The emission spectra of tdTomato and iRFP overlap significantly and are both efficiently detected in the presence of the 585 nm long-pass filter. However, we were able to image them separately because their excitation spectra are distinct, with the 561 nm laser exciting primarily tdTomato (< 20% iRFP peak excitation) and the 638 nm laser exciting exclusively iRFP. This allowed us to image iRFP and tdTomato separately by alternating exposure to the 561 and 638 nm light sources with a fixed 585 nm long-pass filter detection system. Notably, no detectable iRFP signal was seen during 561 nm illumination, most likely because iRFP is significantly dimmer than tdTomato. All datasets were deconvolved using Zen software and corrected for drift in ImageJ using a script created by Albert Cardona and Robert Bryson-Richardson 65 and modified by Christian Tischer (EMBL Heidelberg). Analysis of microglia motility TdTomato signal intensity was measured in microglial processes and normalized across all datasets. Noise was measured outside microglia and removed by thresholding at the measured value + 40%. Motility was assessed by analyzing protrusions over 1-min intervals. Extending and retracting protrusions were counted and measured at 0, 60, and 120 min after the beginning of the session to confirm imaging stability. Labeling of presynaptic structures The AAV- rSyn ::iRFP670 virus was generated by cloning AAV vector serotype 2 ITRs with a rat Synapsin promoter (a gift from Hirai and colleagues 66 ), an iRFP670 coding sequence (a gift from Shcherbakova and Verkhusha 67 , Addgene plasmid 45457), WPRE, and a human growth hormone polyA sequence. Viral production and purification were performed according to McClure et al. 68 with minor modifications. Briefly, 15 × 15 cm dishes of HEK293T cells were transfected with the pAAV- rSyn ::iRFP670 plasmid, together with pAAV1, pAAV2, and the helper plasmid pFdelta6, using PEI (Sigma, 408727). Seventy-two hours after transfection, the cells were collected and lysed according to the protocol, and the virus-containing cell lysate was loaded onto HiTrap Heparin columns (GE Biosciences 17-0406-01).
After a series of washes, the virus was eluted from the heparin column at a final concentration of 450 mM NaCl. Finally, the virus was concentrated using Amicon Ultra-15 centrifugal filter units (Millipore UFC910024), and semi-quantitative titering of viral particles was performed by Coomassie staining against standards. CA3 neurons of hippocampal slices were infected with AAV- rSyn ::iRFP670 the day following the preparation of the culture, by a local injection of 0.1 µL of virus in the pyramidal layer of CA3 using a glass capillary under a stereomicroscope. Expression of iRFP was observed exclusively in CA3 neurons as early as 5 days post infection, and expression in CA3–CA1 Schaffer collaterals reached a satisfactory level 10 days after infection. Analysis of microglia–presynaptic compartment interactions Contacts between microglia and boutons were detected by scouring iRFP + axons in three dimensions for the entire duration of the imaging session. Elimination events were defined as clear engulfment of iRFP + material by microglia from an iRFP + bouton. Latency before engulfment was measured as the time of contact between microglia and the bouton before any iRFP + material was seen internalized. Inclusions were defined as iRFP + structures within microglia for which the origin could not be determined, as they were present from the beginning of the imaging session. Analysis of microglia–postsynaptic compartment interactions Contacts between microglia and the postsynaptic compartment were detected by scouring dendritic shafts of GFP + neurons in three dimensions for the entire duration of the imaging session. All z -planes containing the contacted spine over time were axially projected. GFP intensity was measured in the dendritic shaft and normalized across all the datasets. GFP signal noise and residual microglial YFP signal were measured in microglial processes and thresholded out at the measured value + 40%. For all spines that were seen to be contacted at least once by microglia, we measured the spine length, the size of the head, and the extent of contact between the spine and microglia at each timepoint of the entire imaging session. The length of the spine was measured from the dendritic shaft to the tip of the spine head, the change in head size was measured as the variation in GFP signal at the spine head, and the extent of contact was measured as the percentage of GFP signal at the spine head colocalizing with the tdTomato signal. We then performed cross-correlation analysis over time between the extent of contact and the variation in spine length (correlation with head size not shown), using bootstrap resampling in MATLAB. Statistical analysis All data are represented as mean ± SEM. To determine statistical significance, the data distribution was first tested for variance and normality, and the corresponding t -test (parametric or non-parametric) was performed using GraphPad Prism software. For multiple-group analysis (motility analysis over time), two-way ANOVA was performed using GraphPad Prism software. Cross-correlation analysis was performed using MATLAB software. Data availability All relevant data are available from the authors.
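The cross-correlation analysis described above was run in MATLAB; the sketch below is a Python illustration of the same idea, with simulated per-spine traces of contact extent and spine length and a bootstrap confidence band over spines. It is a re-implementation of the approach as described, not the authors' script:

```python
import numpy as np

def xcorr_normalized(a, b, max_lag):
    """Normalized cross-correlation of two equal-length traces at lags
    -max_lag..+max_lag; a positive lag means b follows a in time."""
    a = (a - a.mean()) / (a.std() + 1e-12)
    b = (b - b.mean()) / (b.std() + 1e-12)
    n = len(a)
    return np.array([np.mean(a[max(0, -k):n - max(0, k)] *
                             b[max(0, k):n - max(0, -k)])
                     for k in range(-max_lag, max_lag + 1)])

def bootstrap_mean_ci(curves, n_boot=1000, alpha=0.05, seed=0):
    """Bootstrap confidence band for the mean correlogram across spines."""
    rng = np.random.default_rng(seed)
    curves = np.asarray(curves)                  # (n_spines, n_lags)
    idx = rng.integers(0, len(curves), size=(n_boot, len(curves)))
    means = curves[idx].mean(axis=1)             # (n_boot, n_lags)
    lo, hi = np.percentile(means, [100 * alpha / 2,
                                   100 * (1 - alpha / 2)], axis=0)
    return curves.mean(axis=0), lo, hi

# Simulated data: 13 spines, 40 timepoints; spine length lags contact
# extent by two frames in this toy example.
rng = np.random.default_rng(1)
contact = rng.random((13, 40))
length = np.roll(contact, 2, axis=1) + 0.3 * rng.random((13, 40))
curves = [xcorr_normalized(c, l, max_lag=5) for c, l in zip(contact, length)]
mean_cc, lo, hi = bootstrap_mean_ci(curves)
print(np.argmax(mean_cc) - 5)  # peak near lag +2: length change follows contact
```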
For the first time, EMBL researchers have captured microglia pruning synaptic connections between brain cells. Their findings show that the special glial cells help synapses grow and rearrange, demonstrating the essential role of microglia in brain development. Nature Communications will publish the results on March 26. Around one in 10 brain cells are microglia. Cousins of macrophages, they act as the first and main contact in the central nervous system's active immune defense. They also guide healthy brain development. Researchers have proposed that microglia prune synapses as an essential step during early circuit refinement. But until now, no one had observed this activity. Microglia make synapses stronger Laetitia Weinhard from the Gross group at EMBL Rome set out on a massive imaging study to observe this process in action in the mouse brain, in collaboration with the Schwab team at EMBL Heidelberg. "Our findings suggest that microglia are nibbling synapses as a way to make them stronger, rather than weaker," says Cornelius Gross, who led the work. Around half of the time that microglia contact a synapse, the synapse head sends out thin projections called filopodia to meet them. In one particularly dramatic case—as seen in the accompanying image—15 synapse heads extended filopodia toward a single microglia as it picked on a synapse. "As we were trying to see how microglia eliminate synapses, we realised that microglia actually induce their growth most of the time," Laetitia Weinhard explains. It turns out that microglia might underlie the formation of double synapses, in which the terminal end of a neuron releases neurotransmitters onto two neighboring partners instead of one. This process can support effective connectivity between neurons. Weinhard says, "This shows that microglia are broadly involved in structural plasticity and might induce the rearrangement of synapses, a mechanism underlying learning and memory." Perseverance Since this was the first attempt to visualise this process in the brain, the current paper represents five years of technological development. The team tried three different state-of-the-art imaging systems before succeeding. Finally, by combining correlative light and electron microscopy (CLEM) and light sheet fluorescence microscopy—a technique developed at EMBL—they were able to make the first movie of microglia eating synapses. "This is what neuroscientists have fantasised about for years, but nobody had ever seen before," says Cornelius Gross. "These findings allow us to propose a mechanism for the role of microglia in the remodeling and evolution of brain circuits during development." In the future, he plans to investigate the role of microglia in brain development during adolescence and the possible link to the onset of schizophrenia and depression.
10.1038/s41467-018-03566-5
Chemistry
Mobile device could make it easier to predict and control harmful algal blooms
Zoltán Gӧrӧcs et al. A deep learning-enabled portable imaging flow cytometer for cost-effective, high-throughput, and label-free analysis of natural water samples, Light: Science & Applications (2018). DOI: 10.1038/s41377-018-0067-0 Journal information: Light: Science & Applications
http://dx.doi.org/10.1038/s41377-018-0067-0
https://phys.org/news/2018-09-mobile-device-easier-algal-blooms.html
Abstract We report a deep learning-enabled field-portable and cost-effective imaging flow cytometer that automatically captures phase-contrast color images of the contents of a continuously flowing water sample at a throughput of 100 mL/h. The device is based on partially coherent lens-free holographic microscopy and acquires the diffraction patterns of flowing micro-objects inside a microfluidic channel. These holographic diffraction patterns are reconstructed in real time using a deep learning-based phase-recovery and image-reconstruction method to produce a color image of each micro-object without the use of external labeling. Motion blur is eliminated by simultaneously illuminating the sample with red, green, and blue light-emitting diodes that are pulsed. Operated by a laptop computer, this portable device measures 15.5 cm × 15 cm × 12.5 cm, weighs 1 kg, and compared to standard imaging flow cytometers, it provides extreme reductions of cost, size and weight while also providing a high volumetric throughput over a large object size range. We demonstrated the capabilities of this device by measuring ocean samples at the Los Angeles coastline and obtaining images of their micro- and nanoplankton composition. Furthermore, we measured the concentration of a potentially toxic alga ( Pseudo-nitzschia ) at six public beaches in Los Angeles and achieved good agreement with measurements conducted by the California Department of Public Health. The cost-effectiveness, compactness, and simplicity of this computational platform might lead to the creation of a network of imaging flow cytometers for large-scale and continuous monitoring of the ocean microbiome, including its plankton composition. Introduction Plankton form the base of the oceanic food chain, and thus they are important components of the whole marine ecosystem. Phytoplankton are responsible for approximately half of the global photoautotrophic primary production 1 , 2 . High-resolution mapping of the composition of phytoplankton over extended periods is very important but rather challenging because the concentration and composition of species rapidly change as a function of space and time 3 . Furthermore, the factors governing these changes are not fully understood 4 , and phytoplankton population dynamics are chaotic 5 . The changes in the seasonal bloom cycle can also have major environmental 6 and economic effects 7 . The vast majority of phytoplankton species are not harmful, but some species produce neurotoxins that can enter the food chain, accumulate, and poison fish, mammals, and ultimately humans. Notable examples include Karenia brevis , which produces brevetoxin and causes neurotoxic shellfish poisoning 8 ; Alexandrium fundyense , which generates saxitoxin and causes paralytic shellfish poisoning; Dinophysis acuminata , which produces okadaic acid and results in diarrhetic shellfish poisoning 9 ; and Pseudo-nitzschia , which produces domoic acid and is responsible for amnesic shellfish poisoning, potentially even leading to death 10 , 11 . Currently, monitoring of the concentrations of these species in coastal regions, including in California (USA), is usually performed by manual sample collection from coastal waters using plankton nets, followed by transportation of the sample to a central laboratory for light microscopy-based analysis 12 , which is very tedious, slow, and expensive and requires several manual steps performed by professionals.
As an alternative to light microscopy-based analysis, flow cytometry has been used to analyze phytoplankton samples for more than 35 years 13 . The technique relies on using a sheath flow to confine the plankton sample to the focal point of an illuminating laser beam and measuring the forward and side scattering intensities of each individual object/particle inside the sample volume. To aid classification, flow cytometry is usually coupled with a fluorescence readout to detect the autofluorescence of chlorophyll, phycocyanin, and phycoerythrin found in algae and cyanobacteria. Several field-portable devices based on flow cytometry have been successfully used for analyzing nano- and pico-phytoplankton distributions in natural water samples 14 , 15 , 16 . However, taxonomic identification based solely on scattering and fluorescence data is usually not feasible in flow cytometry, and thus these devices are coupled with additional microscopic image analysis 17 or need to be enhanced with some form of imaging 18 , 19 . Consequently, imaging flow cytometry has become a widely used technique 20 in which a microscope objective is used to image the sample (e.g., algae) within a fluidic flow. The image capture is triggered by a fluorescence detector, and thus objects with detectable autofluorescence are imaged. Some of the widely utilized and commercially available imaging flow cytometers include the FlowCam 21 (Fluid Imaging Technologies), Imaging FlowCytobot 22 (McLane Research Laboratories), and CytoSense 23 (CytoBuoy b.v.). Although these systems are able to perform imaging of the plankton in a flow, they still have some important limitations. The use of a microscope objective lens imposes a strong trade-off between the image resolution and the volumetric throughput of these systems; therefore, to obtain high-quality images, the measured sample volume is limited to a few milliliters per hour (e.g., 3–15 mL/h). Using lower-magnification objective lenses can scale up this low throughput by approximately tenfold at the expense of image quality. In addition, the shallow depth-of-field of the microscope objective necessitates hydrodynamic focusing of the liquid sample into a few-µm-thick layer using a stable sheath flow. This also restricts the size of the objects that can be imaged (e.g., to <150 µm) as well as the flow velocity and throughput of the system, thus requiring the use of additional expensive techniques such as acoustic focusing 22 . As a result of these factors, currently existing imaging flow cytometers used in environmental microbiology are fairly bulky (weighing, e.g., 9–30 kg) and costly (>$40,000–$100,000), limiting their widespread use. Holographic imaging of plankton samples provides a label-free alternative to these existing fluorescence-based approaches; in fact, its use in environmental microbiology started more than 40 years ago using photographic films 24 and subsequently continued via digital cameras and reconstruction techniques 25 . Holography provides a volumetric imaging technique that uses coherent or partially coherent light to record the interference intensity pattern of an object 26 . This hologram can subsequently be reconstructed to digitally bring the object into focus. The hologram contains information on the complex refractive index distribution of the object, and consequently, not only the absorption but also the phase distribution of the sample can be retrieved. There are several implementations of digital holography for imaging a fluidic flow.
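Before classifying these implementations, it may help to make the digital refocusing step just described concrete: in-line holograms are commonly refocused by angular-spectrum back-propagation of the recorded field. A minimal sketch follows, with hypothetical parameter values (wavelength, pixel pitch, object height); it illustrates only the generic refocusing step, not the deep learning-based phase recovery this paper introduces below.

```python
import numpy as np

def angular_spectrum_propagate(field, dz, wavelength, pixel_size):
    """Propagate a complex field by distance dz (same units as wavelength
    and pixel_size) using the angular spectrum method."""
    ny, nx = field.shape
    fx = np.fft.fftfreq(nx, d=pixel_size)
    fy = np.fft.fftfreq(ny, d=pixel_size)
    FX, FY = np.meshgrid(fx, fy)
    arg = 1.0 / wavelength**2 - FX**2 - FY**2
    kz = 2.0 * np.pi * np.sqrt(np.maximum(arg, 0.0))
    H = np.exp(1j * kz * dz) * (arg > 0)     # evanescent components dropped
    return np.fft.ifft2(np.fft.fft2(field) * H)

# Hypothetical values: 530 nm illumination, 1.12 µm pixels, object 800 µm
# above the sensor. The square root of the hologram intensity serves as an
# initial (phase-free) field estimate; back-propagating by -z0 refocuses it.
wavelength, pixel, z0 = 0.530, 1.12, 800.0   # all in µm
hologram = np.ones((512, 512))               # stand-in for a measured frame
refocused = angular_spectrum_propagate(np.sqrt(hologram), -z0, wavelength, pixel)
amplitude, phase = np.abs(refocused), np.angle(refocused)
```

Because the intensity-only measurement discards phase, this naive refocusing suffers from twin-image artifacts, which is precisely the problem that iterative or learned phase-recovery methods address.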
We can classify these digital holographic microscopy systems in terms of the presence of an external reference wave (in-line 27 or off-axis 28 ), the magnification of the imaged volume, and the utilization of a lens 29 or spherical wavefront 30 for illumination. Off-axis systems can directly retrieve the phase information from the captured hologram; however, their space-bandwidth product and image quality are generally worse than those of in-line systems 26 . Commercially available holographic imaging flow cytometer systems also exist, such as the LISST-Holo2. This platform is a monochrome system (i.e., it does not provide color information) and offers relatively poor image quality compared to traditional imaging flow cytometers. The throughput and spatial resolution are coupled in this device, and therefore it can achieve high-throughput volumetric imaging at the cost of limited resolution (~25 µm), which makes it useful for detecting and identifying only larger organisms. Systems with higher resolution and better image quality that use microscope objectives in the optical path have also been described in the literature 28 , 29 . However, the use of microscope objective lenses not only makes these systems more expensive but also limits the achievable field-of-view (FOV) and depth-of-field, thereby drastically reducing the throughput of the system to, e.g., ~0.8 mL/h 31 . To provide a powerful and yet mobile and inexpensive tool for environmental microbiology-related research, here we introduce an in-line holographic imaging flow cytometer that is able to automatically detect label-free objects inside a continuously flowing water sample and provide color images of them in real time at a throughput of ~100 mL/h. This high-throughput imaging flow cytometer weighs 1 kg with a size of 15.5 cm × 15 cm × 12.5 cm (see Fig. 1 ) and is based on a deep learning-enabled phase-recovery and holographic-reconstruction framework running on a laptop that also controls the device. Compared with other imaging flow cytometers, the presented device is significantly more compact, lighter, and extremely cost-effective, with parts costing less than $2500, only a fraction of the cost of existing imaging flow cytometers. This device continuously examines the liquid pumped through a 0.8-mm-thick microfluidic chip without any fluorescence triggering or hydrodynamic focusing of the sample, which also makes it robust and very simple to operate, with a very large dynamic range in terms of the object size, from microns to several hundreds of microns. We demonstrated the capabilities of our field-portable holographic imaging flow cytometer by imaging the micro- and nanoplankton composition of ocean samples along the Los Angeles coastline. We also measured the concentration of the potentially harmful alga Pseudo-nitzschia , achieving good agreement with independent measurements conducted by the California Department of Public Health (CDPH). These field results provide a proof-of-principle demonstration of our compact, inexpensive and high-throughput imaging flow cytometer system, which might form the basis of a network of imaging cytometers that can be deployed for large-scale, continuous monitoring and quantification of the microscopic composition of natural water samples. Fig. 1: Photos and schematic of the imaging flow cytometer device. The water sample is constantly pumped through the microfluidic channel at a rate of 100 mL/h during imaging.
The illumination is emitted simultaneously from red, green, and blue LEDs in 120-µs pulses and is triggered by the camera. Two triple-bandpass filters are positioned above the LEDs, and the angle of incidence of the light on the filters is adjusted to create a <12 nm bandpass in each wavelength to achieve adequate temporal coherence. The light is reflected from a convex mirror before reaching the sample to increase its spatial coherence while allowing a compact and lightweight optical setup Full size image Results We tested our imaging flow cytometer with samples obtained from the ocean along the Los Angeles coastline. The samples were imaged at a flow rate of 100 mL/h, and the raw full FOV image information was saved on the controlling laptop. Plankton holograms were segmented automatically and reconstructed by the device using a deep convolutional network, and phase-contrast color images of plankton were calculated and saved to the local laptop controlling the imaging flow cytometer through a custom-designed graphical user interface (GUI). Figure 2 highlights the performance of this automated deep learning-enabled reconstruction process and the image quality achieved by the device, showcasing several plankton species with both their initial segmented raw images (holograms) and the final phase-contrast color images. We were able to identify most of the plankton types detected by our device based on the reconstructed images, as detailed in the captions of Fig. 2 . An additional selection of unidentified plankton imaged in the same ocean samples is also shown in Fig. 3 . A portion of the water sample from each measurement was also sent to CDPH for comparative microscopic analysis by their experts, and the qualitative composition of different species found in each water sample was in good agreement with our measurements. Furthermore, to perform a quantitative comparison against the routine analysis performed by CDPH, we selected the potentially toxic Pseudo-nitzschia alga and evaluated its relative abundance at six different measurement locations (i.e., public beaches) along the Los Angeles coastline. Our imaging flow cytometer results, summarized in Fig. 4 , also showed good agreement with the analysis performed by CDPH. CDPH analyzes the relative abundance of species based on microscopic scanning of a slide containing the settled objects from the water sample of interest, whereas our analysis is based on imaging of the liquid sample itself during its flow. Differences in sample preparation, imaging, and data-processing techniques might cause some systematic differences between the two Pseudo-nitzschia composition metrics reported in Fig. 4 . However, both methods are self-consistent, and therefore the relative differences that are observed in Pseudo-nitzschia composition among different beaches are comparable, illustrating good agreement between our results and the analysis performed by CDPH. Fig. 2: The image quality of the flow cytometer allows the identification of plankton. Examples of various ocean plankton detected by our imaging flow cytometer at the Los Angeles coastline, represented by their a raw holograms and b phase-contrast reconstructions following phase recovery.
The organisms were identified as (1) Chaetoceros lorenzianus , (2) Chaetoceros debilis , (3) Ditylum brightwellii , (4) Lauderia , (5) Leptocylindrus , (6) Pseudo-nitzschia , (7) Ceratium fusus , (8) Ceratium furca , (9) Eucampia cornuta , (10) Bacteriastrum , (11) Hemiaulus , (12) Skeletonema , (13) Ciliate , (14) Cerataulina , (15) Guinardia striata , (16) Lithodesmium , (17) Pleurosigma , (18) Protoperidinium claudicans , (19) Protoperidinium steinii , (20) Prorocentrum micans , (21) Lingulodinium polyedra , (22) Dinophysis , (23) Dictyocha fibula (silica skeleton), and (24) Thalassionema . The yellow rectangle in a-1 represents the segmented and 45°-rotated area corresponding to the reconstructed images Full size image Fig. 3: Reconstructed images of various phytoplankton and zooplankton. Phase-contrast color images depicting the plankton found near the Los Angeles coastline and imaged by our flow cytometer at a flow rate of 100 mL/h Full size image Fig. 4: Prevalence of Pseudo-nitzschia in the ocean along the Los Angeles coastline on January 31, 2018. Samples were collected according to California Department of Public Health (CDPH) protocols. A portion of each sample was analyzed by the imaging flow cytometer system, and the remainder was sent to CDPH for subsequent analysis, which showed good agreement with our measurements. The inset shows phase-contrast reconstruction examples of Pseudo-nitzschia , an alga that can produce domoic acid, a dangerous neurotoxin that causes amnesic shellfish poisoning Full size image We also demonstrated the field portability and on-site operation of our imaging flow cytometer by performing experiments at the Redondo Beach pier over a duration of 8 h. The flow cytometer itself was powered by a 5-V battery pack and could run for several hours. We utilized a 500-Wh 19-V external battery pack to power the laptop for the duration of our field experiments (from 6:30 am until 2:30 pm). In these field experiments, we measured the time evolution of the total plankton concentration in the ocean during the morning hours and found that the amount of microplankton in the top 1.5 m of the water increased during the day, possibly due to vertical migration 32 , 33 (see Fig. 5 ). We also manually counted the number of Pseudo-nitzschia found in these samples and observed a peak in the morning (at ~8:30 am) and a steady decline thereafter (Fig. 5 ); in general, these trends are rather complicated to predict since they are influenced by various factors, such as the composition of the local microbiome, tides and upwelling/downwelling patterns 34 , 35 . These results demonstrate the capability of our portable imaging flow cytometer to periodically measure and track the plankton composition and concentration of water samples on site for several hours without the need for connection to a power grid. Fig. 5: Field test results from a series of measurements at Redondo Beach on April 17, 2018. We sampled the top 1.5 m of the ocean every 2 h and measured on-site the variation in the plankton concentration over time. The measurements started after sunrise (6:21 am), and each sample was imaged on-site using the flow cytometer. The results showed an increase in the total particle count during the day, whereas the number of Pseudo-nitzschia showed a peak during the morning hours Full size image Discussion The throughput of an imaging flow cytometer is determined by several factors, but most importantly it is governed by the required image quality.
We designed our portable imaging flow cytometer to achieve the highest resolution allowed by the pixel size of the image sensor. This resulted in a tight photon budget, owing to the loss of illumination intensity incurred in achieving sufficient spatial and temporal coherence over the sample volume and to the pulsed illumination required to eliminate motion blur. Because of the fast flow speed of the objects within the sample channel, pixel super-resolution 36 , 37 approaches could not be used to improve the resolution of the reconstructed images to the subpixel level. We conducted our experiments at 100 mL/h; however, at the cost of some motion blur, this throughput could be quadrupled without any modification to the device. It could be increased even more by using a thicker (e.g., >1 mm) microfluidic channel. To demonstrate this, we imaged an ocean sample with increased throughputs of up to 480 mL/h (see Fig. 6 ). The obtained reconstructions show that the imaged alga ( Ceratium furca ) still remains easily recognizable despite the increased flow speed. Fig. 6: Effect of increasing the liquid flow speed in the system on the image quality. The relative flow speed profile inside the rectangular channel cross-section is depicted in the top left (see the Methods section). The measurements were made on an ocean sample containing a high concentration of Ceratium furca , which was therefore used as the model organism for this test. The sample was tested at various flow speeds above 100 mL/h with a constant 120-µs illumination pulse length. We selected the objects located inside the channel near the maximum-flow-velocity regions, and their locations are depicted as red dots. a – e Reconstructed intensities corresponding to different flow rates are shown. The flow rate (black) and the theoretically calculated displacement during the illumination pulse (red) are also shown Full size image In addition to the physical volumetric throughput, the processing speed of the controlling laptop can also be a limiting factor, mainly affecting the maximum density of the sample that can be processed in real time. Our device design achieves real-time operation; i.e., the computer processes the information faster than the image sensor provides it to avoid overflowing the memory. Currently, the device can be run in three modes depending on the sample density. First, we can acquire and save the full FOV holograms and perform all reconstruction and phase-recovery steps after the measurement, which is a necessary approach for high-concentration samples (e.g., >2000–3000 objects/mL). Even denser samples can also be analyzed by our device by, e.g., diluting them accordingly or by lowering the throughput. Second, we can reconstruct, but not phase-recover, the detected objects during the measurement. At present, the image-segmentation and reconstruction procedure takes ~320 ms for each full FOV frame, during which seven objects per frame can be reconstructed with parallel computing on a GTX 1080 GPU. The major computational operations are (1) segmentation of the full FOV hologram for object detection (~70 ms), (2) holographic autofocusing and reconstruction (~12 ms/object), and (3) transfer of the final amplitude and phase images (8 bit, 1024 × 1024 pixels × 3 color channels) from the device (i.e., GPU) to the host (i.e., central processing unit) and saving of the images on an internal solid-state drive (~10–20 ms per object).
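These per-object timings set the maximum sample density that each mode can handle in real time. The following back-of-envelope sketch, assuming the ~3 frames per second rate reported in the Methods and ignoring overheads such as Bayer resampling and duplicate tracking, reproduces the order of magnitude of the density limits quoted in the next paragraph; it is an illustrative Python calculation, not the device's scheduling code.

```python
# Back-of-envelope check of the real-time density limits discussed in the text.
# Timing constants are those quoted above; the ~3 fps frame rate is the value
# reported later in the Methods. Illustrative only, not the device's code.

FLOW_RATE_ML_PER_H = 100.0
FRAME_RATE_FPS = 3.0

T_SEGMENT_PER_FRAME = 0.070   # s, full-FOV segmentation
T_RECON_PER_OBJECT = 0.012    # s, autofocusing + reconstruction
T_SAVE_PER_OBJECT = 0.015     # s, GPU-to-host transfer + saving (10-20 ms)
T_PHASE_PER_OBJECT = 0.250    # s, deep learning-based phase recovery

ml_per_second = FLOW_RATE_ML_PER_H / 3600.0  # ~0.028 mL of sample imaged per second

def max_realtime_density(per_object_s):
    """Largest object density (objects/mL) for which one second of acquisition
    costs at most one second of processing."""
    budget_s = 1.0 - FRAME_RATE_FPS * T_SEGMENT_PER_FRAME  # time left for objects
    return (budget_s / per_object_s) / ml_per_second

# Mode 2: reconstruction without phase recovery (~27 ms/object in total).
print(f"mode 2 limit: ~{max_realtime_density(T_RECON_PER_OBJECT + T_SAVE_PER_OBJECT):.0f} objects/mL")
# Mode 3: the ~250 ms/object phase-recovery step dominates.
print(f"mode 3 limit: ~{max_realtime_density(T_PHASE_PER_OBJECT + T_RECON_PER_OBJECT + T_SAVE_PER_OBJECT):.0f} objects/mL")
```

The mode 3 estimate (~100 objects/mL) matches the figure given below, while the mode 2 estimate lands within a factor of ~1.5 of the quoted ~700 objects/mL, the gap being attributable to the overheads ignored here.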
Consequently, for reconstruction but not phase recovery of objects, the device can image, in real time, samples with ~700 objects/mL at a flow rate of 100 mL/h. The third mode of operation of our device involves performing both the image-reconstruction and phase-recovery steps for all flowing objects during the measurement. The deep learning-based phase-recovery step is currently the most intensive part of our algorithm, with a runtime of ~250 ms/object. Thus, if real-time phase recovery is necessary in this third mode of operation, it restricts the sample density to ~100 objects/mL at a flow rate of 100 mL/h. Since the performance of GPUs increases by, on average, 1.5× per year, these computational performance restrictions will be partially overcome over time. Furthermore, we have recently shown that it is possible to simultaneously focus all objects in a hologram using a convolutional neural network 38 that extends the depth-of-field of holographic reconstruction by >25-fold compared to conventional approaches. This would allow the phase-recovery, autofocusing and image-reconstruction steps to be combined into a single neural network, which would make the computation time for the full FOV independent of the density of the particles, thus enabling real-time imaging of highly dense fluidic samples. We tested this approach to reconstruct micro-objects in our 800-µm-thick channel volume and found that it gives good results regardless of the object's height inside the channel (see Supplementary Figure S1 ). Since static objects are removed from our FOV using the digital background-subtraction step, we can tolerate some loss of light transmission due to potential fouling. Since our instrument is based on holography, the amplitude and phase information of a flowing object are dispersed over a larger diffraction pattern at the detector plane, making it robust to potential alterations caused by other small objects in the light path. Furthermore, since the flow chamber of our imaging flow cytometer is disposable, it can be easily replaced if needed; for this purpose, an evaluation of the background image can be used to automatically alert users to the need for replacement. Although our current prototype is a field-portable imaging flow cytometer, it is not fully waterproof and operates above the water surface. This prototype can operate up to 100 m from the controlling laptop by simply changing the USB3 camera connection to GigE and constructing a long-range microcontroller communication setup similar to that of an OpenROV 39 submersible platform. Owing to its low hardware complexity in comparison with other imaging flow cytometer technologies, the component cost of the system is very low (<$2500), and with large-volume manufacturing, it could be built for less than $760 (see Supplementary Table S1 ). This remarkable cost-effectiveness opens up various exciting opportunities for environmental microbiology research and could allow the creation of a network of computational imaging cytometers at an affordable price point for large-scale and continuous monitoring of ocean plankton composition and the ocean microbiome in general. Materials and methods Optical system Our imaging flow cytometer uses a color image sensor with a pixel size of 1.4 µm (Basler aca4600-10uc). The housing of the camera is removed, and the circuit is rearranged to allow the sample holder to be placed in direct contact with the protective cover glass of the image sensor (see Fig. 1 ).
Illumination of the holographic microscope is provided by the red, green, and blue emitters of a light-emitting diode (LED) module (LedEngin LZ4-04MDPB). The spatial and temporal coherence of the light emitted by the LEDs is increased to achieve the maximum resolution allowed by the sensor pixel size. The spatial coherence is adjusted by using a convex mirror (Edmund Optics #64-061) to increase the light path. The LED light is also spectrally filtered by two triple-bandpass optical filters (Edmund Optics #87–246, Chroma Inc. 69015m) to increase the temporal coherence of the illumination. The optical components are positioned such that the angle-tuned passbands of the spectral filters better match the emission maxima of the LEDs. Increasing the spatial and temporal coherence of the LEDs also decreases the intensity reaching the image sensor. In addition, the short exposure time required to avoid motion blur when imaging objects in a fast flow makes it necessary for our configuration to utilize a linear sensor gain of 2. The additional noise generated by this gain is sufficiently low that it does not interfere with the image-reconstruction process. Microfluidic channel and flow design A microfluidic channel (Ibidi µ-Slide I) with an internal height of 0.8 mm is placed on top of the image sensor, secured using a three-dimensional (3D)-printed holder, and connected to a peristaltic pump (Instech p625). The size of the active area of the image sensor is slightly smaller than the width of the channel (4.6 mm vs. 5 mm), and the channel is positioned so that the sensor measures the center of the liquid flow. We calculated the flow profile inside the channel (see Fig. 6 ) by solving the Navier–Stokes equation for noncompressible liquids assuming a nonslip boundary condition. The results show that the image sensor measures ~98% of the total volume passing through the microfluidic channel. The flow profile is a two-dimensional paraboloid, with the maximum flow speed located at the center of the microfluidic channel and reaching approximately 1.66 times the mean velocity of the liquid (see Fig. 6 ). To acquire sharp, in-focus images of the objects in the continuously flowing liquid, we operate the image sensor in the global reset release mode and illuminate the sample with flash pulses, where the length of an illumination pulse is chosen such that an object traveling at the maximum speed inside the channel does not shift by more than the width of a single sensor pixel. For a flow rate of 100 mL/h, this corresponds to a pulse length of 120 µs. Pulsed illumination, power, and control circuit Because shortening the illumination time also constrains the available photon budget, we maximize the brightness of our LEDs by operating them at currents ranging from 2.2 to 5 A depending on their color. The currents are set for each LED emitter to create similar brightness levels at our image sensor, ensuring that we adequately light the sample at each color, a requirement for obtaining color images. The green LED spectrum is inherently wider than its red and blue counterparts, and thus the spectral filters reduce its intensity the most. Therefore, we operate the green LED at the experimentally determined maximum possible current of 5 A. The red and blue LEDs require a current of ~2.2 A to match the intensity of the green LED on the image sensor to correct the white balance.
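To make the motion-blur criterion above concrete, the following short Python sketch recomputes the 120-µs pulse length from the stated channel geometry, flow rate, peak-to-mean velocity ratio, and pixel size. It is an illustrative recalculation under the stated assumptions, not the authors' design code.

```python
# Recompute the 120-us pulse from the design rule stated above: an object at
# the channel's peak flow speed may shift by at most one sensor pixel during
# the pulse. All numbers are taken from the text.

FLOW_RATE = 100e-6 / 3600.0      # 100 mL/h expressed in m^3/s
CHANNEL_HEIGHT = 0.8e-3          # m (Ibidi u-Slide I internal height)
CHANNEL_WIDTH = 5.0e-3           # m
PEAK_TO_MEAN = 1.66              # peak/mean velocity ratio of the flow profile
PIXEL_SIZE = 1.4e-6              # m (Basler aca4600-10uc)

mean_velocity = FLOW_RATE / (CHANNEL_HEIGHT * CHANNEL_WIDTH)   # ~6.9 mm/s
peak_velocity = PEAK_TO_MEAN * mean_velocity                   # ~11.5 mm/s

max_pulse = PIXEL_SIZE / peak_velocity
print(f"max pulse length at 100 mL/h: {max_pulse * 1e6:.0f} us")   # ~121 us

# With the pulse fixed at 120 us, raising the throughput (as in Fig. 6)
# increases the peak-speed displacement to several pixels, i.e., visible blur:
displacement = (480.0 / 100.0) * peak_velocity * 120e-6
print(f"displacement at 480 mL/h: {displacement * 1e6:.1f} um")    # ~6.6 um
```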
We designed a circuit to control the necessary components of the device. The circuit is powered by either a 5-V wall-mounted power supply or a cellphone charger battery pack. The circuit fulfills four major roles: (1) providing power to the peristaltic pump; (2) charging the capacitors that power the LEDs; (3) synchronizing the LEDs to the camera and creating stable, short, high-current pulses; and (4) providing an interface for remote control by a laptop, using an Inter-Integrated Circuit (i2c) bus for setting various parameters. The peristaltic pump is powered by a high-efficiency step-up DC-DC converter at 16 V (TPS61086, Texas Instruments), and its speed is controlled by a digital i2c potentiometer (TPL0401B, Texas Instruments). The charge for the high-current pulses is stored in three 0.1-F capacitors, which are charged to 12 V using a capacitor charger controller (LT3750, Linear Technology). The capacitor charging is initiated by the image sensor flash window trigger signal, which is active during the frame capture and whose length can be controlled by the camera software driver. The charger controller switches on and keeps charging the capacitors until the preset voltage level of 12 V is reached. During the short illumination pulses, the voltage on the capacitors decreases only slightly, and they are immediately recharged as each frame capture resets the charge cycle, thereby allowing continuous operation. The LEDs are synchronized, and their constant-current operation is ensured, by a triple-output LED driver controller (LT3797, Linear Technology). The controller uses the same flash window signal from the image sensor to turn on the LEDs for the exposure duration set by the software. The current of each LED is controlled between 0 and 12.5 A using digital i2c potentiometers (TPL0401A, Texas Instruments) and is kept constant for the subsequent pulses by the circuit, thus maintaining the same illumination intensity for each holographic frame. During startup, it takes ~3–4 frames for the circuit to stabilize at a constant light level. To avoid having multiple devices with the same address on the i2c line, we included an address translator (LTC4317, Linear Technology) to interface with the potentiometers controlling the red and blue LEDs. To control the circuit, the laptop communicates with an Arduino microcontroller (TinyDuino from TinyCircuits), which is used as an interface for i2c communications only. During the initial startup, the circuit consumes ~1 A for the first ~8 s until the capacitors are fully charged. During its operation, the circuit consumes, on average, ~370 mA with a pump speed setting of 100 mL/h and a frame rate of 3 frames per second. This yields an average power consumption of less than 2 W. In addition, the image sensor's typical power consumption is ~2.8 W according to the manufacturer's data. Object detection and deep learning-based hologram reconstruction For automatic detection and holographic reconstruction of the target objects found in the continuously flowing water sample (see Fig. 7 ), the static objects found in the raw full FOV image (e.g., dust particles in the flow channel) need to be eliminated first. This is achieved by calculating a time-averaged image of the preceding ~20 images, which contains only the static objects, and subtracting it from the present raw hologram. To ensure appropriate reconstruction quality, the mean of this subtracted image is added back uniformly to the current frame.
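A minimal sketch of this background subtraction, together with the thresholding-based object detection described in the next paragraphs, is given below. It assumes NumPy, SciPy, and scikit-image; the filter width and the size cut-offs (sigma, min_area, max_major_axis) are placeholder values for illustration, not the settings used on the device.

```python
import numpy as np
from scipy import ndimage
from skimage import measure, morphology

def subtract_background(frame, history):
    """Suppress static objects (e.g., dust on the channel) by subtracting the
    time average of the preceding frames, then add the subtracted mean back
    uniformly so the intensity level needed for reconstruction is preserved."""
    background = np.mean(history, axis=0)   # e.g., the preceding ~20 frames
    return frame - background + background.mean()

def detect_objects(hologram, sigma=2.0, min_area=16, max_major_axis=600.0):
    """Threshold-based detection of holographic signatures: Gaussian filter,
    hard threshold at mean + 1.5*std, removal of few-pixel contours,
    morphological closing, and filtering of the remaining contours by
    morphology (here, the major axis). Returns the center coordinates used
    to segment the per-object holograms."""
    smoothed = ndimage.gaussian_filter(hologram, sigma)
    binary = smoothed > smoothed.mean() + 1.5 * smoothed.std()
    binary = morphology.remove_small_objects(binary, min_size=min_area)
    binary = morphology.binary_closing(binary, morphology.disk(5))
    centers = []
    for region in measure.regionprops(measure.label(binary)):
        if region.major_axis_length <= max_major_axis:
            centers.append(region.centroid)   # (row, col) of one candidate
    return centers
```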
The background subtraction yields a full FOV image in which only the holograms of the objects newly introduced by the flow are present. These objects are automatically detected and segmented from the full FOV for individual processing (see Supplementary Figure S2 ). The full FOV background-subtracted hologram is first Gaussian-filtered and converted into a binary image by hard-thresholding at a level derived from its statistics (mean + 1.5 × standard deviation), which isolates the peaks of the holographic signatures created by the objects included in the FOV. Binary contours with an area of only a few pixels are removed to reduce misdetection events due to sensor noise. A closing operation is performed on the generated binary image to create a continuous patch for each object. The resulting binary contours represent the shapes and locations of the objects appearing in the FOV, and their morphological information is used to filter each contour by certain desired criteria (e.g., major axis). The center coordinate of each filtered contour is used to segment its corresponding hologram. We emphasize that our approach can not only extract all objects in the FOV but can also prioritize the segmentation of objects of interest for a specific goal. In this way, we can better utilize the computational resources of the laptop and maintain real-time processing for denser samples. After segmentation, the Bayer-patterned holograms are separated into three mono-color (i.e., red, green, and blue) holograms corresponding to the illumination wavelengths. To fully utilize the spatial resolution of the optical system, the orientation of the Bayer-patterned green pixels is rotated by 45° to regularize their sampling grid 40 . Concurrently, the red and blue mono-color holograms are upsampled by a factor of two, and a 45° rotation is applied to these upsampled holograms. These processes are jointly called “Resampling” in Fig. 7 . Holographic autofocusing using the Tamura coefficient of the complex gradient 41 , 42 is performed for each segmented object using only a single mono-color hologram to accurately estimate the distance of the respective object from the image sensor plane. At this point, each object has been localized in 3D within the flow (per FOV). The coordinates of each detected object are then combined with the calculated flow profile to predict the object's location in the next frame. If an object is found at the predicted coordinates, it is flagged for removal from the total count and processing workflow to avoid reconstructing and counting the same object multiple times. At this point, the image preprocessing step (shown in cyan in Fig. 7 ) is complete. Fig. 7: The algorithm used for object segmentation and deep learning-based hologram reconstruction in our field-portable imaging flow cytometer is illustrated. The phase-recovered intensity and phase images in red, green, and blue channels are fused to generate a final phase-contrast image per object (shown within the dashed black frame on the right) Full size image The next step is the high-resolution color reconstruction (shown in pink in Fig. 7 ). We maximize the resolution of the reconstruction by further upsampling the holograms by a factor of four. Each color channel is then propagated to the obtained reconstruction distance by an angular spectrum-based wave-propagation algorithm 26 and thus brought into focus.
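For reference, a minimal NumPy sketch of the two numerical workhorses of this pipeline, angular spectrum propagation and the Tamura-coefficient autofocusing mentioned above, is shown below. It is a textbook single-wavelength implementation assuming square pixels and plane-wave illumination, not the authors' optimized GPU code; the focus metric follows the cited Tamura-coefficient definition, sqrt(std/mean), applied to the magnitude of the complex gradient.

```python
import numpy as np

def angular_spectrum_propagate(field, wavelength, dx, z):
    """Propagate a complex field by a distance z (all lengths in meters)
    using the angular spectrum method: multiply the field's 2D spectrum by
    the free-space transfer function and transform back. Evanescent
    components (spatial frequencies beyond 1/wavelength) are zeroed."""
    ny, nx = field.shape
    fx = np.fft.fftfreq(nx, d=dx)
    fy = np.fft.fftfreq(ny, d=dx)
    FX, FY = np.meshgrid(fx, fy)
    arg = 1.0 / wavelength**2 - FX**2 - FY**2     # squared axial spatial frequency
    kz = 2.0 * np.pi * np.sqrt(np.maximum(arg, 0.0))
    transfer = np.where(arg > 0, np.exp(1j * kz * z), 0.0)
    return np.fft.ifft2(np.fft.fft2(field) * transfer)

def tamura_of_gradient(field):
    """Focus criterion: Tamura coefficient sqrt(std/mean) of the magnitude
    of the complex gradient; it peaks near the in-focus plane."""
    gy, gx = np.gradient(field)
    g = np.sqrt(np.abs(gx) ** 2 + np.abs(gy) ** 2)
    return np.sqrt(g.std() / g.mean())

def autofocus(hologram, wavelength, dx, z_candidates):
    """Return the candidate object-sensor distance maximizing the criterion."""
    scores = [tamura_of_gradient(angular_spectrum_propagate(hologram, wavelength, dx, z))
              for z in z_candidates]
    return z_candidates[int(np.argmax(scores))]
```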
The slight incidence angle difference between the red, green, and blue emitters is corrected by modifying the propagation kernel accordingly 43 . To evaluate the resolution of the imaging flow cytometer system for the objects located inside our microfluidic channel, we replaced the flow channel with a 1951 Air Force test chart (see Supplementary Figure S3 ). Due to the partially coherent nature of our illumination, the resolution depends on the object–sensor distance; thus, we measured it by placing the test chart at various heights above the sensor. The width of the smallest resolved line varied between 1.55 µm and 1.95 µm depending on the height of the object, with 1.55 µm corresponding to the smallest resolvable feature for most flowing objects imaged by the imaging flow cytometer during its regular operation. These raw reconstructions, however, are contaminated by self-interference and twin-image noise, which are characteristic of in-line digital holographic imaging systems due to the loss of the phase information of the hologram at the sensor plane. To achieve accurate image reconstruction without these artifacts, a deep learning-based digital holographic phase-recovery method 38 , 44 was employed using a convolutional neural network (see Fig. 7 and Supplementary Figure S4 ) pretrained with various phase-recovered reconstructions of water-borne micro-objects captured with our imaging flow cytometer. This method enables automated and accurate acquisition of the spectral morphology of an object without sacrificing the high-throughput operation of the holographic imaging cytometer; this would otherwise be very challenging, as other existing phase-recovery methods require static repetitive measurements 43 , 45 , 46 , 47 , 48 and/or time-consuming iterative calculations 43 , 45 , 46 , 47 , 48 , 49 , 50 , which would not work for flowing objects. For the visualization of transparent objects, such as plankton, we computed the color phase-contrast image based on the complex-valued reconstructions of the red, green, and blue channels, which assists in accurately resolving the fine features and internal structures of various water-borne microorganisms with high color contrast (see, e.g., Figs. 2 , 3 , and 7 ). The phase-contrast image was synthesized by (1) estimating the background field from the mean amplitude and phase of the refocused complex field, (2) calculating the object field by subtracting the background field from the refocused field, (3) shifting the phase of the background field by π /2, (4) adding the phase-shifted background field to the object field, and (5) taking the magnitude of the recalculated total field (a minimal numerical sketch of these steps is given at the end of the Methods). Graphical user interface We developed a GUI to operate the device. Through this GUI, all relevant measurement parameters can be specified, such as the liquid flow speed, the driving currents, the incidence angles for the red, green, and blue LEDs, the flash pulse duration, and the camera sensor gain. The GUI displays a real-time, full FOV image reconstructed at the center of the channel, allowing visual inspection during flow with and without background subtraction, and shows the total number of detected objects in the current frame. The GUI is also capable of visualizing up to 12 segmented, autofocused, and reconstructed objects in real time. The user can specify whether to digitally save any combination of the raw, background-subtracted holograms or reconstructed images.
The GUI can also be run in demo mode to analyze previously captured image datasets without the presence of the imaging flow cytometer. Sample preparation and analysis We followed the sampling protocol recommended by CDPH (USA) to obtain our ocean samples. We used a plankton net with a diameter of 25 cm and a mesh size of 20 µm and performed vertical tows with a total length of 15 m (5 × 3 m) from the end of the pier at each sampling location where a pier was present (Malibu, Santa Monica, Venice, Manhattan, and Redondo; California, USA). There was no pier at Point Dume; thus, we performed a horizontal tow from the shoreline. The plankton net condensed the micro- and nanoplankton found in the ocean into a sample volume of ~250 mL, corresponding in our case to a condensation ratio of ~3000×. We extracted 1 mL of the condensed sample, rediluted it with 50 mL of filtered ocean water, and imaged its contents using our imaging flow cytometer. The remaining samples were sent to CDPH for subsequent analysis (used for comparison purposes). During our field tests, we used the same plankton net but performed only one vertical tow from a depth of 1.5 m at each measurement. A 1-mL aliquot of the obtained sample was rediluted with 20 mL of filtered ocean water. To conserve the battery power of the controlling laptop, ~12 mL of this sample was imaged on-site. The imaging flow cytometer automatically detected all plankton, saved their reconstructed images, and provided the user with real-time feedback on the total plankton count. Specific counting of Pseudo-nitzschia was performed manually by scanning through the dataset of saved images and visually identifying the organism.
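As a concrete illustration of the five-step phase-contrast synthesis described above, here is a minimal per-channel sketch in NumPy. It reflects our reading of those five steps, with the background field taken as the scalar mean amplitude and mean phase of the refocused field; it is not the authors' released code.

```python
import numpy as np

def phase_contrast(refocused):
    """Synthesize a phase-contrast image from one refocused, phase-recovered
    complex color channel, following the five steps listed in the Methods."""
    # (1) Background field from the mean amplitude and mean phase.
    background = np.abs(refocused).mean() * np.exp(1j * np.angle(refocused).mean())
    # (2) Object field: refocused field minus the background field.
    obj = refocused - background
    # (3) Shift the phase of the background field by pi/2.
    background_shifted = background * np.exp(1j * np.pi / 2)
    # (4) Add the phase-shifted background field to the object field and
    # (5) take the magnitude of the recalculated total field.
    return np.abs(obj + background_shifted)

# Applying phase_contrast() to the red, green, and blue channels and stacking
# the three results yields the color phase-contrast images of Figs. 2 and 3.
```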
In the past 10 years, harmful algal blooms—sudden increases in the population of algae, typically in coastal regions and freshwater systems—have become a more serious problem for marine life throughout the U.S. The blooms are made up of phytoplankton, which naturally produce biotoxins, and those toxins can affect not only fish and plant life in the water, but also mammals, birds and humans who live near those areas. According to the National Oceanic and Atmospheric Administration, the events have become more common and are occurring in more regions around the world than ever before. The ability to forecast harmful algal blooms and their locations, size and severity could help scientists prevent their dangerous effects. But it has been difficult to predict when and where the blooms will occur. Now, UCLA researchers have developed an inexpensive and portable device that can analyze water samples immediately, which would provide marine biologists with real-time insight about the possibility that algal blooms could occur in the area they're testing. That, in turn, would allow officials who manage coastal areas to make better, faster decisions about, for example, closing beaches and shellfish beds before algal blooms cause serious damage. UCLA researchers created a new flow cytometer—which detects and measures the physical and chemical characteristics of tiny objects within a sample—based on holographic imaging and artificial intelligence. It can analyze the composition of various plankton species within a matter of seconds, much faster than the current standard method, which involves collecting water samples manually and running them through several steps. The research, which was published online by Light: Science & Applications and will appear in the journal's print edition, was led by Aydogan Ozcan, the UCLA Chancellor's Professor of Electrical and Computer Engineering and associate director of the California NanoSystems Institute at UCLA. The growing threat from blooms is being caused in part by higher water temperatures due to climate change, and in part by high levels of nutrients (mainly phosphorus, nitrogen and carbon) from fertilizers used for lawns and farmland. The toxic compounds produced by the blooms can deplete oxygen from the water and can block sunlight from reaching fish and aquatic plants, causing them to die or migrate elsewhere. In addition, fish and nearby wildlife can ingest the toxins; in some rare cases, humans who are close enough to the blooms can inhale them, which can affect the nervous system, brain and liver, and eventually lead to death. Scientists have generally tried to understand algal blooms through manual sampling and traditional light microscopy, which they use to create high-resolution maps showing the phytoplankton composition of an area over extended periods of time. To build those maps, technicians have to collect water samples by hand using plankton nets and then bring them to a lab for analysis. The process is challenging in part because the concentration and composition of algae in a given body of water can change quickly—even in the time it takes to analyze samples. The device created by Ozcan and his colleagues speeds up the entire process and, because it does not use lenses or other optical components, it performs the testing at a much lower cost. It images algae samples—and is capable of scanning a wide range of other substances, too—using holography and artificial intelligence.
Commercially available imaging flow cytometers used in environmental microbiology can cost from $40,000 to $100,000, which has limited their widespread use. The UCLA cytometer is compact and lightweight, and it can be assembled from parts costing less than $2,500. One challenge the researchers had to overcome was ensuring the device would have enough light to create well-lit, high-speed images without motion blur. "It's similar to taking a picture of a Formula 1 race car," Ozcan said. "The cameraman needs a very short exposure to avoid motion blur. In our case, that means using a very bright, pulsed light source with a pulse length about one-thousandth the duration of the blink of an eye." To test the device, the scientists measured ocean samples along the Los Angeles coastline and obtained images of their phytoplankton composition. They also measured the concentration of a potentially toxic alga called Pseudo-nitzschia at six public beaches in the region. The UCLA researchers' measurements were comparable to those in a recent study by the California Department of Public Health's Marine Biotoxin Monitoring Program. Zoltán Göröcs, a UCLA postdoctoral scholar and the study's first author, said the researchers are in the process of discussing their new device with marine biologists to determine where it would be most useful. "Our device can be adapted to look at larger organisms with a higher throughput or look at smaller ones with a better image quality while sacrificing some of the throughput," he said.
10.1038/s41377-018-0067-0
Medicine
New study points to novel drug target for treating COVID-19
GuanQun Liu et al, ISG15-dependent activation of the sensor MDA5 is antagonized by the SARS-CoV-2 papain-like protease to evade host innate immunity, Nature Microbiology (2021). DOI: 10.1038/s41564-021-00884-1 Journal information: Nature Microbiology
http://dx.doi.org/10.1038/s41564-021-00884-1
https://medicalxpress.com/news/2021-03-drug-covid-.html
Abstract Activation of the RIG-I-like receptors, retinoic-acid inducible gene I (RIG-I) and melanoma differentiation-associated protein 5 (MDA5), establishes an antiviral state by upregulating interferon (IFN)-stimulated genes (ISGs). Among these is ISG15, the mechanistic roles of which in innate immunity still remain enigmatic. In the present study, we report that ISG15 conjugation is essential for antiviral IFN responses mediated by the viral RNA sensor MDA5. ISGylation of the caspase activation and recruitment domains of MDA5 promotes its oligomerization and thereby triggers activation of innate immunity against a range of viruses, including coronaviruses, flaviviruses and picornaviruses. The ISG15-dependent activation of MDA5 is antagonized through direct de-ISGylation mediated by the papain-like protease of SARS-CoV-2, a recently emerged coronavirus that has caused the COVID-19 pandemic. Our work demonstrates a crucial role for ISG15 in the MDA5-mediated antiviral response, and also identifies a key immune evasion mechanism of SARS-CoV-2, which may be targeted for the development of new antivirals and vaccines to combat COVID-19. Main Viral perturbation of host immune homoeostasis is monitored by the innate immune system, which relies on receptors that sense danger- or pathogen-associated molecular patterns (PAMPs) 1 , 2 , 3 . The RIG-I-like receptors (RLRs) RIG-I and MDA5 are pivotal for virus detection by surveying the cytoplasm for viral or host-derived immunostimulatory RNAs 4 . Binding of RNA to the C-terminal domain (CTD) and helicase of RIG-I and MDA5 leads to their signalling-primed conformation that allows for the recruitment of several enzymes 5 . These enzymes modify RLRs at multiple domains and sites, and post-translational modifications (PTMs) are particularly well studied for the caspase activation and recruitment domains (CARDs), the signalling modules. Protein phosphatase 1 (PP1)α/γ dephosphorylates the RIG-I and MDA5 CARDs 6 . In the case of RIG-I, dephosphorylation promotes Lys63-linked polyubiquitination of the CARDs by TRIM25 (tripartite motif containing 25) and other E3 ligases 7 , 8 , which stabilizes the oligomeric form of RIG-I, thereby enabling mitochondrial antiviral-signalling protein (MAVS) binding. Compared with those of RIG-I, the individual steps of MDA5 activation and critical PTMs involved are less well understood. RLR activation induces the production of type I and III IFNs which, in turn, propagate antiviral signalling by upregulating ISGs 9 , 10 . Among those is ISG15, a ubiquitin-like protein that can be covalently conjugated to lysine residues of target proteins, a PTM process termed ISGylation 11 . Although ISG15 conjugation has been widely recognized to act antivirally 12 , mechanisms of host protein ISGylation that could explain the broad antiviral restriction activity of ISG15 are currently unknown. The causative agent of the ongoing COVID-19 pandemic, severe acute respiratory syndrome coronavirus 2 (SCoV2), belongs to the Coronaviridae family that contains several other human pathogens. Coronaviruses have an exceptional capability to suppress IFN-mediated antiviral responses, and low IFN production in SCoV2-infected patients correlated with severe disease 13 . Among the coronaviral IFN antagonists is the papain-like protease (PLpro) which has deubiquitinating and de-ISGylating activities 14 , 15 . In the present study, we identify an essential role for ISGylation in MDA5 activation. 
We further show that SCoV2 PLpro interacts with MDA5 and antagonizes ISG15-dependent MDA5 activation via active de-ISGylation, unveiling that SCoV2 has already evolved to escape immune surveillance by MDA5. Results MDA5, but not RIG-I, signalling requires ISG15 To identify PTMs of the MDA5 CARDs that may regulate MDA5 activation, we subjected affinity-purified MDA5–2CARD fused to glutathione- S -transferase (GST–MDA5–2CARD), or GST alone, to liquid chromatography coupled with tandem mass spectrometry (LC–MS/MS), and found that, specifically, GST–MDA5–2CARD co-purified with ISG15, which appeared as two bands that migrated more slowly (by ~15 and 30 kDa) than unmodified GST–MDA5–2CARD (Extended Data Fig. 1a ). Immunoblotting (IB) confirmed that GST–MDA5–2CARD is modified by ISG15 (Extended Data Fig. 1b ). We next determined the relevance of ISG15 for MDA5-induced signalling. Whereas FLAG–MDA5 expression in wild-type (WT) mouse embryonic fibroblasts (MEFs) induced IFN-β messenger RNA and protein as well as Ccl5 transcripts in a dose-dependent manner, FLAG–MDA5 expression in Isg15 −/− MEFs led to ablated antiviral gene and protein expression (Fig. 1a and Extended Data Fig. 1c ). Similarly, antiviral gene induction by FLAG–MDA5 was strongly diminished in ISG15 knockout (KO) HeLa (human) cells compared with WT control cells (Fig. 1b and Extended Data Fig. 1d ), ruling out a species-specific effect. In contrast, FLAG–RIG-I induced comparable amounts of secreted IFN-β protein as well as Ifnb1 and Ccl5 transcripts in Isg15 −/− and WT MEFs (Fig. 1a and Extended Data Fig. 1c ). IFNB1 and CCL5 transcripts as well as IFN-β protein production by FLAG–RIG-I were similar or slightly enhanced in ISG15 KO HeLa cells compared with WT cells (Fig. 1b and Extended Data Fig. 1d ), consistent with previous reports that ISGylation negatively impacts RIG-I signalling 16 , 17 . Fig. 1: ISGylation is required for MDA5, but not RIG-I, signalling. a , b , ELISA of IFN-β from supernatants of MEFs (WT or Isg15 −/− ) ( a ) and HeLa cells (WT or ISG15 KO) ( b ) transiently transfected with increasing amounts of FLAG-tagged MDA5 or RIG-I for 40 h. Whole-cell lysates (WCLs) were probed by IB with anti-ISG15, anti-FLAG and anti-actin (loading control). c , ELISA of IFN-β from supernatants of WT or Isg15 −/− MEFs that were mock stimulated or transfected with EMCV RNA (0.1 or 0.4 µg ml −1 ), HMW-poly(I:C) (0.5 µg ml −1 ) or RABV Le (1 pmol ml −1 ), or infected with SeV (10 haemagglutination units (HAU) ml −1 ) for 24 h. d , RT–qPCR analysis of Ifnb1 , Ccl5 and Tnf mRNA in WT and Isg15 −/− MEFs stimulated as in c . e , IRF3 phosphorylation in the WCLs of NHLFs that were transfected with the indicated siRNAs for 30 h and then mock stimulated or transfected with EMCV RNA (0.4 µg ml −1 ) or RABV Le (1 pmol ml −1 ) for 6 h, assessed by IB with anti-pSer396-IRF3 and anti-IRF3. f , ELISA of IFN-β from supernatants of NHLFs that were transfected with the indicated siRNAs for 30 h and then mock stimulated or transfected with EMCV RNA (0.4 µg ml −1 ) or RABV Le (1 pmol ml −1 ), or infected with SeV (10 HAU ml −1 ) for 16 h. g , ELISA of IFN-β from the supernatants of PBMCs that were transduced for 40 h with the indicated shRNA lentiviral particles and then infected with mutEMCV (MOI = 10) or SeV (200 HAU ml −1 ) for 8 h. h , RT–qPCR analysis of IFNA2 and IL-6 mRNA in PBMCs that were transduced and infected as in g . Data represent at least two independent experiments with similar results (mean ± s.d. 
of n = 3 biological replicates in a – d and f , and mean of n = 2 biological replicates in g and h ). * P < 0.05, ** P < 0.01, *** P < 0.001 (two-tailed, unpaired Student’s t -test). ND, not detected; NS, not significant. Source data Full size image We next tested the effect of ISG15 gene deletion on the activation of endogenous MDA5 and RIG-I by their respective ligands. IFN-β production as well as IFNB1 , CCL5 and TNF gene expression induced by transfection of encephalomyocarditis virus (EMCV) RNA or high-molecular-weight (HMW)-poly(I:C), both of which are predominantly sensed by MDA5, were profoundly attenuated in Isg15 −/− MEF, ISG15 KO HeLa and ISG15 KO HAP-1 cells compared with their respective control cells (Fig. 1c,d and Extended Data Fig. 1e–g ). Importantly, the ablation of antiviral gene induction by EMCV RNA or HMW-poly(I:C) in ISG15 KO cells was not due to abrogated MDA5 gene expression; on the contrary, MDA5 mRNA expression was enhanced in ISG15 KO cells compared with WT cells (Extended Data Fig. 1f,g ). In contrast to stimulation with MDA5 agonists, stimulation of Isg15 −/− MEFs and ISG15 KO HeLa cells by rabies virus leader RNA (RABV Le ) transfection or Sendai virus (SeV) infection, which are RIG-I stimuli, led to IFN-β production and antiviral gene expression comparable to WT cells (Fig. 1c,d and Extended Data Fig. 1e ). To rule out potential clonal effects that could be associated with ISG15 gene-deleted cells, we performed transient gene-silencing experiments in primary normal human lung fibroblasts (NHLFs). ISG15 silencing, similar to MDA5 knockdown, led to an almost-complete loss of phosphorylation of IFN-regulatory factor 3 (IRF3)—a hallmark of RLR-signal activation—following stimulation with EMCV RNA but not RABV Le (Fig. 1e ). In agreement with this, ISG15 knockdown greatly diminished IFN-β production as well as antiviral transcript expression in NHLFs transfected with EMCV RNA, but not in cells stimulated with RABV Le or SeV (Fig. 1f and Extended Data Fig. 1h ). Small hairpin (sh)RNA-mediated silencing of ISG15 or MDA5 in primary human peripheral blood mononuclear cells (PBMCs) also substantially reduced antiviral protein and transcript expression after infection with a recombinant mutant EMCV (mutEMCV) deficient in MDA5 antagonism 18 , 19 , compared with infected PBMCs transduced with non-targeting control shRNA (Fig. 1g,h and Extended Data Fig. 1i ). By contrast, ISG15 or MDA5 depletion did not affect cytokine responses in PBMCs after SeV infection (Fig. 1g,h and Extended Data Fig. 1i ). These results show that ISG15 is essential for immune signalling by MDA5, but not RIG-I. The MDA5 CARDs are ISGylated at Lys23 and Lys43 To corroborate our MS analysis that identified MDA5–2CARD ISGylation, we first tested whether endogenous MDA5 is also modified by ISG15. Endogenous MDA5 was robustly ISGylated in cells transfected with HMW-poly(I:C), or infected with dengue (DENV) or Zika (ZIKV) viruses that are sensed by MDA5 (ref. 5 ) (Fig. 2a ). Notably, endogenous MDA5 was also ISGylated in uninfected cells, although at very low levels (Extended Data Fig. 2a ), which is consistent with previous findings that many host proteins are also ISGylated at low levels in normal (uninfected) conditions 20 . In cells treated with anti-IFNAR2 to block IFNAR-signalling-mediated ISG upregulation, ISG15 or MDA5 silencing led to a comparable reduction of IFNB1 gene expression after mutEMCV infection (Extended Data Fig. 
2b ), indicating that ISG15-dependent MDA5 signalling occurs even in the absence of IFNAR signalling. Fig. 2: MDA5 activation requires ISGylation at Lys23 and Lys43. a , Endogenous MDA5 ISGylation in NHLFs that were mock treated, transfected with HMW-poly(I:C) (0.1 µg ml −1 ) for 40 h (left), or infected with DENV or ZIKV (MOI = 1 for each) for 48 h (right), determined by IP with anti-MDA5 (or an IgG isotype control) and IB with anti-ISG15. b , ISGylation of FLAG-tagged MDA5–2CARD and MDA5Δ2CARD in transiently transfected HEK293T cells that also expressed V5–ISG15, HA–Ube1L and FLAG–UbcH8, assessed by FLAG PD and IB with anti-V5 at 40 h post-transfection. c , Endogenous MDA5 ISGylation in ISG15 KO HeLa cells stably reconstituted with vector, WT ISG15 or ISG15-AA and co-transfected with HA–Ube1L and FLAG–UbcH8 after IFN-β treatment (1,000 U ml −1 ) for 24 h, determined by IP with anti-MDA5 and IB with anti-ISG15. d , ISGylation of GST–MDA5–2CARD WT and Lys23Arg/Lys43Arg in HEK293T cells that were co-transfected with V5–ISG15, HA–Ube1L and FLAG–UbcH8 for 24 h, determined by GST PD and IB with anti-V5. e , ISGylation of FLAG–MDA5 WT and Lys23Arg/Lys43Arg in HEK293T cells that were co-transfected with V5–ISG15, HA–Ube1L and FLAG–UbcH8, determined by FLAG PD and IB with anti-V5. f , IFN-β luciferase reporter activity in HEK293T cells that were transfected for 40 h with vector, FLAG–MDA5 WT or mutants. Luciferase values are presented as fold induction relative to the values for vector-transfected cells, set to 1. WCLs were probed by IB with anti-FLAG and anti-actin. g , RT–qPCR analysis of IFNB1 and CCL5 mRNA in HEK293T cells that were transiently transfected with either vector, or increasing amounts of FLAG–MDA5 WT or Lys23Arg/Lys43Arg. h , STAT1 phosphorylation and ISG (IFIT1 and -2) protein abundance in the WCLs of HEK293T cells that were transiently transfected with vector or FLAG–MDA5 WT or Lys23Arg/Lys43Arg, determined by IB. i , RT–qPCR analysis of the indicated antiviral genes in MDA5 KO SVGAs that were transiently reconstituted with either empty vector or FLAG-tagged MDA5 WT, Lys23Arg/Lys43Arg or Ser88Glu. Data represent at least two independent experiments with similar results (mean ± s.d. of n = 3 biological replicates in f , g and i ). * P < 0.05, ** P < 0.01, *** P < 0.001 (two-tailed, unpaired Student’s t -test). Source data Full size image Biochemical analysis confirmed that the MDA5–2CARD, but not MDA5Δ2CARD (containing helicase and CTD), is the primary site of MDA5 ISGylation showing two prominent bands for ISGylated 2CARD (Fig. 2b ). Reconstitution of ISG15 KO HeLa cells with either WT ISG15, or an unconjugatable ISG15 mutant in which the two glycines needed for conjugation were replaced with alanine (ISG15-AA) 21 , demonstrated covalent ISG15 conjugation (Fig. 2c ). Mutation of individual lysine residues in GST–MDA5–2CARD to arginine revealed that single-site mutation of Lys23 and Lys43 noticeably reduced ISGylation (Extended Data Fig. 2c ), whereas their combined mutation (Lys23Arg/Lys43Arg) almost abolished ISGylation (Fig. 2d ). Full-length FLAG–MDA5 Lys23Arg/Lys43Arg also showed markedly diminished ISGylation (Fig. 2e and Extended Data Fig. 2d ); the residual ISGylation seen in FLAG–MDA5 Lys23Arg/Lys43Arg is probably due to additional minor sites in the 2CARD and/or Δ2CARD. Of note, the Lys23Arg/Lys43Arg mutation had no effect on MDA5–2CARD SUMOylation 5 (Extended Data Fig. 2e ). 
Furthermore, whereas RIG-I–2CARD was robustly ubiquitinated (which represents covalent Lys63-linked ubiquitination 7 ), neither MDA5–2CARD WT nor the Lys23Arg/Lys43Arg mutant showed detectable levels of ubiquitination (Extended Data Fig. 2f ). Collectively, these results indicate that the MDA5 CARDs undergo ISGylation at two major sites, Lys23 and Lys43. CARD ISGylation is required for MDA5 activation When their signal-transducing abilities were compared, the MDA5–2CARD Lys23Arg and Lys43Arg single-site mutants showed partially reduced IFN-β promoter activation compared with WT MDA5–2CARD, whereas the Lys23Arg/Lys43Arg mutant had profoundly reduced signalling activity, almost as low as that of the signalling-defective mutants Ser88Glu and Ser88Asp 6 (Extended Data Fig. 2g ). In contrast, a mutant in which Lys68, which is the lysine residue that is most proximal to Lys43 and Lys23, was substituted with arginine (Lys68Arg) showed comparable ISG15 conjugation and signalling competency to WT 2CARD (Extended Data Fig. 2c,g ). MDA5–2CARD Lys23Arg/Lys43Arg, in contrast to WT MDA5–2CARD, also failed to induce IRF3 dimerization (Extended Data Fig. 2h ). FLAG–MDA5 Lys23Arg, Lys43Arg or Lys23Arg/Lys43Arg also showed reduced, or almost abolished, IFN-β promoter-activating abilities, compared with FLAG–MDA5 WT (Fig. 2f ). MDA5 Lys23Arg/Lys43Arg showed a profound signalling defect even when expressed at high amounts, whereas WT MDA5 induced antiviral transcripts in a dose-dependent manner (Fig. 2g ). In agreement, STAT1 phosphorylation, a hallmark of IFNAR signalling, as well as ISG protein expression were highly induced by MDA5 WT, but not Lys23Arg/Lys43Arg (Fig. 2h ). Complementation of MDA5 -gene-edited human astrocytes (SVGAs) with MDA5 Lys23Arg/Lys43Arg or Ser88Glu led to greatly diminished IFNB1 , CCL5 and ISG transcripts compared with cells expressing WT MDA5 (Fig. 2i and Extended Data Fig. 2i ). These results demonstrate that ISGylation at Lys23 and Lys43 is essential for MDA5-mediated cytokine responses. Dephosphorylation by PP1 regulates MDA5 ISGylation Similar to RIG-I, MDA5 is phosphorylated within the CARDs in uninfected cells, which prevents autoactivation; dephosphorylation of RIG-I and MDA5 by PP1α/γ is crucial for unleashing RLRs from their signalling-repressed states 6 , 22 , 23 , 24 . Dephosphorylation of RIG-I allows Lys63-linked ubiquitination of the CARDs, which promotes RIG-I multimerization and signalling 5 . The details of how CARD dephosphorylation (at Ser88) triggers MDA5 activation have remained elusive, and therefore we tested whether dephosphorylation regulates MDA5 ISGylation. Silencing of PP1α/γ strongly diminished MDA5–2CARD ISGylation (Extended Data Fig. 3a ). Furthermore, the phosphomimetic Ser88Glu and Ser88Asp mutants had reduced ISGylation, whereas the phospho-null Ser88Ala mutant showed stronger ISGylation than WT MDA5–2CARD (Extended Data Fig. 3b ). Conversely, MDA5 WT and Lys23Arg/Lys43Arg had comparable Ser88 phosphorylation (Extended Data Fig. 3c ). Together, these data suggest that MDA5 dephosphorylation at Ser88 precedes CARD ISGylation. We next made use of the measles virus V protein (MeV-V), which prevents MDA5 Ser88 dephosphorylation by antagonizing PP1α/γ 25 . MeV-V expression enhanced the Ser88 phosphorylation (indicative of ablated dephosphorylation) of GST–MDA5–2CARD or FLAG–MDA5 in a dose-dependent manner, as previously shown 25 . Enhanced phosphorylation by MeV-V correlated with a decline in ISGylation (Extended Data Fig. 3d,e ).
In contrast to WT MeV-V, a mutant MeV-V with abolished PP1 binding and MDA5-dephosphorylation antagonism (MeV-VΔtail) 25 exhibited little effect on MDA5–2CARD ISGylation (Extended Data Fig. 3f ), strengthening the conclusion that the inhibition of ISGylation by MeV-V is primarily due to PP1 inhibition and not to other antagonistic effects. The V proteins from Nipah and Hendra viruses (NiV-V and HeV-V) also enhanced MDA5 Ser88 phosphorylation and, correspondingly, dampened MDA5 ISGylation (Extended Data Fig. 3g,h ), suggesting that several paramyxoviral V proteins inhibit MDA5 ISGylation through manipulation of Ser88 phosphorylation, although the precise mechanisms for individual V proteins remain to be determined. Taken together, these data suggest that MDA5 CARD ISGylation is dependent on dephosphorylation at Ser88. ISGylation promotes higher-order MDA5 assemblies RLR activation requires RNA binding, RLR oligomerization and their translocation from the cytosol to mitochondria for an interaction with MAVS 5 . To elucidate the mechanism by which ISGylation impacts MDA5 activity, we first examined whether ISGylation affects RNA binding. Endogenous MDA5 purified from WT or Isg15 −/− MEFs interacted equally well with HMW-poly(I:C) in vitro (Extended Data Fig. 4a ). MDA5 WT and Lys23Arg/Lys43Arg showed comparable binding to HMW-poly(I:C), indicating that ISGylation does not affect the RNA-binding ability of MDA5 (Extended Data Fig. 4b ). When we monitored the translocation of MDA5 from the cytosol to mitochondria following EMCV RNA stimulation, we found that ISG15 silencing, but not si.C transfection, abolished MDA5 translocation (Fig. 3a ). In contrast, RIG-I translocation after RABV Le transfection was efficient in both ISG15 -depleted and si.C-transfected cells (Fig. 3b ). These data indicated that ISGylation regulates MDA5 translocation, or a step upstream of it. As the cytosol-to-mitochondria translocation of MDA5 requires an interaction with 14-3-3η 26 , we compared 14-3-3η binding of WT and mutant MDA5. The ability of MDA5 Lys23Arg/Lys43Arg to bind 14-3-3η was similar to that of WT MDA5 or the Lys68Arg mutant (Extended Data Fig. 4c ). However, whereas EMCV RNA stimulation effectively induced MDA5 oligomerization in WT MEFs, the formation of MDA5 oligomers was ablated in ISG15 -deficient MEFs (Fig. 3c ). ISG15 knockdown in 293T cells also abolished the oligomerization of FLAG–MDA5–2CARD (Fig. 3d ). Conversely, co-expression of the ISGylation machinery components, Ube1L and UbcH8, strongly enhanced MDA5–2CARD oligomerization in si.C-transfected cells, but not in ISG15 -depleted cells (Fig. 3d ), indicating that ISGylation is required for MDA5 oligomer formation. In support of this concept, FLAG–MDA5 Lys23Arg/Lys43Arg showed almost abolished oligomerization, whereas WT MDA5 oligomerized efficiently (Fig. 3e ). We also compared the effect of the Lys23Arg/Lys43Arg mutation with that of oligomerization-disruptive mutations that localize either to the interface between MDA5 monomers and impede RNA-binding-mediated MDA5 filamentation (Ile841Arg/Glu842Arg and Asp848Ala/Phe849Ala) 27 , 28 , or to the CARDs (Gly74Ala/Trp75Ala) and disrupt 2CARD oligomerization 27 . Unlike WT MDA5, the Lys23Arg/Lys43Arg mutant, similar to MDA5 Gly74Ala/Trp75Ala, showed deficient oligomerization and, consistent with this, abolished IFN-β promoter-activating ability (Fig. 3f,g ).
Introduction of Lys23Arg/Lys43Arg into the Ile841Arg/Glu842Arg or Asp848Ala/Phe849Ala background, either of which by itself decreased MDA5 oligomerization and signalling, also abolished MDA5 oligomer formation and IFN-β induction (Fig. 3f,g ). As LGP2 facilitates MDA5 nucleation on double-stranded RNA and thereby MDA5 oligomerization 29 , 30 , we compared LGP2 binding of MDA5 WT and Lys23Arg/Lys43Arg. MDA5 Lys23Arg/Lys43Arg interacted with LGP2 as efficiently as WT MDA5 (Extended Data Fig. 4d ), strengthening the proposal that CARD ISGylation promotes MDA5 oligomerization independently of RNA-binding-mediated filamentation. Collectively, these results establish that ISGylation facilitates CARD oligomerization and higher-order MDA5 assemblies. Fig. 3: CARD ISGylation promotes the formation of higher-order MDA5 assemblies. a , b , Cytosol–mitochondria fractionation of WCLs from NHLFs that were transfected for 30 h with non-targeting control siRNA (si.C) or ISG15-specific siRNA (si.ISG15), and then mock treated or transfected with EMCV RNA (0.4 µg ml −1 ) ( a ) or RABV Le (1 pmol ml −1 ) ( b ) for 16 h. IB was performed with anti-MDA5 ( a ), anti-RIG-I ( b ), anti-ISG15 and anti-actin ( a and b ). α-Tubulin and MAVS served as purity markers for the cytosolic and mitochondrial fraction, respectively ( a and b ). c , Endogenous MDA5 oligomerization in WT and Isg15 −/− MEFs that were transfected with EMCV RNA (0.5 µg ml −1 ) for 16 h, and assessed by SDD–AGE and IB with anti-MDA5. WCLs were further analysed by SDS–PAGE and probed by IB with anti-MDA5 and anti-actin. d , Oligomerization of FLAG–MDA5–2CARD in HEK293T cells that were transfected with the indicated siRNAs, either with or without HA–Ube1L and FLAG–UbcH8 for 48 h, determined by NativePAGE and IB with anti-FLAG. WCLs were further analysed by SDS–PAGE and probed by IB with anti-FLAG, anti-HA, anti-ISG15 and anti-actin. e , Oligomerization of FLAG–MDA5 WT and Lys23Arg/Lys43Arg in transiently transfected MDA5 KO HEK293 cells, assessed by SDD–AGE and IB with anti-FLAG. WCLs were further analysed by SDS–PAGE and IB with anti-FLAG and anti-actin. f , Oligomerization of FLAG-tagged MDA5 WT and mutants in transiently transfected MDA5 KO HEK293 cells, assessed by NativePAGE and IB with anti-MDA5. WCLs were further analysed by SDS–PAGE and probed by IB with anti-MDA5 and anti-actin. g , IFN-β luciferase reporter activity in MDA5 KO HEK293 cells that were transfected for 24 h with either empty vector, or FLAG-tagged MDA5 WT or mutants. Luciferase activity is presented as fold induction relative to the values for vector-transfected cells, set to 1. Data represent at least two independent experiments with similar results (mean ± s.d. of n = 3 biological replicates in g ). *** P < 0.001 (two-tailed, unpaired Student’s t -test). Source data Full size image ISGylation-dependent MDA5 signalling restricts virus replication We next assessed whether ISGylation of MDA5 is required for its ability to restrict virus replication. FLAG–MDA5 WT, but not Lys23Arg/Lys43Arg, potently (by ~2log) inhibited EMCV replication (Fig. 4a ). Similarly, MDA5 KO HEK293 cells reconstituted with WT MDA5, but not cells complemented with the Lys23Arg/Lys43Arg mutant, effectively restricted DENV replication (Fig. 4b ). We also reconstituted MDA5 KO astrocyte SVGAs, a physiologically relevant cell type for ZIKV infection, with either vector, or MDA5 WT or Lys23Arg/Lys43Arg, and then assessed ZIKV replication over a 40-h time course. 
ZIKV replication was attenuated by ~100-fold in cells reconstituted with WT MDA5 compared with vector-expressing cells. In contrast, cells complemented with MDA5 Lys23Arg/Lys43Arg did not restrict ZIKV, similar to cells expressing MDA5 Ser88Glu (Fig. 4c ). WT MDA5, but not Lys23Arg/Lys43Arg, also restricted SCoV2 replication, although to a lesser extent than that seen for the other viruses tested (Fig. 4d ). Fig. 4: ISGylation is required for viral restriction by MDA5. a , EMCV titres in the supernatant of HEK293T cells that were transfected for 40 h with either vector, or FLAG–MDA5 WT or Lys23Arg/Lys43Arg, and then infected with EMCV (MOI = 0.001) for 24 h, determined by TCID 50 assay. b , Percentage of DENV-infected MDA5 KO HEK293 cells that were transfected for 24 h with either vector, or FLAG–MDA5 WT or Lys23Arg/Lys43Arg, and then mock treated or infected with DENV (MOI = 5) for 48 h, assessed by FACS using anti-flavivirus E (4G2). SSC, side scatter. c , ZIKV titres in the supernatant of MDA5 KO SVGAs that were transfected for 30 h with vector or FLAG-tagged MDA5 WT, Lys23Arg/Lys43Arg or Ser88Glu and then infected with ZIKV (MOI = 0.1) for the indicated times, determined by plaque assay. p.f.u., plaque-forming units; h.p.i., hours post-infection. d , SCoV2 titres in the supernatant of HEK293T–hACE2 cells that were transfected for 24 h with either empty vector, or FLAG–MDA5 WT or Lys23Arg/Lys43Arg, and then infected with SCoV2 (MOI = 0.5) for 24 h, determined by plaque assay. e , Schematic of the experimental approach to ‘decouple’ the role of ISG15 in MDA5-mediated IFN induction from its role in dampening IFNAR signalling. Sup., supernatant. f , NHLF ‘donor’ cells were transfected for 40 h with the indicated siRNAs and then infected with mutEMCV (MOI = 0.1) for 16 h. Cell supernatants were UV inactivated and transferred on to Vero ‘recipient’ cells. After 24 h, cells were infected with ZIKV (MOI = 0.002–2) for 72 h, and ZIKV-positive cells determined by immunostaining with anti-flavivirus E (4G2) and TrueBlue peroxidase substrate. g , RIG-I KO HEK293 ‘donor’ cells were transfected with si.C or si.ISG15 together with either vector, or FLAG–MDA5 WT or Lys23Arg/Lys43Arg, for 24 h, followed by EMCV infection (MOI = 0.001) for 16 h. UV-inactivated cell supernatants were transferred on to Vero ‘recipient’ cells for 24 h, followed by infection with EMCV (MOI = 0.001–0.1) for 40 h. EMCV-induced cytopathic effects were visualized by Coomassie Blue staining. Data represent at least two independent experiments with similar results (mean ± s.d. of n = 3 biological replicates in a – d ). * P < 0.05, ** P < 0.01 (two-tailed, unpaired Student’s t -test). Source data Full size image We next determined the effect of ISG15 silencing on MDA5’s ability to inhibit virus replication. Although MDA5 Lys23Arg/Lys43Arg failed to suppress EMCV replication regardless of ISG15 silencing, WT MDA5 effectively restricted EMCV replication in si.C-transfected cells and, unexpectedly, also in ISG15 -depleted cells (Extended Data Fig. 5a ). In an exploration of the underlying mechanism of these unexpected results, we found that the EMCV-infected cells that expressed WT MDA5 had markedly enhanced levels of ISG protein expression when ISG15 was silenced compared with infected cells transfected with WT MDA5 and si.C (Extended Data Fig. 5b ). 
Similarly, elevated ISG transcript and protein expression were observed in ISG15 -deficient cells that were transfected with EMCV RNA or infected with mutEMCV, despite abrogation of IFN-β induction (Extended Data Fig. 5c,d ). In contrast, MDA5 knockdown abrogated both IFN-β and ISG protein expression, as expected (Extended Data Fig. 5d ). We noticed that the protein abundance of USP18, a deubiquitinating enzyme that negatively regulates IFNAR signalling 31 , was greatly diminished in ISG15 -depleted cells following EMCV infection compared with infected cells that were transfected with si.C or MDA5-specific siRNA (Extended Data Fig. 5b,d ), which is consistent with the reported role of ISG15 in preventing USP18 degradation 32 . Together, these data suggest that in experimental settings of ISG15 -gene targeting (that is, silencing or KO), the antiviral effect of MDA5 ISGylation is masked by aberrant ISG upregulation due to the ablation of ISG15’s inhibitory effect on IFNAR signalling. We next employed a virus protection assay that experimentally decouples MDA5 signalling in virus-infected cells from downstream IFNAR signalling in the same cells (Fig. 4e ). Supernatants from mutEMCV-infected ‘donor’ cells that were either si.C transfected, or depleted of either ISG15 or MDA5 , were ultraviolet (UV) inactivated and then transferred on to uninfected ‘recipient’ cells. ‘Primed’ recipient cells were then infected with ZIKV to directly monitor the antiviral effect of MDA5-mediated IFN production by donor cells. Whereas the supernatants from si.C-transfected ‘donor’ cells potently inhibited ZIKV replication, the supernatants from ISG15 or MDA5 knockdown cells minimally restricted ZIKV infection (Fig. 4f ). Similarly, the culture supernatants from EMCV-infected donor cells transfected with WT MDA5 together with si.C led to greater protection of recipient cells from viral challenge than that from cells expressing WT MDA5 and depleted of ISG15 (Fig. 4g ). Collectively, these data demonstrate that ISGylation is important for MDA5-mediated restriction of a range of RNA viruses. SCoV2 PLpro targets MDA5 for de-ISGylation Coronaviruses such as SARS-CoV (SCoV), MERS–CoV and the recently emerged SCoV2 encode a PLpro that mediates viral polyprotein cleavage 33 . In addition, PLpro has deubiquitinating and de-ISGylating activities. SCoV2 PLpro was recently shown to modulate antiviral responses primarily via its de-ISGylase activity 15 . As MDA5 is known to be a major sensor for detecting coronaviruses 34 , 35 , and because our data showed that ISGylation is required for MDA5-mediated virus restriction, we examined whether SCoV2 PLpro enzymatically removes MDA5 ISGylation to antagonize innate immunity. SCoV2 PLpro WT, but not its catalytically inactive mutant (PLpro Cys111Ala) 15 , abolished the ISGylation of GST–MDA5–2CARD and FLAG–MDA5 (Fig. 5a,b and Extended Data Fig. 6a ). The PLpro Asn156Glu and Arg166Ser/Glu167Arg mutants, which are marginally and severely impaired in ISG15 binding at the ‘site 1’ interface, respectively 14 , 36 , accordingly had only a slight or no effect on ISGylation. In contrast, PLpro Phe69Ala, in which the ‘site 2’ interface that preferentially determines binding to ubiquitin, but not ISG15, is disrupted 14 , 36 , diminished MDA5 ISGylation as potently as WT PLpro (Fig. 5a,b and Extended Data Fig. 6a ). SCoV2 PLpro did not, however, suppress RIG-I–2CARD ubiquitination (Extended Data Fig. 6b ). Fig. 5: SCoV2 PLpro binds to and de-ISGylates MDA5–2CARD. 
a , Ribbon representation of the crystal structure of the SCoV2 PLpro:ISG15 complex (Protein Data Bank, accession no. 6YVA ). Key residues that mediate site 1 interaction (Asn156 and Arg166/Glu167) or site 2 interaction (Phe69) in PLpro, as well as its catalytically active site (Cys111), are indicated. b , ISGylation of GST–MDA5–2CARD in HEK293T cells that were co-transfected for 20 h with vector or V5-tagged SCoV2 PLpro WT or mutants, along with FLAG–ISG15, HA–Ube1L and FLAG–UbcH8, determined by GST PD and IB with anti-FLAG and anti-GST. WCLs were probed by IB with anti-V5, anti-HA, anti-FLAG and anti-actin. c , Binding of HA-tagged MDA5 or RIG-I to V5-tagged SCoV2–PLpro or FLAG-tagged MeV-V (positive control) in transiently transfected HEK293T cells, determined by HA PD and IB with anti-V5 or anti-FLAG, and anti-HA. WCLs were probed by IB with anti-V5 and anti-FLAG. d , Oligomerization of FLAG–MDA5–2CARD in HEK293T cells that were co-transfected with vector, or V5-tagged SCoV2 PLpro WT or Cys111Ala for 24 h, assessed by NativePAGE and IB with anti-FLAG. WCLs were further analysed by SDS–PAGE and probed by IB with anti-FLAG, anti-V5 and anti-actin. e , ISGylation of GST–MDA5–2CARD in HEK293T cells that also expressed FLAG–ISG15, HA–Ube1L and FLAG–UbcH8, and were co-transfected for 40 h with vector or the indicated V5-tagged coronaviral PLpro proteins, determined by GST PD and IB with anti-FLAG, anti-V5 and anti-GST. Data represent at least two independent experiments with similar results. Source data Full size image We found that PLpro interacted specifically with MDA5, but not RIG-I, as did MeV-V, which binds MDA5 and served as a control 37 (Fig. 5c ). Low amounts of PLpro inhibited signalling by MDA5, but not RIG-I, whereas higher amounts of PLpro suppressed antiviral signalling by both RLRs (Extended Data Fig. 6c ). This strengthens the conclusion that MDA5 is a direct target of PLpro. De-ISGylation of IRF3 probably accounts for the inhibitory effect that higher doses of PLpro have on RLR signalling 15 , 38 . When we examined the effect of PLpro on MDA5–2CARD oligomerization, we found that PLpro WT, but not Cys111Ala, efficiently blocked oligomer formation (Fig. 5d ), indicating that SCoV2 PLpro inhibits the ISGylation-dependent MDA5 oligomer formation via its enzymatic activity. The PLpro enzymes of the related β-coronaviruses, SCoV, MERS–CoV and murine hepatitis virus (MHV), as well as of the α-coronavirus HCoV-NL63 (NL63), also bound to and efficiently reduced MDA5–2CARD ISGylation (Fig. 5e ), suggesting that MDA5 antagonism by PLpro may be widely conserved among coronaviruses. SCoV2 PLpro antagonizes ISG15-dependent MDA5 signalling We next determined the relevance of ISG15-dependent MDA5 signalling for antiviral cytokine induction elicited by SCoV2. As SCoV2 infection is known to minimally induce type I IFNs due to effective viral antagonisms 39 , we isolated total RNA from SCoV2-infected cells and then re-transfected it into cells to stimulate innate immune signalling. SCoV2 RNA, but not RNA from mock-treated cells, robustly induced IFN transcripts; however, this induction was markedly diminished when ISG15 or MDA5 was silenced (Fig. 6a ). RIG-I knockdown did not adversely affect the antiviral gene expression elicited by SCoV2 RNA, indicating that SCoV2 RNA–PAMPs are primarily sensed by the ISG15–MDA5 axis (Fig. 6a ). Fig. 6: SCoV2 PLpro inhibits ISG15-mediated MDA5 signalling via its de-ISGylase activity. 
a , RT–qPCR analysis of IFNB1 , IFNL1 , ISG15 , MDA5 and RIG-I transcripts in NHLFs that were transfected with the indicated siRNAs for 40 h and then transfected with mock RNA or SCoV2 RNA (0.4 µg ml −1 ) for 24 h. b , Binding of SCoV2 Nsp3 to endogenous MDA5 in A549–hACE2 cells that were infected with SCoV2 (MOI = 0.5) for 24 h, determined by IP with anti-MDA5 (or an IgG isotype control) followed by IB with anti-Nsp3 and anti-MDA5. WCLs were probed by IB with anti-Nsp3 and anti-actin. c , Endogenous MDA5 ISGylation in A549–hACE2 cells that were mock infected or infected with SCoV2 (MOI = 0.5) for 40 h in the presence of PLpro inhibitor (GRL-0617; 50 µM) or vehicle control (dimethylsulfoxide), determined by IP with anti-MDA5 (or an IgG isotype control), followed by IB with anti-ISG15 and anti-MDA5. Protein abundance of IFIT1, RSAD2, ISG15 and actin in the WCLs was probed by IB. Efficient virus replication was verified by IB with anti-Nsp3 and anti-Spike (S). d , RT–qPCR analysis of IFNB1 , CCL5 and IFIT1 transcripts, and EMCV genomic RNA (gRNA), in HeLa cells that were transiently transfected for 24 h with vector, or V5–SCoV2 PLpro WT or mutants, and then infected with mutEMCV (MOI = 0.5) for 12 h. e , EMCV titres in the supernatant of RIG-I KO HEK293 cells that were transiently transfected for 24 h with vector or FLAG–MDA5, along with V5-tagged SCoV2 PLpro WT, Cys111Ala or Arg166Ser/Glu167Arg, and then infected with EMCV (MOI = 0.001) for 16 h, determined by plaque assay. f , Protein abundance of the indicated ISGs in the WCLs from the experiment in e , determined by IB with the indicated antibodies. Data represent at least two independent experiments with similar results (mean ± s.d. of n = 3 biological replicates in a , d and e ). * P < 0.05, ** P < 0.01, *** P < 0.001 (two-tailed, unpaired Student’s t -test). Source data Full size image We found that SCoV2 non-structural protein 3 (Nsp3), within which PLpro lies, readily interacted with endogenous MDA5 during authentic SCoV2 infection (Fig. 6b ). Endogenous MDA5 ISGylation was undetectable in SCoV2-infected cells, although the virus triggered ISG15 expression; however, in infected cells treated with a specific PLpro inhibitor 15 , MDA5 ISGylation and downstream ISG induction were strongly enhanced (Fig. 6c ), supporting the proposal that PLpro effectively suppresses MDA5 ISGylation and signalling during live SCoV2 infection. We next examined the effect of WT and mutant PLpro on the activation of endogenous MDA5 during mutEMCV infection. Consistent with their effect on MDA5 ISGylation (Fig. 5b and Extended Data Fig. 6a ), SCoV2 PLpro WT and Phe69Ala prevented antiviral transcript induction, whereas PLpro Arg166Ser/Glu167Arg, similar to the Cys111Ala mutant, did not affect antiviral gene expression (Fig. 6d ). In agreement with this, mutEMCV replication was enhanced in cells expressing PLpro WT or Phe69Ala, but not in cells expressing PLpro Cys111Ala or Arg166Ser/Glu167Arg (Fig. 6d ). Likewise, WT PLpro, but not the Arg166Ser/Glu167Arg or Cys111Ala mutant, blocked EMCV restriction by FLAG–MDA5 (Fig. 6e ); the effect on virus replication correlated with induced ISG proteins (Fig. 6f ). Collectively, this establishes SCoV2 PLpro as an IFN antagonist that actively de-ISGylates MDA5. Discussion ISG15 conjugation is known to confer antiviral activity against a multitude of viruses; however, only a few genuine substrates have been identified 12 . 
On the other hand, ISG15 in its unconjugated form acts provirally by fortifying USP18-mediated IFNAR-signal inhibition 31 , 32 , 40 . The present study identifies a key role for ISGylation in MDA5-mediated IFN induction. Our work also stresses the importance of experimental design in which decoupling the role of ISG15 in MDA5 activation from that in dampening IFNAR signalling is essential to reveal ISG15’s potent antiviral activity. In an infected organism, it is probably the sum of multiple ISGylation events (affecting both host and viral proteins) that determines the outcome of infection and pathogenesis, which may be context dependent 12 . Our findings indicate that ISGylation of MDA5 acts analogously to the Lys63-linked ubiquitination of RIG-I 5 : both PTMs (1) are regulated by PP1-induced dephosphorylation and (2) promote CARD oligomerization and RLR higher-order assemblies. However, whereas ubiquitin is abundant in both uninfected and infected cells, ISG15 expression is strongly increased by IFN stimulation. Nevertheless, even at basal levels, ISG15 is conjugated to many host proteins 20 , including MDA5 as our work showed, which may be sufficient for initial MDA5 activation. During viral infections that are sensed by multiple PRRs, MDA5 ISGylation may be a ‘priming’ mechanism whereby ISG15 upregulation by an immediate innate sensor (for example, RIG-I) 41 primes MDA5 to enter a ‘kick-start’ mode. As ISG15 negatively regulates RIG-I 16 , 17 , ISGylation may trigger ‘sensor switching’ where MDA5 activation is promoted when ISG15 levels increase, while RIG-I activity is being dampened. We identified that SCoV2 PLpro antagonizes MDA5 ISGylation via its enzymatic activity after binding to the sensor; this strategy is probably conserved among coronaviruses, which warrants further investigation. Cryo-electron microscopy analyses revealed that coronaviral Nsp3 is part of a pore complex that spans endoplasmic reticulum-derived double-membrane vesicles and exports newly synthesized viral RNA 42 . Thus, MDA5 may position itself in close proximity to the site of viral RNA export to facilitate PAMP detection; however, the PLpro domain of Nsp3 (which is on the cytoplasmic side) blocks MDA5 signalling through direct de-ISGylation. Some viruses may also inhibit MDA5 ISGylation through dysregulation of MDA5 phosphorylation, as shown for MeV-V. In summary, our study uncovers a prominent role for ISGylation in activating MDA5-mediated immunity as well as its inhibition by SCoV2, unveiling a potential molecular target for the design of therapeutics against COVID-19. Methods Cell culture HEK293T (human embryonic kidney cells), Vero (African green monkey kidney epithelial cells), BHK-21 (baby hamster kidney) and Aedes albopictus clone C6/36 cells were purchased from American Type Culture Collection (ATCC). Human PBMCs were isolated from unidentified healthy donor peripheral blood (HemaCare) and purified by Lymphoprep density gradient centrifugation (STEMCELL Technologies). The WT and isogenic Isg15 −/− MEFs were kindly provided by D. Lenschow (Washington University in St. Louis). SVGAs (human fetal glial astrocytes) were kindly provided by E. Cahir-McFarland (Biogen) 43 . SVGA MDA5 KO cells were generated by CRISPR (clustered regularly interspaced short palindromic repeats)–Cas9-mediated genome editing using a guide RNA (5′-AACTGCCTGCATGTTCCCGG-3′) targeting the exon 1 of IFIH1/MDA5 . The MDA5 KO and RIG-I KO HEK293 cells were a gift from J. Rehwinkel (University of Oxford) 44 . 
The WT and isogenic ISG15 KO HeLa cells were kindly provided by E. Schiebel (University of Heidelberg) 45 . ISG15 KO HeLa cells stably expressing FLAG–ISG15 WT or FLAG–ISG15-AA (GG156/157AA) were generated by lentiviral transduction followed by selection with puromycin (2 μg ml −1 ). HAP-1 WT and isogenic ISG15 KO cells were purchased from Horizon Discovery. HEK293T–hACE2 and Vero-E6–hACE2 cells were a gift from J. U. Jung (Cleveland Clinic Lerner Research Center). A549–hACE2 cells were kindly provided by B. R. tenOever (Icahn School of Medicine at Mount Sinai) 39 . HEK293T, HEK293, HeLa, MEF, NHLF, Vero, A549–hACE2 and BHK-21 cells were maintained in Dulbecco’s modified Eagle’s medium (DMEM, Gibco) supplemented with 10% (v:v) fetal bovine serum (FBS, Gibco), 2 mM GlutaMAX (Gibco), 1 mM sodium pyruvate (Gibco) and 100 U ml −1 of penicillin–streptomycin (Gibco). HEK293T–hACE2 and Vero-E6–hACE2 were maintained in DMEM containing 200 μg ml −1 of hygromycin B and 2 μg ml −1 of puromycin, respectively. SVGA and HAP-1 cells were cultured in Eagle’s minimum essential medium (MEM, Gibco) and Iscove’s modified Dulbecco’s medium (Gibco), respectively, supplemented with 10% FBS and 100 U ml −1 of penicillin–streptomycin. PBMCs were maintained in RPMI-1640 (Gibco) supplemented with 10% FBS and 100 U ml −1 of penicillin–streptomycin. C6/36 cells were cultured in MEM with 10% FBS and 100 U ml −1 of penicillin–streptomycin. Except for C6/36 cells that were maintained at 28 °C, all cell cultures were maintained at 37 °C in a humidified 5% CO 2 atmosphere. Commercially obtained cell lines were authenticated by vendors and were not validated further in our laboratory. Cell lines that were obtained and validated by other groups were not further authenticated. KO cell lines were validated by confirming the absence of target protein expression. All cell lines used in the present study have been regularly tested for potential Mycoplasma contamination by PCR or using the MycoAlert Kit (Lonza). Viruses DENV (serotype 2, strain 16681) and ZIKV (strain BRA/Fortaleza/2015) were propagated in C6/36 and Vero cells, respectively 46 , 47 . EMCV (EMC strain) was purchased from ATCC and propagated in HEK293T cells 6 ; mutEMCV (EMCV-Zn C19A/C22A ), which carries two point mutations in the zinc domain of the L protein 18 , was kindly provided by F. J. M. van Kuppeveld (Utrecht University) and was propagated in BHK-21 cells. Sendai virus (strain Cantell) was purchased from Charles River Laboratories. SCoV2 (strain 2019-nCoV/USA_WA1/2020) was kindly provided by J. U. Jung (Cleveland Clinic Lerner Research Center) and was propagated in Vero-E6–hACE2 cells. All work relating to SCoV2 live virus and SCoV2 RNA was conducted in the BSL-3 facility of the Cleveland Clinic Florida Research and Innovation Center in accordance with institutional biosafety committee regulations. DNA constructs and transfection The human MDA5 open reading frame (ORF) containing an N-terminal FLAG tag was amplified from the pEF-Bos–FLAG–MDA5 (ref. 6 ) and subcloned into pcDNA3.1/Myc-His B between XhoI and AgeI. Site-directed mutagenesis on pcDNA3.1–FLAG–MDA5 (Lys23Arg/Lys43Arg, Ser88Ala, Ser88Glu, Ile841Arg/Glu842Arg, Asp848Ala/Phe849Ala and Gly74Ala/Trp75Ala) was introduced by overlapping PCR. HA–MDA5 was cloned into pcDNA3.1 + between KpnI and XhoI. GST–MDA5–2CARD (in a pEBG vector) and its Ser88Ala, Ser88Asp and Ser88Glu derivatives have been described previously 6 . 
The single (Lys23Arg, Lys43Arg, Lys68Arg, Lys128Arg, Lys137Arg, Lys169Arg, Lys174Arg and Lys235Arg) and double (Lys23Arg/Lys43Arg) mutations of MDA5–2CARD (amino acids 1–295) were introduced by site-directed mutagenesis into GST–MDA5–2CARD. In addition, MDA5–2CARD and its Lys23Arg/Lys43Arg mutant were subcloned into pcDNA3.1 − harbouring an N-terminal 3× FLAG tag between NheI and NotI. The pCR3–FLAG–MV-V (strain Schwarz) was a gift from K.-K. Conzelmann (LMU, Munich); pEF-Bos–FLAG–NiV-V, pCAGGS–HA–MeV-V and pCAGGS–HA–MeV-VΔtail have been described previously 25 . PIV2-V, PIV5-V, MenV-V, MPRV-V and HeV-V constructs were kindly provided by S. Goodbourn (University of London), and the respective ORF was subcloned into pEF-Bos containing an N-terminal FLAG tag between NotI and SalI. The pEF-Bos–FLAG–MuV-V was a gift from C. Horvath (Addgene, catalogue no. 44908 (ref. 48 )); pCAGGS–V5–hISG15 was a gift from A. García-Sastre (Icahn School of Medicine at Mount Sinai) 49 ; pCAGGS–HA–Ube1L and pFLAG–CMV2–UbcH8 were kindly provided by J. U. Jung (University of Southern California); pcDNA3.1–Myc–UBE2I was cloned by ligating a synthetic UBE2I ORF into pcDNA3.1/Myc-His B between HindIII and NotI. FLAG–SUMO1 was obtained from F. Full (University of Erlangen-Nuremberg). V5-tagged SARS-CoV–PLpro, MERS–CoV–PLpro, NL63–PLP2 and MHV–PLP2 in pcDNA3.1–V5/His B were kindly provided by S. C. Baker (Loyola University of Chicago). The SARS-CoV-2 PLpro ORF (amino acids 746–1,060) was amplified from pDONR207 SARS-CoV-2 NSP3 (a gift from F. Roth; Addgene catalogue no. 141257 (ref. 50 )) and subcloned into pcDNA3.1–V5. The Cys111Ala, Phe69Ala, Asn156Glu and Arg166Ser/Glu167Arg mutations of SARS-CoV-2 PLpro were introduced by site-directed mutagenesis. The correct sequence of all constructs was confirmed by DNA sequencing. Transient DNA transfections were performed using linear poly(ethylenimine) (1 mg ml −1 of solution in 10 mM Tris-HCl, pH 6.8; Polysciences), Lipofectamine 2000 (Invitrogen), Lipofectamine LTX with Plus Reagent (Invitrogen), Trans IT-HeLaMONSTER (Mirus) or Trans IT-X2 Transfection Reagent (Mirus) as per the manufacturers’ instructions. Antibodies and other reagents Primary antibodies used in the present study include anti-GST (1:5,000; Sigma-Aldrich), anti-V5 (1:5,000, R960-25; Novex), anti-FLAG (M2, 1:2,000; Sigma-Aldrich), anti-HA (1:3,000, HA-7; Sigma-Aldrich), anti-Phospho-IRF3 (Ser396) (1:1,000, D6O1M; CST), anti-IRF3 (1:1,000, D6I4C; CST), anti-Phospho-STAT1 (Tyr701) (1:1,000, 58D6; CST), anti-IFIT1 (1:1,000, PA3-848; Invitrogen and 1:1,000, D2X9Z; CST), anti-IFIT2 (1:1,000; Proteintech), anti-ISG15 (1:500, F-9; Santa Cruz), anti-MAVS (1:1,000; CST), anti-RIG-I (1:2,000, Alme-1; Adipogen), anti-MDA5 (1:1,000, D74E4; CST), anti-Phospho-MDA5 (Ser88) 6 , anti-PP1α (1:2,000; Bethyl laboratories), anti-PP1γ (1:2,000; Bethyl laboratories), anti-USP18 (1:1000, D4E7; CST), anti-RSAD2 (1:1,000, D5T2X; CST), anti-PKR (1:1,000, D7F7; CST), anti-MX1 (1:1,000, D3W7I; CST), anti-IFITM3 (1:1,000, D8E8G; CST), anti-ISG20 (1:1,000, PA5-30073; Invitrogen), anti-ubiquitin (1:1,000, P4D1; Santa Cruz), anti-NS3 (ref. 47 ), anti-Nsp3 (1:1,000, GTX135589; GeneTex), anti-Spike (1:1,000, 1A9; GeneTex), anti-α-tubulin (1:1,000; CST) and anti-β-actin (1:1,000, C4). Monoclonal anti-MDA5 antibody was purified from mouse hybridoma cell lines kindly provided by J. Rehwinkel (University of Oxford) 44 . Monoclonal anti-IFNAR2-neutralizing antibody (1:250, MMHAR-2) was obtained from PBL Assay Science. 
Monoclonal anti-flavivirus E antibody (4G2) was purified from the mouse hybridoma cell line D1-4G2-4-15 (ATCC). Anti-mouse and anti-rabbit horseradish peroxidase-conjugated secondary antibodies (1:2,000) were purchased from CST. Anti-FLAG M2 magnetic beads (MilliporeSigma), anti-FLAG agarose beads (MilliporeSigma), Glutathione Sepharose 4B resin (GE Healthcare) and Protein G Dynabeads (Invitrogen) were used for protein IP. Protease and phosphatase inhibitors were obtained from MilliporeSigma. HMW-poly(I:C)/LyoVec and HMW-poly(I:C) biotin were obtained from Invivogen. Human IFN-β was purchased from PBL Biomedical Laboratories. GRL-0617 was purchased from AdooQ Bioscience. Mass spectrometry Large-scale GST pulldown (PD) and MS analysis were performed as previously described 7 , 22 . Briefly, HEK293T cells were transfected with GST or GST–MDA5–2CARD, and the cells were collected at 48 h post-transfection and lysed in Nonidet P-40 (NP-40) buffer (50 mM 4-(2-hydroxyethyl)-1-piperazine-ethanesulfonic acid (Hepes), pH 7.4, 150 mM NaCl, 1% (v:v) NP-40, 1 mM ethylenediaminetetraacetic acid (EDTA) and 1× protease inhibitor cocktail (MilliporeSigma)). Cell lysates were cleared by centrifugation at 16,000 g and 4 °C for 20 min, and cleared supernatants were subjected to GST PD using Glutathione Sepharose 4B beads (GE Healthcare) at 4 °C for 4 h. The beads were extensively washed with NP-40 buffer and proteins eluted by heating in 1× Laemmli sodium dodecylsulfate (SDS) sample buffer at 95 °C for 5 min. Eluted proteins were resolved on a NuPAGE 4–12% Bis–Tris gel (Invitrogen) and then stained at room temperature using the SilverQuest Silver Staining Kit (Invitrogen). The bands that were specifically present in the GST–MDA5–2CARD sample, but not the GST control sample, were excised and analysed by LC–MS/MS (Taplin Mass Spectrometry Facility, Harvard University). Immunoprecipitation and immunoblotting Cells were transfected with FLAG–MDA5, GST–MDA5–2CARD or FLAG–MDA5–2CARD in the absence or presence of ISGylation machinery components (that is, HA–Ube1L, FLAG–UbcH8 and V5–ISG15) as indicated. After 48 h, cells were lysed in NP-40 buffer and cleared by centrifugation at 16,000 g and 4 °C for 20 min. Cell lysates were then subjected to GST or FLAG PD using glutathione magnetic agarose beads (Pierce) and anti-FLAG M2 magnetic beads (MilliporeSigma) at 4 °C for 4 h or 16 h, respectively. The beads were extensively washed with NP-40 buffer and proteins eluted by heating in 1× Laemmli SDS sample buffer at 95 °C for 5 min or by competition with FLAG peptide (MilliporeSigma) at 4 °C for 4 h. For endogenous MDA5 IP, NHLFs were stimulated with HMW-poly(I:C)/LyoVec (0.1 µg ml −1 ) or infected with DENV or ZIKV at the indicated multiplicity of infection (MOI) for 40 h. Cell lysates were precleared with Protein G Dynabeads (Invitrogen) at 4 °C for 2 h, and then incubated with Protein G Dynabeads conjugated with the anti-MDA5 antibody or an immunoglobulin (Ig)G1 isotype control (G3A1; CST) at 4 °C for 4 h. The beads were washed four times with RIPA buffer (20 mM Tris-HCl, pH 8.0, 150 mM NaCl, 1% (v:v) NP-40, 1% (w:v) deoxycholic acid and 0.01% (w:v) SDS) and protein eluted in 1× Laemmli SDS sample buffer. 
Protein samples were resolved on Bis–Tris SDS-polyacrylamide gel electrophoresis (PAGE) gels, transferred on to polyvinylidene difluoride (PVDF) membranes (Bio-Rad), and visualized using the SuperSignal West Pico PLUS or Femto chemiluminescence reagents (Thermo Fisher Scientific) on an ImageQuant LAS 4000 Chemiluminescent Image Analyzer (General Electric) as previously described 47 . Enzyme-linked immunosorbent assay (ELISA) Human or mouse IFN-β in the culture supernatants of NHLFs, HeLa and MEFs was determined by ELISA using the VeriKine Human Interferon Beta ELISA Kit or VeriKine Mouse Interferon Beta ELISA Kit (PBL Assay Science) as previously described 6 . Knockdown mediated by siRNA and shRNA Transient knockdown in NHLFs, HeLa, HAP-1, HEK293T and HEK293 cells was performed using non-targeting or gene-specific siGENOME SMARTpool small interfering (si)RNAs (Horizon Discovery). These are the Non-Targeting siRNA Pool no. 2 (D-001206-14), IFIH1 (M-013041-00), DDX58 (M-012511-01), PPP1CA (M-008927-01), PPP1CC (M-006827-00) and ISG15 (D-004235-17 and D-004235-18). Transfection of siRNAs was performed using the Lipofectamine RNAiMAX Transfection Reagent (Invitrogen) as per the manufacturer’s instructions. Scrambled short hairpin (sh)RNA control lentiviral particles and shRNA lentiviral particles targeting ISG15 (TL319471V) or IFIH1 (TL303992V) were purchased from OriGene. Lentiviral transduction of human PBMCs (1 × 10 5 cells; MOI = 8) was performed in the presence of 8 μg ml −1 of polybrene (Santa Cruz). Knockdown efficiency was determined by quantitative real-time PCR (RT–qPCR) or IB as indicated. RT–qPCR Total RNA was purified using the E.Z.N.A. HP Total RNA Kit (Omega Bio-tek) as per the manufacturer’s instructions. One-step RT–qPCR was performed using the SuperScript III Platinum One-Step qRT-PCR Kit (Invitrogen) and predesigned PrimeTime qPCR Probe Assays (Integrated DNA Technologies) on a 7500 Fast Real-Time PCR System (Applied Biosystems). Relative mRNA expression was normalized to the levels of GAPDH and expressed relative to the values for control cells using the ΔΔ C t method. Luciferase reporter assay IFN-β reporter assay was performed as previously described 51 . Briefly, HEK293T or MDA5 KO HEK293 cells were transfected with IFN-β luciferase reporter construct and β-galactosidase (β-gal) expressing pGK-β-gal, along with GST–MDA5–2CARD (WT or mutants) or FLAG–MDA5 (WT or mutants). At the indicated time points after transfection, luciferase and β-gal activities were determined using, respectively, the Luciferase Assay System (Promega) and β-Galactosidase Enzyme Assay System (Promega) on a Synergy HT microplate reader (BioTek). Luciferase activity was normalized to β-gal values, and fold induction was calculated relative to vector-transfected samples, set to 1. Cytosol–mitochondria fractionation assay The cytosol–mitochondria fractionation assay was performed using a Mitochondria/Cytosol Fractionation Kit (Millipore) as previously described 46 , 47 . Briefly, NHLFs were transfected for 30 h with either non-targeting control siRNA or ISG15-specific siRNA and then transfected with EMCV RNA or RABV Le for 16 h. Cells were homogenized in an isotonic buffer using a Dounce homogenizer and the lysates were centrifuged at 600 g to pellet the nuclei and unbroken cells. The supernatant was further centrifuged at 10,000 g and 4 °C for 30 min to separate the cytosolic (supernatant) and mitochondrial (pellet) fractions. 
The protein concentration of both fractions was determined by a bicinchoninic acid assay (Pierce), and equal amounts of proteins were analysed by IB. Anti-α-tubulin and anti-MAVS IB served as markers for the cytosolic and mitochondrial fractions, respectively. In vitro RNA-binding assay WT and Isg15 −/− MEFs were stimulated with IFN-β (1,000 U ml −1 ) for 24 h. Cells were lysed in a buffer containing 50 mM Hepes, pH 7.4, 200 mM NaCl, 1% (v:v) NP-40, 1 mM EDTA and 1× protease inhibitor cocktail (MilliporeSigma). NeutrAvidin agarose beads (Pierce) were conjugated with the biotinylated HMW-poly(I:C) at 4 °C for 4 h. Cell lysates were incubated with the conjugated beads at 4 °C for 16 h. The beads were washed three times with lysis buffer and then boiled at 95 °C in 1× Laemmli SDS sample buffer to elute the proteins. Precipitated proteins were resolved on Bis–Tris SDS–PAGE gels and analysed by IB with anti-MDA5. Equal input MDA5 protein amounts were confirmed by IB with anti-MDA5. NativePAGE NativePAGE for analysing endogenous IRF3 dimerization was performed as previously described 52 . For measuring MDA5 oligomerization, HEK293T or MDA5 KO HEK293 cells were transfected with WT or mutant FLAG–MDA5–2CARD or FLAG–MDA5 as indicated. Cells were lysed in 1× native PAGE sample buffer (Invitrogen) containing 1% (v:v) NP-40 on ice for 30 min, and then lysates were cleared by centrifugation at 16,000 g and 4 °C for 10 min. Cleared lysates were resolved on a 3–12% Bis–Tris NativePAGE gel (Invitrogen) as per the manufacturer’s instructions and analysed by IB with the indicated antibodies. Semi-denaturing detergent agarose gel electrophoresis MDA5 oligomerization in MEFs transfected with EMCV RNA, or in MDA5 KO HEK293 cells reconstituted with WT or mutant FLAG–MDA5, was determined by semi-denaturing detergent agarose gel electrophoresis (SDD–AGE) as previously described with modifications 26 . Briefly, cells were lysed in a buffer containing 50 mM Hepes, pH 7.4, 150 mM NaCl, 0.5% (v:v) NP-40, 10% (v:v) glycerol and 1× protease inhibitor cocktail (MilliporeSigma) at 4 °C for 20 min. Cell lysates were cleared by centrifugation at 16,000 g and 4 °C for 10 min and then incubated on ice for 1 h. Cell lysates were subsequently incubated in 1× SDD–AGE buffer (0.5× Tris/borate/EDTA (TBE), 10% (v:v) glycerol and 2% (w:v) SDS) for 15 min at room temperature and resolved on a vertical 1.5% agarose gel containing 1× TBE and 0.1% (w:v) SDS at 80 V for 90 min at 4 °C. Proteins were transferred on to a PVDF membrane and analysed by IB with the indicated antibodies. Viral RNA purification EMCV RNA was produced as previously described 6 . Briefly, Vero cells were infected with EMCV (MOI = 0.1) for 16 h, and total RNA was isolated using the Direct-zol RNA extraction kit (Zymo Research) as per the manufacturer’s instructions. Mock RNA and SCoV2 RNA were produced by isolating total RNA from uninfected or SCoV2-infected (MOI = 1 for 24 h) Vero–hACE2 cells. RABV Le was generated by in vitro transcription using the MEGAshortscript T7 Transcription Kit (Invitrogen) as previously described 53 . Virus infection and titration All viral infections were performed by inoculating cells with the virus inoculum diluted in MEM or DMEM containing 2% FBS. After 1–2 h, the virus inoculum was removed and replaced with the complete growth medium (MEM or DMEM containing 10% FBS) and cells were further incubated for the indicated times. 
EMCV titration was performed either on Vero cells using the median tissue culture infectious dose (TCID 50 ) methodology as previously described 54 , or on BHK-21 cells by plaque assay. The titres of ZIKV were determined by plaque assay on Vero cells as previously described 47 . Titration of SCoV2 was performed on Vero–hACE2 cells by plaque assay. Flow cytometry To quantify the percentage of DENV-infected cells, reconstituted MDA5 KO HEK293 cells were washed with phosphate-buffered saline (PBS; Gibco) and fixed with 4% (v:v) formaldehyde in PBS at room temperature for 30 min. Cells were subsequently permeabilized with 1× BD Perm/Wash buffer (BD Biosciences) for 15 min and incubated with an anti-flavivirus E antibody (4G2; 1:100 in 1× BD Perm/Wash buffer) at 4 °C for 30 min. Cells were further washed three times with 1× BD Perm/Wash buffer and incubated with a goat anti-mouse Alexa Fluor-488-conjugated secondary antibody (1:500 in 1× BD Perm/Wash buffer; Invitrogen, catalogue no. A10667) at 4 °C for 30 min in the dark. After washing three times with 1× BD Perm/Wash buffer, cells were analysed on a FACSCalibur flow cytometer (BD Biosciences). Data analysis was performed using the FlowJo software. Virus protection assay The culture supernatants from mutant or WT EMCV-infected NHLFs or RIG-I KO HEK293 cells were UV inactivated in a biosafety cabinet under a UV-C lamp (30 W) at a dose of 5,000 μJ cm −2 for 15 min. Complete inactivation of EMCV was confirmed by plaque assay on BHK-21 cells. The inactivated supernatants were then transferred on to fresh Vero cells for 24 h, and the primed Vero cells were subsequently infected with ZIKV (MOI = 0.002–2) for 72 h, or with EMCV (MOI = 0.001–0.1) for 40 h. ZIKV-positive cells were determined by immunostaining with anti-flavivirus E antibody (4G2) and visualized using the KPL TrueBlue peroxidase substrate (SeraCare). EMCV-induced cytopathic effect was visualized by Coomassie Blue staining. Statistical analysis The two-tailed, unpaired Student’s t -test was used to compare differences between two experimental groups in all cases. Significant differences are denoted by * P < 0.05, ** P < 0.01 or *** P < 0.001. Pre-specified effect sizes were not assumed and, in general, three biological replicates ( n ) for each condition were used. Reporting Summary Further information on research design is available in the Nature Research Reporting Summary linked to this article. Data availability The data that support the findings of the present study are available from the corresponding author upon request. Source data are provided with this paper.
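The two quantitative steps in these Methods, relative expression by the ΔΔCt method (fold change = 2^−ΔΔCt, normalized to GAPDH) and two-tailed, unpaired Student's t-tests on n = 3 biological replicates, reduce to a few lines of code. The Python sketch below is a hypothetical illustration of those formulas, not the authors' analysis code; all Ct values are invented placeholders.

```python
# Minimal sketch of the delta-delta-Ct normalization and the two-tailed,
# unpaired Student's t-test described in the Methods. Ct values are
# hypothetical placeholders, not data from the paper.
import numpy as np
from scipy import stats

def ddct_fold_change(ct_target, ct_ref, ct_target_ctrl, ct_ref_ctrl):
    """2^-(ddCt): dCt = Ct(target) - Ct(GAPDH); ddCt = dCt - mean dCt(control)."""
    dct = np.asarray(ct_target) - np.asarray(ct_ref)
    dct_ctrl = np.asarray(ct_target_ctrl) - np.asarray(ct_ref_ctrl)
    return 2.0 ** (-(dct - dct_ctrl.mean()))

# Hypothetical triplicate Ct values for an ISG transcript (stimulated vs mock).
stim = ddct_fold_change([22.1, 22.4, 22.0], [18.0, 18.1, 17.9],
                        [26.5, 26.7, 26.4], [18.2, 18.0, 18.1])
mock = ddct_fold_change([26.6, 26.3, 26.5], [18.1, 18.0, 18.2],
                        [26.5, 26.7, 26.4], [18.2, 18.0, 18.1])

# Two-tailed, unpaired Student's t-test on n = 3 biological replicates,
# the comparison used throughout the figures of this study.
t_stat, p_val = stats.ttest_ind(stim, mock, equal_var=True)
print(f"fold induction {stim.mean():.1f}x vs {mock.mean():.1f}x, P = {p_val:.3g}")
```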
Researchers from Cleveland Clinic's Florida Research and Innovation Center (FRIC) have identified a potential new target for anti-COVID-19 therapies. Their findings were published in Nature Microbiology. Led by FRIC scientific director Michaela Gack, Ph.D., the team discovered that a coronavirus enzyme called PLpro (papain-like protease) blocks the body's immune response to the infection. More research is necessary, but the findings suggest that therapeutics that inhibit the enzyme may help treat COVID-19. "SARS-CoV-2—the virus that causes COVID-19—has evolved quickly against many of the body's well-known defense mechanisms," Gack said. "Our findings, however, offer insights into a never-before characterized mechanism of immune activation and how PLpro disrupts this response, enabling SARS-CoV-2 to freely replicate and wreak havoc throughout the host. We discovered that inhibiting PLpro may help rescue the early immune response that is key to limiting viral replication and spread." One of the body's frontline immune defenses is a class of receptor proteins, including one called MDA5, that identify invaders by foreign patterns in their genetic material. When the receptors recognize a foreign pattern, they become activated and kick-start the immune system into antiviral mode. This is done in part by increasing the downstream expression of proteins encoded by interferon-stimulated genes (ISGs). In this study, Gack and her team identified a novel mechanism that leads to MDA5 activation during virus infection. They found that ISG15 must physically bind to specific regions in the MDA5 receptor—a process termed ISGylation—in order for MDA5 to effectively activate and unleash antiviral actors against invaders. They showed that ISGylation helps to promote the formation of larger MDA5 protein complexes, which ultimately results in a more robust immune response against a range of viruses. "While discovery of a novel mechanism of immune activation is exciting on its own," Gack said, "we also discovered a bit of bad news, which is that SARS-CoV-2 also understands how the mechanism works, considering it has already developed a strategy to block it." The research team shows that the coronavirus enzyme PLpro physically interacts with the receptor MDA5 and inhibits the ISGylation process. "We're already looking forward to the next phase of study to investigate whether blocking PLpro's enzymatic function, or its interaction with MDA5, will help strengthen the human immune response against the virus," Gack said. "If so, PLpro would certainly be an attractive target for future anti-COVID-19 therapeutics."
10.1038/s41564-021-00884-1
Medicine
Researchers uncover new insights into why individuals are affected differently by COVID-19 infection
Systems genetics identifies miRNA-mediated regulation of host response in COVID-19, Human Genomics (2023). DOI: 10.1186/s40246-023-00494-4
https://dx.doi.org/10.1186/s40246-023-00494-4
https://medicalxpress.com/news/2023-06-uncover-insights-individuals-affected-differently.html
Abstract Background Individuals infected with SARS-CoV-2 vary greatly in their disease severity, ranging from asymptomatic infection to severe disease. The regulation of gene expression is an important mechanism in the host immune response and can modulate the outcome of the disease. miRNAs play important roles in post-transcriptional regulation, with consequences for downstream molecular and cellular host immune response processes. The nature and magnitude of miRNA perturbations associated with blood phenotypes and intensive care unit (ICU) admission in COVID-19 are poorly understood. Results We combined multi-omics profiling—genotyping, miRNA and RNA expression, measured at the time of hospital admission soon after the onset of COVID-19 symptoms—with phenotypes from electronic health records to understand how miRNA expression contributes to variation in disease severity in a diverse cohort of 259 unvaccinated patients in Abu Dhabi, United Arab Emirates. We analyzed 62 clinical variables and expression levels of 632 miRNAs measured at admission and identified 97 miRNAs associated with 8 blood phenotypes significantly associated with later ICU admission. Integrative miRNA-mRNA cross-correlation analysis identified multiple miRNA-mRNA-blood endophenotype associations and revealed the effect of miR-143-3p on neutrophil count mediated by the expression of its target gene BCL2. We report 168 significant cis -miRNA expression quantitative trait loci, 57 of which implicate miRNAs associated with either ICU admission or a blood endophenotype. Conclusions This systems genetics study has given rise to a genomic picture of the architecture of whole blood miRNAs in unvaccinated COVID-19 patients and pinpoints post-transcriptional regulation as a potential mechanism that impacts blood traits underlying COVID-19 severity. The results also highlight the impact of host genetic regulatory control of miRNA expression in early stages of COVID-19 disease. Introduction The three years since the emergence of SARS-CoV-2 have brought unprecedented progress in our scientific understanding of SARS-CoV-2 infection and COVID-19 disease. However, one overarching question remains: why do individuals infected with SARS-CoV-2 vary in their clinical symptomatology, from asymptomatic infection to severe, and oftentimes lethal, disease [ 1 , 2 ]? The answer to this complex question lies in layers of genetic, biological, environmental, and social factors. Previous studies have found that both men and older patients, as well as those with underlying medical conditions such as diabetes, hypertension and obesity, are at a higher risk for severe disease, requirement of intensive care and death [ 3 - 6 ]. Studies have also identified a number of blood phenotypes associated with severe disease, including elevated levels of D-dimers, C-reactive protein (CRP), neutrophil-to-lymphocyte ratio (NLR), Interleukin 6 (IL-6), IL-10, lactate dehydrogenase (LDH), procalcitonin and albumin [ 7 - 14 ]. Neutrophils have been found to play a critical role in the pathophysiology of COVID-19 [ 15 - 17 ], with activation of circulating neutrophils—as observed in transcriptomic data—pinpointed as a predictor of clinical illness in COVID-19 [ 18 , 19 ]. Genetic variation has also been shown to influence COVID-19 susceptibility, severity and clinical outcomes [ 20 , 21 ]. 
While there has been extensive research to unpack the different sources of variation that influence COVID-19 disease severity, only a few studies to date have focused on the potential roles of human-encoded microRNA (miRNA). miRNAs are a class of small, non-coding RNAs that regulate gene expression by binding to complementary mRNA transcripts to either block translation or mark the target mRNA for degradation [ 22 , 23 ]. miRNAs can regulate both neighboring and distal genes; one miRNA can regulate either one or multiple genes; and multiple miRNAs can target the same gene in either a synergistic or antagonistic manner [ 24 ]. Since regulated miRNA expression is crucial for the differentiation, activation and survival of immune cells [ 25 ], dysregulated miRNA expression can be indicative of aberrant immune function, and has been implicated in numerous diseases including cancers, inflammatory disorders and malaria [ 26 - 28 ]. miRNA expression is also influenced by host genetics, with a few studies describing genetic variation associated with miRNA expression in healthy donors and disease contexts like malaria and cancer [ 29 - 32 ]. Despite the contributions of miRNAs to immune function, our understanding of the roles of miRNAs in response to SARS-CoV-2 is still in its nascency. There are a number of studies (reviewed in Geraylow et al. [ 33 ]) that have identified aberrant miRNA expression during COVID-19 disease progression. Farr and colleagues reported the differential expression of 55 miRNAs between COVID-19 patients during the early stage of disease and healthy donors matched for age and gender [ 34 ]; Fernández-Pato and colleagues identified 200 differentially expressed miRNAs between COVID-19 patients and healthy controls, which were also correlated with proinflammatory cytokines such as IL-6, IL-12, IP-10, and TNFɑ [ 35 ]; Pinacchio and colleagues highlighted increased levels of miR-122a and miR-146a in the serum of COVID-19 patients compared to controls, and reported a negative correlation between miR-146a and Interferon alpha-inducible protein 27 (IFI-27) [ 36 ]; de Gonzalo-Calvo et al. identified 10 miRNAs that were dysregulated in hospitalized patients admitted to the intensive care unit (ICU), compared to patients that did not require ICU care, reported correlations between miRNA levels and length of ICU stay, and found that the expression of miR-192-5p and miR-323a-3p differentiated ICU non-survivors from survivors [ 37 ]; Li and colleagues used Mendelian randomization to pinpoint two miRNAs (hsa-miR-30a-3p and hsa-miR-139-5p) as potentially causal for COVID-19 severity [ 38 ]; and early in the pandemic, Kim and colleagues identified five miRNAs (hsa-miR-15b-5p, hsa-miR-195-5p, hsa-miR-221-3p, hsa-miR-140-3p, and hsa-miR-422a) predicted to commonly bind the SARS-CoV, MERS-CoV and SARS-CoV-2 viruses, and showed that they were differentially expressed in hamster lung tissues before and after SARS-CoV-2 infection [ 39 ]. Importantly, many of the miRNAs highlighted across these studies were shown to be enriched in inflammatory and antiviral immune response pathways [ 33 ]. Another set of studies has focused on uncovering the mechanisms behind miRNA regulation. 
Latini and colleagues showed a functional role for hsa-let7b-5p in modulating levels of ACE2 and DPP4—two receptors that play an important role in the onset and progression of COVID-19 disease—and established that low expression of this miRNA was associated with ACE2 and DPP4 overexpression in naso-oropharyngeal swabs in COVID-19 patients [ 40 ]. Meanwhile, seeking to better understand the mechanism behind neurological symptoms in COVID-19, Trampuž and colleagues highlighted 98 miRNAs that have been implicated in both COVID-19 and one of five neurological disorders [ 41 ]. Together, these studies implicate miRNAs in the human immune response to COVID-19 infection; however, most of them only included patient populations from Australia, Europe and North America, and none examined the effect of genome-wide genetic variation on host miRNA expression during SARS-CoV-2 infection. In this study, we generated and analyzed a multi-omics dataset—genotypes, miRNA and mRNA expression—and phenotypes derived from electronic health records (EHRs) to understand the genetic and biological underpinnings of ICU admission and its associated blood phenotypes for a diverse group of 259 unvaccinated COVID-19 patients in Abu Dhabi, the United Arab Emirates (UAE). This systems genetics approach revealed miRNAs associated with blood traits underlying COVID-19 disease severity and progression and provided evidence for the role of post-transcriptional regulation of neutrophils in COVID-19. We also report the impact of host genetic regulatory variation on miRNA expression traits, supporting the hypothesis that severity of COVID-19 is under host genetic control of post-transcriptional events in circulating immune cells. Results Systems genetics to study early stages of COVID-19 in a diverse unvaccinated cohort To understand the clinical and biological factors underpinning COVID-19 disease severity, we analyzed EHR data for 259 unvaccinated patients and multi-omics data—genotypes, miRNA and RNA expression—for a subset of 96 patients (Fig. 1 A). Among the 259 patients, 61 were admitted to the ICU (23.6%) at some point during their hospital stay; 65.3% of patients identified as male; the average age was 46.9 (SD = 14.2); and patients were predominantly from the Middle East and North Africa (MENA, 54.4%) and Southeast Asia (40.0%) (Additional file 2 : Table S2A, see Additional file 1 : Note 1 for classification of nationalities into regions). Around 75% of the cohort had at least one pre-existing condition, most commonly hypertension (47.1%) or diabetes (41.3%) (Additional file 2 : Table S2A). The most common symptoms reported at the time of hospital admission were fever (54.4%) and cough (53.3%) (Additional file 2 : Table S2B). Of the 259 patients, 96 were selected for miRNA and mRNA sequencing and genotyping (see Methods for selection criteria), including 29 patients (30.2%) who were admitted to the ICU. For technical reasons, RNA-seq data was not available for 2 of the 96 individuals. Notably, the distribution of demographics, pre-existing conditions and symptoms did not significantly differ between the full sample ( n = 259) and the miRNA subset ( n = 96) (Additional file 2 : Table S2A). Fig. 1 Clinical variables and miRNA levels at the time of hospital admission, prior to any clinical intervention or treatment, are associated with later ICU admission for COVID-19 patients. A Study design. 
B Significant Pearson correlations (FDR < 0.05) between 18 factors from electronic health records and ICU admission. C Volcano plots of miRNAs associated with ICU admission. Age and self-reported time from symptom onset to hospital admission were used as covariates in a logistic regression model. miRNAs significant at p < 0.05 are highlighted in blue, and miRNAs significant at p < 0.01 are highlighted in red, with the 5 most significant miRNAs labeled Full size image Several clinical variables and miRNA levels at the time of hospital admission are associated with later ICU admission To identify factors associated with COVID-19 disease severity—using ICU admission as a proxy—we computed correlations between 62 variables from the EHR and ICU admission (see Additional file 1 : Note 2 for a list of the 62 factors) in the full dataset of 259 individuals, and identified 18 significant correlations (FDR < 0.05). Being male ( r = 0.20), from Southeast Asia ( r = 0.18), and previously diagnosed with acute kidney failure ( r = 0.21), sepsis ( r = 0.16) or myocardial infarction ( r = 0.16) were all positively correlated with ICU admission. At the time of hospital admission, self-reported symptoms of fever ( r = 0.17) and cough ( r = 0.22), as well as clinician-recorded body temperature ( r = 0.22), oxygen saturation ( r = − 0.32) and respiratory rate ( r = 0.39), were all correlated with later ICU admission. Notably, we identified 8 blood phenotypes significantly correlated with ICU admission, of which urea ( r = 0.17), CRP ( r = 0.33), IL-6 ( r = 0.40), absolute neutrophil number ( r = 0.30), NLR ( r = 0.37) and D-dimers ( r = 0.28) were positively correlated, while chloride ( r = − 0.26) and absolute lymphocyte number ( r = − 0.31) were negatively correlated (Fig. 1 B; Additional file 2 : Table S3A–B). Some of these associations are not independent, considering the correlation between some of the blood phenotypes (Additional file 1 : Fig. S1). Interestingly, the viral load at the time of hospital admission was not associated with later ICU admission in our dataset. To understand whether miRNA levels at the time of hospital admission are associated with later ICU admission, we performed logistic regression analyses for each of the 632 miRNAs (see Methods for details on quality control of miRNA-seq data, Additional file 1 : Fig. S2) in the sub-sample of 96 individuals, controlling for age and self-reported time from symptom onset to hospital admission (we did not control for gender because out of the 27 ICU patients, only 3 identified as women, Additional file 2 : Table S3B). We identified 21 miRNAs whose levels at the time of hospital admission—10 downregulated, and 11 upregulated—were significantly associated ( p < 0.01) with later ICU admission (Fig. 1 C; Additional file 2 : Table S4A; see results from a model adjusted for gender in Additional file 1 : Fig. S4, Additional file 2 : Table S4B). To understand the biological significance of the 21 ICU-associated miRNAs, we sought to identify their putative mRNA targets. After computing the Spearman correlations between the levels of the 21 miRNAs and 44,586 unique mRNA transcripts measured from the same blood sample collected at hospital admission (see Methods for details on quality control of RNA-seq data, Additional file 1 : Fig. S4), we identified 15,336 correlated miRNA-mRNA pairs (FDR < 0.05), of which 6490 were negatively correlated (mean r = − 0.37, SD = 0.05), concerning 12 miRNAs (Additional file 1 : Fig. S5A–B). 
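The analysis steps just described—a per-miRNA logistic regression of ICU admission on miRNA level with age and time-from-onset as covariates, followed by multiple-testing control—can be summarized in a short sketch. The Python snippet below is a hypothetical reconstruction of that workflow, not the authors' published pipeline; the DataFrame layout and column names (icu, age, onset_to_admission) are assumptions, and Benjamini–Hochberg FDR is shown as the correction step.

```python
# Hypothetical sketch of a per-miRNA association scan: for each miRNA,
# fit logistic(ICU ~ miRNA + age + onset_to_admission), then correct
# the per-miRNA p values with Benjamini-Hochberg FDR.
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.stats.multitest import multipletests

def mirna_icu_scan(df, mirna_cols):
    """df: one row per patient; standardized miRNA columns plus
    'icu' (0/1), 'age' and 'onset_to_admission' covariates (assumed names)."""
    pvals, betas = [], []
    for m in mirna_cols:
        X = sm.add_constant(df[[m, "age", "onset_to_admission"]])
        fit = sm.Logit(df["icu"], X).fit(disp=0)
        betas.append(fit.params[m])   # log-odds per SD of miRNA level
        pvals.append(fit.pvalues[m])
    rej, qvals, _, _ = multipletests(pvals, alpha=0.05, method="fdr_bh")
    return pd.DataFrame({"miRNA": mirna_cols, "beta": betas,
                         "p": pvals, "q": qvals, "fdr_sig": rej})

# Example usage (hypothetical column naming):
# results = mirna_icu_scan(df, [c for c in df.columns if c.startswith("hsa-")])
```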
Using IPA miRNA Target Prediction, we annotated the experimentally validated and/or highly predicted gene targets of the 12 miRNAs of interest (data was not available for 2 miRNAs), and found that 18 highly predicted miRNA-gene target pairs (6 miRNAs, 18 genes) were negatively correlated in our dataset (Additional file 2: Table S5; Additional file 1: Fig. S5C–H). These findings suggest that beyond clinical variables and blood phenotypes, miRNAs may also play a role in the host immune response to early COVID-19 infection. However, admission to the ICU is a complex phenotype resulting from both the patient's clinical manifestation and the decision making of their clinical team. As such, ICU-associated miRNAs may not be most informative of underlying cellular mechanisms, which is why we next turned to study the miRNA architecture of the 8 ICU-associated blood endophenotypes. Numerous miRNAs are associated with ICU-associated phenotypes measured at the time of hospital admission To identify miRNAs associated with the 8 ICU-correlated blood endophenotypes, we performed linear regression analyses of the 632 miRNAs with each of the 8 blood endophenotypes, using standardized miRNA and blood phenotype levels, and adjusting for age and gender. We identified 9 miRNAs significantly associated with urea, 5 with chloride, 32 with CRP, 28 with neutrophil count (Fig. 2A), 20 with lymphocyte count, 18 with NLR, 16 with D-dimers (p < 0.01), and no miRNAs significantly associated with IL-6 (Additional file 1: Fig. S6; Additional file 2: Table S6A). We also quantified 5 of these miRNAs with qPCR, and found consistent associations between these miRNAs and their associated blood phenotypes in all 15 cases, of which 10 were also significant with qPCR-based data (Additional file 2: Table S6B). A total of 97 unique miRNAs were associated with at least one blood phenotype (for a comparison of results between models adjusted for age and gender, and models with no covariates, see Additional file 1: Fig. S7). In fact, we found that out of the 21 ICU-associated miRNAs, 5 were also associated with at least one blood phenotype: hsa-miR-4443, hsa-miR-450b-5p, p-hsa-miR-14, hsa-miR-150-3p, and hsa-miR-3615 (Additional file 1: Fig. S8), and noticed that many miRNAs were associated with more than one blood endophenotype, with neutrophil count and CRP sharing the maximum of 12 significant miRNA associations (Additional file 1: Fig. S9). These findings indicate that miRNAs may contribute to the complex biological mechanisms that regulate blood phenotypes during early stages of COVID-19 infection. Fig. 2 The positive association between hsa-miR-143-3p and neutrophil count is mediated by BCL2 expression. A Numerous miRNAs are associated with neutrophil count, including hsa-miR-143-3p (labeled). Both miRNA expression and blood phenotype levels were measured from the same blood sample, collected at the time of hospital admission. miRNAs significant at p < 0.05 are highlighted in blue. miRNAs significant at p < 0.01 are highlighted in red. Both miRNA expression and blood phenotype levels were standardized. B Correlation between hsa-miR-143-3p expression (x-axis) and BCL2 transcript expression (y-axis). C Correlation between BCL2 transcript expression (x-axis) and absolute neutrophil count (y-axis). The Pearson correlation and p value are in blue.
miRNA expression, transcript expression and neutrophil count have all been standardized. To understand the regulatory roles of the miRNAs associated with blood endophenotypes, we calculated Spearman correlations between the 97 miRNAs and 44,586 unique mRNA transcripts, and identified 50,427 significant negative correlations (FDR < 0.05), with a mean of −0.37 (SD = 0.07), concerning 37 unique miRNAs (Additional file 1: Fig. S10). Using the IPA miRNA Target Prediction tool, we annotated the experimentally observed and highly predicted gene targets of the 37 miRNAs (data was not available for 9 miRNAs). We identified 16 experimentally observed miRNA-gene pairs that were negatively correlated in our dataset, corresponding to 16 genes targeted by 4 miRNAs—hsa-miR-21-5p, hsa-miR-338-5p, hsa-miR-199b-5p and hsa-miR-143-3p—most of which were associated with CRP, neutrophil count and NLR. We also observed 184 highly predicted miRNA-gene targets (20 miRNAs, 184 gene targets) that were negatively correlated in our dataset (Additional file 2: Table S7; Additional file 1: Fig. S11). Using IPA pathway enrichment analysis, we found that the 197 unique genes—pooled across experimentally observed and highly predicted gene targets—were implicated in MYC-mediated apoptosis signaling, crosstalk between dendritic cells and natural killer cells, and p53 signaling, among others, and enriched in cancer, infectious disease and immunological disease (Additional file 2: Table S8A–B). Overall, these results imply that miRNAs associated with ICU-associated blood endophenotypes at the time of hospital admission putatively regulate genes involved in apoptotic and immunological pathways. To test whether the effect of miRNAs on blood phenotypes is mediated by their regulation of gene expression, we performed mediation analysis on the 599 unique triplets of miRNA, gene target and associated blood phenotype reported in Additional file 2: Table S7 (for genes with multiple transcripts, we only tested the transcript with the lowest p value). We identified 74 Bonferroni-significant mediations (p < 0.05), 8 of which concerned experimentally observed miRNA-gene target pairs. Most interestingly, we found that hsa-miR-143-3p—which is highly expressed in neutrophils (Juzenas et al. [42], Additional file 1: Fig. S12)—affects neutrophil count and NLR through the expression of BCL2, an apoptotic gene that regulates cell death (Fig. 2B, C; the negative correlation between hsa-miR-143-3p and BCL2 was also replicated in qPCR, r = −0.38, p = 2.4 × 10⁻⁸). We observed a few other notable examples: hsa-miR-199b-5p affecting NLR through the expression of ETS1, a transcription factor; hsa-miR-21-5p affecting neutrophil count and NLR through the expression of FASLG, an apoptosis-inducing transmembrane protein, as well as affecting NLR through the expression of TNF, a proinflammatory cytokine involved in cell proliferation, differentiation and apoptosis; and lastly, hsa-miR-338-5p influencing NLR by regulating BACE1, a protein involved in the proteolytic processing of the amyloid precursor protein (Additional file 2: Table S9). These patterns in our data provide further evidence for the role of miRNAs in regulating the expression of genes that are important for the host immune response to infection, and therefore, for response to SARS-CoV-2.
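As a concrete illustration of one such mediation test, here is a minimal R sketch using the mediate() function from the mediation package (the function named in the Methods, with linear mediator and outcome models and bootstrapped ACME). The data frame d and its columns mir143, bcl2 and neut are hypothetical stand-ins for standardized hsa-miR-143-3p expression, BCL2 expression and neutrophil count:

library(mediation)

# Mediator model: does the miRNA predict the putative gene target?
med_fit <- lm(bcl2 ~ mir143, data = d)
# Outcome model: does the gene target predict the blood phenotype,
# holding the miRNA constant?
out_fit <- lm(neut ~ mir143 + bcl2, data = d)
# Bootstrapped causal mediation: reports the average causal mediation
# effect (ACME), its confidence interval and a bootstrapped p value.
med <- mediate(med_fit, out_fit, treat = "mir143", mediator = "bcl2",
               boot = TRUE, sims = 1000)
summary(med)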
To further probe the relationship between miRNA expression and COVID-19 disease severity, we tested the association between miRNAs and ICU-associated clinical signs and self-reported symptoms at hospital admission (see Additional file 1: Note 2 for the full list of variables). We identified 2 miRNAs significantly associated with body temperature, 4 with oxygen saturation, and 16 with respiratory rate (p < 0.01) (Additional file 1: Fig. S13A–C; Additional file 2: Table S10A). Classifying individuals with 4 or more self-reported symptoms at hospital admission (out of 13) as highly symptomatic (note that the threshold of 4 symptoms was chosen because 4 is both the median and the mean of the number of symptoms reported), we found only 1 miRNA associated with being highly symptomatic (Additional file 1: Fig. S13D; Additional file 2: Table S10B). Some of the implicated miRNAs are genetically controlled by nearby genetic variants Lastly, we tested whether genetic variation influenced the expression of miRNAs associated with the 8 blood endophenotypes. We performed cis-eQTL analysis using expression levels for 632 miRNAs and SNP data from 91 individuals (Additional file 1: Fig. S14; see Methods for details on quality control of genotyping data). For each miRNA, we tested between 1 and 671 SNPs, depending on the density of the SNP array within 300,000 base pairs (bp) of the miRNA (59 miRNAs were excluded from this analysis since there were no SNPs within this window). We identified a total of 168 significant cis-eQTLs (Bonferroni p < 0.05), of which 57 concerned 28 unique miRNAs that were associated with either ICU admission or an ICU-associated blood endophenotype (Additional file 2: Table S11). For each of the 28 miRNAs, we annotated the SNP with the lowest Bonferroni p value as the top SNP, and used wANNOVAR [43] to assign SNPs to genes, resulting in 28 top cis-eQTLs, each consisting of an e-SNP and an e-miRNA (Table 1, Fig. 3A). These cis-eQTLs reflect a significant linear relationship between the genotype (number of minor alleles) and miRNA expression levels (Fig. 3B, E, G), with 6 of the peak e-SNPs found within a 1000 bp window of the miRNA (Fig. 3D), and other e-SNPs found closely downstream (Fig. 3F) or upstream (Fig. 3H). In most cases, the top e-SNP was near other significant e-SNPs, likely due to linkage disequilibrium (e.g. Fig. 3F). The volcano plots and fine-mapping plots for other cis-eQTLs can be found in Additional file 1: Fig. S15. To test whether the effect of the e-SNP on the blood phenotype was mediated by e-miRNA expression, we performed mediation analysis for the 28 cis-eQTLs and their associated blood phenotypes, a total of 39 unique triplets of e-SNP, e-miRNA and blood phenotype. We identified 2 e-SNPs whose associations with neutrophil traits are mediated through miRNA expression (p < 0.00128, using a Bonferroni threshold): rs1256522 has effects on both neutrophil count and the neutrophil/lymphocyte ratio that are mediated through the expression of hsa-miR-625-3p, and rs79260648 has an effect on the neutrophil/lymphocyte ratio mediated through the expression of hsa-miR-576-3p (Additional file 2: Table S12). Altogether, these results show that allelic variation influences the expression levels of miRNAs that contribute to the variation of ICU-associated blood endophenotypes during early stages of SARS-CoV-2 infection. Table 1 List of top cis-eQTLs for 29 miRNAs associated with one of the 8 blood phenotypes correlated with ICU admission
Fig. 3 Expression of numerous miRNAs is genetically controlled by cis-eQTLs. A Manhattan plots showing all cis-eQTLs (defined as an association between a miRNA and a SNP within a 300,000 base pair window). Points highlighted in pink show cis-eQTLs for miRNAs associated with one of the 8 blood phenotypes. The dashed line corresponds to Bonferroni p < 0.05, and all points above the dashed line are significant cis-eQTLs. Labeled points refer to cis-eQTLs with Bonferroni p < 0.05, and some of the top cis-eQTLs are annotated with the e-miRNA. B–H Pairs of violin and fine-mapping plots for cis-eQTLs. Each violin plot shows the linear relationship between the number of minor alleles and the miRNA expression associated with each genotype. The dashed line corresponds to the linear regression fit, and the p value is stated on the plot. Each fine-mapping plot shows all tested SNPs for the miRNA. Points highlighted in blue show e-SNPs significant at Bonferroni p < 0.05. The labeled point shows the top e-SNP for that cis-eQTL. The pink diamond shows the genomic position of the miRNA. B–C cis-eQTL hsa-miR-5189-5p and rs34088055. E–F cis-eQTL rs1256522 and hsa-miR-625-3p. G–H cis-eQTL rs1434282 and hsa-miR-181a-3p. Discussion In the three years since the emergence of SARS-CoV-2, scientific undertakings have improved our understanding of the host immune response to infection, and have led to the development of effective vaccines and improved treatments for COVID-19 disease. Yet the question of what contributes to the variation in COVID-19 disease severity—ranging from asymptomatic to severely symptomatic, and sometimes, lethal infection—remains open. In this study, we pinpoint miRNA expression as a previously underappreciated mechanism for regulating blood phenotype levels during early stages of COVID-19; because these levels correlate with later ICU admission, they can be indicative of disease severity. We highlight 21 miRNAs whose levels at the time of hospital admission are associated with ICU admission, and 97 miRNAs associated with ICU-correlated blood endophenotypes, of which 5 are associated with both ICU admission and an endophenotype. Many of these miRNAs have been reported as differentially expressed between COVID-19 patients and healthy controls (Additional file 1: Note 3, [34, 35, 39, 40, 44-47]). Through integrative miRNA-mRNA analysis, we identify 194 experimentally observed or highly predicted miRNA-gene target pairs that are negatively correlated in our dataset, and, using mediation analysis, find 8 instances where the miRNA likely affects neutrophil counts through transcriptional regulation of immune cell apoptosis. We furthermore describe the role of genetic variation in shaping miRNA expression levels—we characterize 28 top cis-eQTLs, and, using mediation analysis, document 3 examples of SNPs influencing neutrophil counts, mediated by miRNA expression. Our results highlight two interesting aspects of the biology of SARS-CoV-2 infection that warrant further investigation. The first is the role of neutrophils in early response to SARS-CoV-2 infection. Not only were both neutrophil counts and the neutrophil/lymphocyte ratio at the time of hospital admission correlated with later ICU admission, but most of the miRNAs that negatively co-varied with their experimentally validated gene targets in our dataset were also associated with these two blood traits.
We found that four of these miRNAs—hsa-miR-143-3p, hsa-miR-199b, hsa-miR-21-5p and hsa-miR-338-5p—affect these blood endophenotypes by regulating BCL2, ETS1, FASLG, TNF and BACE1, many of which are involved in apoptotic (BCL2, FASLG) and immune-related pathways (BCL2, TNF). These trends in our data are consistent with prior studies that have shown enrichment of COVID-19-associated miRNAs in inflammatory and immune pathways [33] and highlighted neutrophils as key players in COVID-19 pathophysiology [15-17]. Secondly, some of the miRNAs highlighted in our study are strong candidates for functional follow-up. For instance, hsa-miR-143-3p—which affects neutrophil counts mediated by BCL2 expression—has been implicated in numerous immunological diseases such as cancers and ischemic stroke [48-50], has been found to influence inflammatory factors and cell apoptosis [51, 52], and to inhibit Wnt and MAPK signaling [53]. Similarly, hsa-miR-199b, which influences neutrophils through ETS1 expression, has previously been implicated in breast cancer [54, 55], and described as a tumor suppressor in acute myeloid leukemia [56] and an inducer of apoptosis in oral cancer [51, 57]. Another promising candidate for functional follow-up is hsa-miR-21-5p, which affects neutrophils by regulating FASLG and TNF expression, and has been previously implicated in colon, breast and gastric cancers [58-60]. This study has several strengths and some limitations. While previous studies have already pointed to genetic, miRNA and transcriptomic variation as predictive of COVID-19 disease severity, to our knowledge this is the first study to investigate the relationship between these three sources of biological variation in a matched, multi-omics dataset. This study design helped us prioritize miRNAs with miRNA-gene target correlation patterns that are consistent with a regulatory relationship, as well as identify miRNAs that are genetically controlled. Moreover, our study enrolled unvaccinated patients and investigated miRNA and RNA expression levels collected at the time of hospital admission, after a positive COVID-19 test, but before any medication or clinical intervention. This design frees our results from numerous confounding factors, many of which are abundant in recent studies enrolling vaccinated patients. Lastly, our study cohort consists entirely of patients from MENA and South Asia, two geographical regions whose populations have been under-represented in COVID-19 research, as well as genetic research at large [61, 62]. The diversity of our study cohort (Additional file 1: Fig. S11) not only powered us to detect cis-eQTLs with large effects along a larger spectrum of genetic variation, but also allowed us to present findings that are potentially generalizable to around 30% of the world population living in MENA and South Asia. A limitation of our study is the lack of matched controls, i.e. individuals uninfected with SARS-CoV-2. For this reason, we are unable to discern whether the miRNAs we report are induced only upon SARS-CoV-2 infection. Conclusion In conclusion, this systems genetics study has given rise to the first genomic picture of the architecture of whole blood miRNAs in unvaccinated COVID-19 patients. The results pinpoint post-transcriptional regulation as a potential mechanism that impacts blood traits underlying COVID-19 severity and warrant similar investigations in other populations.
The study also reveals the association between host allelic genetic regulatory variation and miRNA expression levels in the context of COVID-19. Moreover, the multi-omics analysis presented here demonstrates the value of the approach in capturing meaningful biological associations in cohorts of relatively small sample sizes. Methods Research ethics statement The project was approved by the Research Ethics Committee at New York University Abu Dhabi (NYUAD) (HRPP-2020-59) and the Abu Dhabi COVID-19 Research International Review Board Committee of the Department of Health (Ref no: DOH/CVDC/2020/874). Study participants and enrollment This was a prospective study of unvaccinated adult COVID-19 patients, in which baseline sampling and phenotyping were done at the time of hospital admission following a positive COVID-19 test. Clinical follow-up was done during the course of the hospital stay. The study enrolled 264 patients across four hospitals in Abu Dhabi, UAE: Al Ain Hospital, Al Rahba Hospital, Mafraq Hospital and Sheikh Khalifa Medical Center, which are all managed under the same healthcare system authority and therefore follow the same procedures and clinical protocols. The recruitment period lasted from June to September 2020. Electronic consent forms were administered in English, Urdu, Hindi and Tagalog, depending on the participant's preference; these four languages were selected as the most commonly spoken languages among patients in these clinics. The exclusion criteria included having Hb levels lower than 70 g/L, having a platelet count less than 100,000/µL and/or having received a transfusion within 24 h of potential study recruitment. Of the 264 patients, 256 completed a questionnaire upon enrollment which asked about the date of symptom onset and the experienced symptoms. Participants also provided biological specimens (whole blood samples for genotyping and miRNA/RNA extraction, and saliva for viral load quantification) at the time of hospital admission. Sample collection was done following unified procedures, and samples were randomized throughout downstream experiments to avoid potential batch effects. At the end of the study recruitment period, electronic health records (EHRs) were extracted for 259 patients (hereafter referred to as the "study cohort"), which included information about demographics, pre-existing conditions and extensive documentation of their COVID-19-related hospital stay, including physical measurements and lab tests. All clinical and biological data was de-identified. All analyses including correlations between EHR variables—for example, correlations between blood phenotypes and ICU admission—were conducted in the full sample of 259 patients. RNA extraction In the clinic, blood was collected in Tempus tubes and refrigerated at 4 °C before being transported to the research labs at NYU Abu Dhabi (NYUAD). Whole blood RNA was isolated using the Tempus Spin RNA Isolation Kit (Thermo Fisher) following the manufacturer's instructions. Quantification and quality control of the extracted RNA were performed using a 2100 Bioanalyzer instrument and a Qubit 2.0 Fluorometer. Selection of the miRNA subsample miRNA sequencing was performed on 96 patient blood samples collected at the time of hospital admission, prior to any clinical interventions or treatments.
We focused on 96 patients from the full set of 259 participants, maximizing the number of ICU patients from MENA and South Asia—populations that are underrepresented in existing COVID-19 studies and well-represented in our study cohort—and the number of individuals with complete clinical data. To maximize matching between patients admitted to the ICU and those who were not, we kept all patients who identified as women (since they comprised only 35% of the full sample), and all patients who identified as men and were admitted to the ICU; note that within the EHR in the UAE, sex is reported as either "man" or "woman". Finally, we prioritized patients with complete relevant clinical data in their EHR. All analyses including miRNA and RNA expression were conducted in this sub-sample of 96 patients. miRNA sequencing Small RNA libraries were prepared from 400 ng of high-quality total RNA (RNA Integrity Number, RIN > 8) using the NEBNext Multiplex Small RNA Library Prep Set for Illumina (New England Biolabs). Size selection of small RNA cDNA libraries was done using the gel purification method. The library size distribution was checked using a 2100 Bioanalyzer instrument to ensure that correct-size amplicons were selected for sequencing. All samples and libraries were randomized and processed in the same way to minimize batch effects. Individual libraries were quantified, and equimolar quantities of each library were pooled and sequenced on an S1 flow cell using a NovaSeq instrument (Illumina). The miRNA data is deposited in GEO under accession GSE220077. Bioinformatic analysis of miRNA data Raw miRNA sequencing reads were demultiplexed and converted to FASTQ files using the standard Illumina pipeline with bcl2fastq. Sequences were processed with Trimmomatic v0.36 to remove indexes and adapter sequences. Trimmed reads were processed with the FASTX Toolkit v0.0.14 to filter out read tails with quality < 15 and retain reads of 16–25 nucleotides for downstream analyses. FastQC v0.11.5 was used to visualize quality metrics before and after the filtering. High-quality reads were subject to small RNA annotation and quantification using OASIS [63, 64]. miRNAs with a minimum count of 5 reads in at least 50% of the samples were retained, log2 transformed and standardized (mean = 0, SD = 1) (Additional file 1: Fig. S2). For each miRNA, individuals who were outliers for miRNA expression—having expression values more than 3 standard deviations above or below the mean—were removed. Replication of miRNA expression with qPCR The expression of five miRNAs (hsa-miR-21-5p, assay ID: 000397; hsa-miR-143-3p, assay ID: 002249; hsa-miR-150-3p, assay ID: 002637; hsa-miR-625-3p, assay ID: 002432; and hsa-miR-5189-3p, assay ID: 466901_mat, all from Thermo) was validated using quantitative PCR (qPCR). Total RNA was extracted from 92 samples (leaving 4 spaces in a 96-well plate for negative controls), which were the same as those used for miRNA sequencing, and re-quantified using the Qubit RNA Broad Range Kit (Thermo Fisher Scientific). Of those, 88 samples passed initial quality control. Next, 20 ng of RNA from each sample was reverse transcribed using the TaqMan MicroRNA Reverse Transcription Kit following the manufacturer's instructions. After cDNA synthesis, miRNAs were pre-amplified using a mixture of 1 µL of PreAmp master mix (Standard Biotools), 1.25 µL of pooled TaqMan miRNA Assays (0.2X), 1.5 µL of water, and 1.25 µL of cDNA.
The pre-amplification reaction was cycled under the following conditions: 95 °C for 2 min, followed by 14 cycles of 95 °C for 15 s and 60 °C for 4 min, and finally held at 4 °C. The pre-amplified reactions (5 µL) were then diluted in a 96-well plate with 20 µL of low TE buffer (Thermo Fisher Scientific). Finally, qPCR of the miRNAs was performed using the Gene Expression with the 192.24 IFC using Fast TaqMan Assays protocol (PN 100-6174 C1, Standard Biotools) on the Juno and Biomark HD instruments for sample and assay loading and qPCR, respectively. RNA sequencing RNA sequencing was performed on RNA extracted from the whole blood of the 96 patients with miRNA data generated. The extracted RNA samples (400 ng) were subject to globin and ribosomal RNA depletion using the NEBNext® Globin & rRNA Depletion Kit (as per the manufacturer's protocol; New England Biolabs). Preparation of cDNA libraries was subsequently performed using the NEBNext® Ultra II Library Prep Kit for Illumina (New England Biolabs). Libraries were checked for quality and quantified with a 2100 Bioanalyzer instrument, and then pooled into one lane of an S2 flow cell and 101-bp paired-end sequenced on a NovaSeq instrument (Illumina) in XP mode. The mRNA data is deposited in GEO under accession GSE220076. Bioinformatic analysis of RNA-seq data Raw reads were processed for quality control: first, using Trimmomatic v0.36 to remove adapter sequences and low-quality bases (using the parameters ILLUMINACLIP: trimmomatic_adapter.fa:2:30:10 TRAILING:3 LEADING:3 SLIDINGWINDOW:4:15 MINLEN:36), and then, using the Fastp program to remove sequencing artifacts and poly-G tails. Filtered reads were then mapped to the human reference genome (Ensembl GRCh38.p4 release-81) using HISAT v2.0.4 with default options other than --dta. The resulting SAM output was converted to sorted BAM using SAMtools v1.3.1. Raw counts per gene were calculated from the sorted BAM for individual samples with the htseq-count program (options -s no -t exon -i gene_id). Transcript abundance quantification was performed using Stringtie v1.3.0, and raw gene counts from htseq-count were converted to TPM (transcripts per million) using the COEX-Seq R Shiny app. Transcripts with a minimum of 1 TPM in 50% of the samples were retained for downstream analyses, resulting in 44,629 unique transcripts that mapped to 15,545 unique genes. The RNA-seq data was then log10 transformed and standardized (mean = 0, SD = 1) (Additional file 1: Fig. S4). Following the same pipeline as for miRNA filtering, for each transcript we removed individuals who were outliers for transcript expression. Viral load quantification Automated extraction of viral RNA from 300 μL of patients' saliva was performed using the Chemagic 360 automated nucleic acid extraction system (2024-0020, Perkin Elmer, Waltham, MA, USA) and the Chemagic Viral DNA/RNA 300 Kit H96 (CMG-1033S, Perkin Elmer, Waltham, USA) according to the manufacturer's instructions. RNA was eluted in 80 μL elution buffer followed by reverse transcription (RT), preamplification and quantitative PCR (qPCR) using the Fluidigm Real-Time PCR for Viral RNA Detection protocol (FLDM-00103, Fluidigm, San Francisco, CA, USA). Viral load was quantified in saliva samples from 161 COVID-19 patients using a microfluidic ultra-sensitive quantitative test [65]. Per CDC recommendations, two assays were used for SARS-CoV-2 detection: 2019-nCoV_N1 and 2019-nCoV-N2 (2019-nCoV CDC EUA Kit, 10006606, IDT).
The human RNase P (RP) assay was used as a control for RNA extraction and in RT-qPCR reactions. Each sample was analyzed using 9 replicates for N1, 9 replicates for N2, and 6 replicates for RP assays. The quantitative nature of the microfluidic test is described in detail in [65]. The Ct values were converted to copies/µL using standard curves based on 100-fold serial dilutions of Twist RNA and SARS-CoV-2 plasmids ranging from 5 to 50,000 copies/µL. Viral load was calculated as the mean viral load from the N1 and N2 assays, given the high concordance of the two assays (Pearson r = 0.91). The viral load data was log2 transformed and standardized. Genotyping Whole-genome genotyping was performed for the 96 samples selected for miRNA/RNA sequencing using the UAE Healthy Future Study [66] custom-design Axiom genotyping array, which contains > 850,000 single-nucleotide polymorphisms (SNPs) with 90% similar content to the Axiom PMDA array (ThermoFisher). 400 μl of whole blood was used for genomic DNA extraction with the Chemagic DNA isolation kit. DNA quantification and quality check were carried out using a NanoDrop spectrophotometer followed by gel electrophoresis for a DNA integrity check. Total genomic DNA (200 ng) was amplified and randomly fragmented into 25 to 125 base pair (bp) fragments. These fragments were purified, re-suspended, and hybridized to the Axiom arrays. Following hybridization, the bound target was washed under stringent conditions to remove nonspecific background caused by random ligation events. Each polymorphic nucleotide was queried via a multi-color ligation event carried out on the array surface. After ligation, the arrays were stained and imaged on the GeneTitan™ Multi-Channel Instrument, and a raw intensity data file (.CEL file) was generated for each sample. Genotype calling and quality control The raw intensity files were analyzed using Applied Biosystems Axiom™ Analysis Suite software, which automates data analysis and includes allele-calling algorithms. Following the best-practice genotyping analysis workflow, samples were filtered for dish QC ≥ 0.82, QC call rate ≥ 97%, and average call rate ≥ 98.5%. SNPs with a call rate ≥ 95% and a Fisher's linear discriminant cutoff ≥ 3.6 were retained. Bioinformatic analysis of genotyping data Genotyping data was filtered using PLINK [67] to remove SNPs with minor allele frequency (MAF) < 5%, Hardy-Weinberg Equilibrium (HWE) test p value < 0.005, genotype missingness > 10% and individual missingness > 10%, resulting in 91 individuals and 263,838 SNPs. Cis-eQTL analyses were performed using a linear model adjusted for age and gender. For each of the 632 miRNAs and 661 unique genetic positions, SNPs within 300,000 base pairs of the middle genomic position of the miRNA were tested (note that for miRNAs with more than one genetic position due to copy number variation, each cis region was tested independently), and p values were adjusted using Bonferroni correction [68]. The following PLINK flags were used: --bfile --no-pheno --allow-no-sex --chr --from-kb --to-kb --pheno --covar --covar-name --linear hide-covar --adjust --out. Principal component analysis (PCA) was performed with PLINK using genotyping data from this study (n = 96) merged with genotyping data from 2504 individuals of the 1000 Genomes Project [69], based on 238,313 common SNPs. Statistical analysis All statistical analysis and data visualization were performed using R statistical software v. 4.0.2.
Results were reported as significant if the nominal p value was < 0.01, or an adjusted p value (FDR or Bonferroni) was < 0.05, depending on the analysis, as detailed in the Results section. In figures and tables, statistical significance was reported using the following criteria: ns (p > 0.05), * (p < 0.05), ** (p < 0.01), *** (p < 0.001), and **** (p < 0.0001), except in the mediation analyses, where a Bonferroni-corrected p value threshold was used and reported as * (p < P Bonferroni), ns (p > P Bonferroni). Associations between miRNA levels at admission and ICU admission were calculated using logistic regression models adjusted for age and the time from COVID-19 symptom onset to hospital admission (the self-reported time from symptom onset to hospital admission was meant to control for the fact that hospital admission occurred at a different stage of COVID-19 disease for different patients). Associations between miRNAs and continuous blood phenotypes, both measured at admission and from the same blood sample, were calculated using linear regression models adjusted for age and gender. For continuous variables, outliers—defined as having values more than 3 SD above or below the mean—were removed, and then variables were standardized (mean = 0, SD = 1). Causal mediation analysis was performed using the mediate() function in R. The fitted models for the mediator and the outcome were linear. Results include the average causal mediation effects (ACME), the ACME confidence interval, and bootstrapped p values. Availability of data and materials The miRNA data is deposited in GEO under accession GSE220077, and the mRNA data is deposited in GEO under accession GSE220076. All code and phenotype data used for analyses is available at .
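To make the outlier handling and standardization steps described in the Statistical analysis section concrete, here is a minimal R sketch; it is an illustration under the stated 3 SD rule, not the study's released code, and the function name is invented:

# Remove per-variable outliers beyond 3 SD from the mean, then
# standardize the remaining values to mean 0, SD 1.
clean_and_scale <- function(x) {
  z <- (x - mean(x, na.rm = TRUE)) / sd(x, na.rm = TRUE)
  x[abs(z) > 3] <- NA        # outliers are dropped (set to NA)
  as.numeric(scale(x))       # standardize after outlier removal
}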
A team of researchers at NYU Abu Dhabi, led by Associate Professor of Biology Youssef Idaghdour and working in collaboration with clinicians at several Abu Dhabi hospitals, investigated the association between microRNAs, a class of small RNA molecules that regulate genes, and COVID-19 severity among 259 unvaccinated COVID-19 patients living in Abu Dhabi. The team identified microRNAs that are associated with a weakened immune response and admission to the ICU. In the process, they created the first genomic picture of the architecture of blood microRNAs in unvaccinated COVID-19 patients from the Middle East, North Africa, and South Asia, regions whose populations are consistently underrepresented in genomics research. The researchers identified changes in microRNAs at the early stages of infection that are associated with specific blood traits and immune cell death, allowing the virus to evade the immune system and proliferate. The results of the systems genetics study demonstrate that a patient's genetic make-up affects immune function and disease severity, offering new insights into how patient prognosis and treatment can be improved. Given the diversity of the sample, there is promise that these findings can be applied to approximately 30% of the world's population who reside in the MENA region and South Asia. In the study titled "Systems genetics identifies miRNA-mediated regulation of host response in COVID-19," published in the journal Human Genomics, the research team presents the results of the analysis of multiple omics datasets—genotypes, miRNA, and mRNA expression of patients at the time of hospital admission—combined with phenotypes from electronic health records. The researchers analyzed 62 clinical variables and expression levels of 632 miRNAs measured at hospital admission, and identified 97 miRNAs associated with eight blood phenotypes significantly associated with ICU admission. "These findings improve our understanding of why some patients withstand COVID-19 better than others," said Idaghdour. "This study demonstrates that microRNAs are promising biomarkers for disease severity, more broadly, and targets for therapeutic interventions. The methods of this study can be applied to other populations to further our understanding of how gene regulation can serve as a core mechanism that impacts COVID-19 and, potentially, the severity of other infections."
10.1186/s40246-023-00494-4
Biology
'Phage' fishing yields new weapon against antibiotic resistance
Benjamin K. Chan et al. Phage selection restores antibiotic sensitivity in MDR Pseudomonas aeruginosa, Scientific Reports (2016). DOI: 10.1038/srep26717 Journal information: Scientific Reports
http://dx.doi.org/10.1038/srep26717
https://phys.org/news/2016-05-phage-fishing-yields-weapon-antibiotic.html
Abstract Increasing prevalence and severity of multi-drug-resistant (MDR) bacterial infections has necessitated novel antibacterial strategies. Ideally, new approaches would target bacterial pathogens while exerting selection for reduced pathogenesis when these bacteria inevitably evolve resistance to therapeutic intervention. As an example of such a management strategy, we isolated a lytic bacteriophage, OMKO1 (family Myoviridae), of Pseudomonas aeruginosa that utilizes the outer membrane porin M (OprM) of the multidrug efflux systems MexAB and MexXY as a receptor-binding site. Results show that phage selection produces an evolutionary trade-off in MDR P. aeruginosa, whereby the evolution of bacterial resistance to phage attack changes the efflux pump mechanism, causing increased sensitivity to drugs from several antibiotic classes. Although modern phage therapy is still in its infancy, we conclude that phages, such as OMKO1, represent a new approach to phage therapy where bacteriophages exert selection for MDR bacteria to become increasingly sensitive to traditional antibiotics. This approach, using phages as targeted antibacterials, could extend the lifetime of our current antibiotics and potentially reduce the incidence of antibiotic resistant infections. Introduction Widespread and inappropriate uses of chemical antibiotics have selected for multi-drug resistant (MDR) bacterial pathogens, presenting more frequently in human infections and contributing significantly to morbidity 1, 2, 3, 4. Some bacteria even show evolved resistance to 'drugs of last resort', resulting in emergent strains that are pan-drug-resistant (PDR) 5. One example is the Gram-negative bacterium Pseudomonas aeruginosa, a prevalent opportunistic MDR pathogen that is poised to become a common PDR disease problem. Humans readily encounter P. aeruginosa, which thrives in both natural and artificial environments, varying from lakes and estuaries to hospitals and household sink drains 6. P. aeruginosa causes biofilm-mediated infections, including catheter-associated urinary tract infections, ventilator-associated pneumonia and infections related to mechanical heart valves, stents, grafts and sutures 7, 8. Individuals with cystic fibrosis, severe burns, surgical wounds and/or compromised immunity are particularly at risk for P. aeruginosa infections, especially those acquired in hospitals 9, 10, 11. P. aeruginosa infections are notoriously difficult to manage due to low antibiotic permeability of the outer membrane and mechanisms of antibiotic resistance that allow cross-resistance to multiple classes and types of antibiotics. Arguably, the most problematic of these mechanisms is antibiotic drug efflux via multi-drug efflux (Mex) systems, which extrude different antibiotics that permeate the cell. Mex systems contain three components that function via active transport to move numerous molecules, including antibiotics, out of the cell: an antiporter that functions as a transporter (e.g., MexB, MexY), an outer membrane protein that forms a surface-exposed channel (e.g., OprM) and a periplasmic membrane fusion protein that links the two proteins (e.g., MexA, MexX) 12. Because efflux systems such as MexAB-OprM and MexXY-OprM are able to efflux multiple classes of antibiotics 13 and are major contributors to increased antibiotic resistance 12, 13, 14, 15, 16, there is a pressing need to develop alternative methods for the management of antibiotic efflux in MDR P. aeruginosa 17.
One alternative for treating MDR bacterial infections is phage therapy: the use of lytic (virulent) bacteriophages (bacteria-specific viruses) as self-amplifying 'drugs' that specifically target and kill bacteria 18, 19, 20. Lytic phages bind to one or more specific receptors on the surfaces of particular bacterial hosts 18, 20, 21, allowing for a targeted approach to treating bacterial infections, one which predated widespread use of broad-spectrum chemical antibiotics 22. Due to the recent precipitous rise in antibiotic resistance, phage therapy has seen revitalized interest among Western physicians 23, buoyed by successful clinical trials demonstrating safety and efficacy 21, 24. However, an obvious limitation to phage therapy is the abundant evidence that bacteria readily evolve resistance to phage infection 25, 26. While multiple mechanisms of phage resistance exist, phage attachment to a receptor binding-site exerts selection pressure for bacteria to alter or down-regulate expression of the receptor, thereby escaping phage infection 25. Given the certainty of evolved phage-resistance, modern approaches to phage therapy must acknowledge and capitalize on this inevitability. Genetic trade-offs are often observed in biology, where organisms evolve one trait that improves fitness (a relative advantage in reproduction or survival), while simultaneously suffering reduced performance in another trait 27, 28, 29. Here we propose an evolutionary-based strategy that forces a genetic trade-off: utilize phages that drive MDR bacterial pathogens to evolve phage resistance at the cost of increased sensitivity to chemical antibiotics. Thus, this approach to phage therapy should be doubly effective; success is achieved when phages lyse the target bacterium, and success is also achieved when bacteria evolve phage resistance, because they then suffer increased sensitivity to antibiotics. We predicted that phage binding to surface-exposed OprM of the MexAB and MexXY systems of MDR P. aeruginosa would exert selection for bacteria to evolve phage resistance, while impairing the relative effectiveness of these efflux pumps to extrude chemical antibiotics. We obtained samples from six natural sources (sewage, soil, lakes, rivers, streams, compost) and enriched for phages that could infect P. aeruginosa strains PA01 and PA14, two widely used MDR P. aeruginosa models 30, 31, 32, 33. This effort yielded 42 naturally isolated phages that successfully infected both strains of MDR P. aeruginosa. To test if any of these phages could bind to OprM of the MexAB and MexXY efflux systems, we used a transposon knockout collection of bacterial mutants derived from P. aeruginosa strain PA01 34. These assays determined which bacterial mutants failed to support phage infection, because such mutants lacked the surface-expressed protein necessary for phage infection. The assays measured the efficiency of plating (EOP), defined as the ratio of phage titer (plaque-forming units [pfu] per mL) on the knockout host relative to titer on the unaltered PA01 host. EOP ≈ 1.0 would indicate that the protein associated with the knocked-out gene was irrelevant for phage binding, whereas EOP = 0 would implicate the knocked-out protein as necessary for infection. Results showed that one of the 42 phage isolates failed to infect the ΔoprM knockout strain, but successfully infected wildtype PA01 and all other tested knockout mutants.
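The EOP screening logic just described is straightforward to express; the following R sketch illustrates it with invented titer values (the numbers are placeholders for illustration only, not the study's measurements):

# EOP = phage titer (pfu/mL) on a knockout host divided by titer on
# the unaltered, phage-sensitive PA01 host.
eop <- function(titer_knockout, titer_wildtype) titer_knockout / titer_wildtype

titers_ko <- c(oprM = 0, mexA = 2.1e9, mexB = 1.8e9)  # hypothetical pfu/mL
titer_wt  <- 2.0e9                                    # hypothetical pfu/mL on PA01
round(eop(titers_ko, titer_wt), 2)
# EOP near 1 suggests the knocked-out protein is irrelevant for phage
# binding; EOP of 0 implicates it as the receptor (here, OprM).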
This phage was originally isolated from a freshwater lake sample (Dodge Pond, East Lyme, Connecticut, USA). We then experimentally evolved the phage on P. aeruginosa strain PA01 for 20 consecutive passages, where each passage consisted of 24-hour growth on naïve (non-co-evolved) bacteria grown overnight from frozen stock; this design selected for generalized improvement in phage growth but prevented the possibility of host co-evolution 27, 28. Following serial passage, we isolated a plaque-purified sample from the evolved phage population to obtain strain OMKO1 (i.e., outer-membrane-porin M knockout dependent phage #1). We conducted whole-genome sequencing analysis of this clone and determined that phage OMKO1 had a genome size of ~278 kb and belonged to the dsDNA virus family Myoviridae (genus: phiKZ-like viruses). We next tested whether resistance to phage OMKO1 caused the desired genetic trade-off between phage resistance and antibiotic sensitivity in MDR P. aeruginosa. In particular, we determined whether phage resistance allowed improved killing efficiency (decreased minimum inhibitory concentration; MIC) of four antibiotics, representing four drug classes of varying capacity for efflux via MexAB- and/or MexXY-OprM: Ceftazidime (CAZ), Ciprofloxacin (CIP), Tetracycline (TET) and Erythromycin (EM). CAZ is effluxed by the Mex system, but resistance is also inducible, determined by genetically encoded β-lactamases 35. CIP resistance can also be regulated by multiple factors such as mutations in DNA gyrase or topoisomerase IV 36, 37 in addition to efflux 38. However, resistance to TET and EM is primarily due to efflux via the MexAB- and MexXY-OprM efflux systems 16, 39. We tested effects of phage resistance on sensitivity to the four drugs in replicated assays with PA01 and PA14, as well as with three environmental strains (PAN, 1607, 1845) and three clinical isolates (PAPS, PASk, PADFU). In these assays, the phage-OMKO1-resistant strain was either a knockout mutant (∆oprM, derived from PA01) or an independently derived spontaneous mutant of the associated parental strain. Results for strain PA01 are shown in Fig. 1. In comparison, strain PA01 ∆oprM showed increased average drug sensitivity relative to PA01 in antibiotic environments where Mex systems provide primary (TET: 2.00 ± 0.00 μg/mL; EM: 4.667 ± 0.00 μg/mL) or moderate (CIP: 0.016 ± 0.00 μg/mL; CAZ: 0.210 ± 0.144 μg/mL) drug resistance. Thus, loss of OprM expression provided resistance to phage OMKO1, but caused greater sensitivity to all four drugs (fold increased sensitivity to TET, CAZ and EM: p < 0.01; CIP: p < 0.05) (cf. Fig. 1). The ratio of mean MIC for PA01 relative to that for ∆oprM was used to estimate the fold increased drug sensitivity associated with phage resistance (Fig. 1), which may be considered a baseline improvement in drug efficacy upon acquisition of phage resistance. Similar results were observed for a spontaneous phage-OMKO1-resistant mutant of PA01 when Mex systems provided primary resistance (TET and EM; Fig. 1). As a control for transposon insertion, we examined strain ∆mexR, which was also derived from PA01. Knocking out mexR, the repressor of the MexAB-OprM and MexXY-OprM operons, should not negatively alter phage sensitivity.
As expected, this control strain was phage-sensitive, and our MIC assays showed inhibitory antibiotic concentrations equivalent to or higher than those of PA01 (TET: 256.00 ± 0.00 μg/mL; EM: 256.00 ± 0.577 μg/mL; CIP: 32.00 ± 0.00 μg/mL; CAZ: 1.333 ± 0.035 μg/mL), confirming that over-expression of Mex systems improved growth in antibiotic environments where PA01 showed drug sensitivity. In addition, we examined the trade-off hypothesis in model strain PA14; for all four drugs, spontaneous phage resistance caused a statistically significant fold-increase in antibiotic sensitivity (Fig. 1). Altogether, these data showed that phage resistance led to greater drug sensitivity for antibiotics primarily controlled by Mex systems, but only sometimes improved drug efficacy when Mex systems exerted less control. Figure 1 Selection for phage resistance causes a trade-off resulting in significantly reduced Minimum Inhibitory Concentrations (MIC) for four drugs drawn from different antibiotic classes. LEFT: Average MIC ± SD of four antibiotics for phage-sensitive MDR bacteria (left column) and for spontaneous mutants of these bacteria resistant to phage OMKO1 (right column). RIGHT: Fold improvement of MIC for isolated strains resistant to OMKO1 (*p < 0.05, **p < 0.01). For comparison, data for fold-increased sensitivity of the transposon knockout PA01-∆oprM (phage resistant) are displayed as a vertical black line. Model strains PA01 and PA14, and knockout mutants derived from these strains, are useful for elucidating mechanisms such as phage binding targets. However, microbial models inevitably experience some selection for improved fitness under controlled lab conditions, creating a potential divergence from more recently isolated clinical and environmental samples. Thus, we sought to confirm whether the desired trade-off between phage-OMKO1 resistance and increased drug sensitivity occurred in environmental and clinical strains. After determining that the clinical and environmental strains were sensitive to phage OMKO1, we isolated spontaneous phage-resistant mutants of each strain and conducted MIC assays. Results (Fig. 1) confirmed that resistance to phage OMKO1 coincided with increased sensitivity of each environmental isolate to antibiotics TET and EM. Phage resistance led to greater drug sensitivity for two clinically relevant antibiotics (CAZ, CIP), with the majority of outcomes showing statistical significance (Fig. 1). Importantly, the phage-resistant mutants of all three of the clinical isolates (PAPS, PASk, PADFU) showed significantly increased drug sensitivity to the tested antibiotics. Thus, results for the environmental and clinical isolates qualitatively matched those observed in the well-characterized strains PA01 and PA14, suggesting that phage OMKO1 is generally capable of forcing the desired genetic trade-off in MDR P. aeruginosa. We further compared the effects of phage sensitivity versus resistance on P. aeruginosa fitness by examining growth kinetics of bacterial mutants in antibiotic medium when phage OMKO1 was either present or absent. We obtained bacterial growth curves by monitoring changes in optical density (OD600) in liquid culture. These assays challenged knockout strains ∆mexR and ∆oprM to grow in a TET (10 μg/mL) environment where phage OMKO1 was either present or absent. Results (Fig. 2) confirmed that over-expression of Mex systems allowed robust growth of populations founded by strain ∆mexR in the presence of TET.
However, as expected, in an identical drug environment containing phage OMKO1, these phage-sensitive populations grew roughly three-fold worse, closely resembling the ∆oprM population (Fig. 2). Phage presence did not completely eliminate ∆mexR bacteria, perhaps explained by persistence of spontaneous phage-resistant mutants that suffered the desired trade-off and failed to increase in density during the assay. Phage-resistant populations founded by strain ∆oprM showed impaired growth in the TET environment due to the knocked-out OprM component of the Mex system. As expected, presence of phage OMKO1 had no effect on growth kinetics of ∆oprM populations, because the virus was incapable of binding to these cells. In both cases, the observed weak growth of ∆oprM populations in TET environments was perhaps due to the low permeability of P. aeruginosa cell membranes, which is problematic for treatment of these infections using antibiotics alone. Figure 2 Phage OMKO1 selects against the expression of OprM and, consequently, the function of the MexAB/XY-OprM efflux systems. Average cell densities (OD600) of PA01-ΔmexR and PA01-ΔoprM over time in the presence of Tetracycline (10 mg/L) and phage OMKO1 (green and red lines). PA01 ∆mexR (blue, green) overexpresses Mex-OprM and readily grows in TET to high densities alone due to active efflux of TET (blue) but is susceptible to phage infection (green). PA01 ∆oprM grows poorly in the presence of TET (red) but is resistant to phage OMKO1 (yellow). To evaluate whether phage OMKO1 would be broadly useful in targeting P. aeruginosa strains, we examined the conservation of the MexAB- and MexXY-OprM efflux systems. To do so, we estimated effects of selection on five genes encoded in these Mex systems (oprM, mexA, mexB, mexX, mexY) using genetic data from 38 P. aeruginosa strains representing the known genetic diversity of P. aeruginosa, queried from NCBI GenBank. For each gene, this analysis measured ω (dN/dS): the ratio of the number of non-synonymous substitutions per non-synonymous site (dN) to the number of synonymous substitutions per synonymous site (dS), which is used to indicate selective pressure acting on a protein-coding gene. Results (Table 1) showed that strong stabilizing selection was acting on the oprM, mexA, mexB and mexX genes, such that none of these loci were observed to be changing under positive selection. These data indicated that the structure of the OprM protein was strongly constrained to remain stable through time; thus, phage OMKO1 should be capable of infecting a wide variety of P. aeruginosa genotypes due to genetic stability of the binding target. Furthermore, the analysis suggested a low probability of wildtype functionality for novel mutations that would confer P. aeruginosa resistance to phage OMKO1 via alteration of the OprM attachment site. Nevertheless, it is plausible that even highly conserved genes in bacteria can transiently change in response to intermittent selection pressures (such as phages), while leaving no signs of rapid evolution within the gene sequence. For example, a gene may acquire a short duplicated region that interrupts its function, and in oprM such a mutation could simultaneously cause efflux pump deficiency while conferring phage resistance.
Because small duplications within a gene may revert rapidly, this dynamic process could allow efficient restoration of the original phenotype, fostering bacterial ability to change phenotypically in response to the prevailing selection pressure. This example is hypothetical, and future studies will be necessary to elucidate precise mechanisms by which P. aeruginosa may evade the trade-off observed in our study. Regarding the other genes in our analysis, we noted that strong positive selection was detected only for the P. aeruginosa gene mexY, for unknown reasons, indicating that this component of MexXY is changing relatively rapidly. Table 1 Evaluation of selection acting upon genes associated with MexXY- and MexAB-OprM efflux systems of P. aeruginosa. Our study showed that phage OMKO1 is a naturally occurring virus that forces a desired genetic trade-off between phage resistance and antibiotic sensitivity, which should benefit phage therapy efforts against MDR bacteria such as P. aeruginosa. Isolation of phage OMKO1 from nature suggested that other phages might have evolved to utilize OprM or other surface-exposed proteins of Mex systems as binding sites. These types of phages could be highly useful for developing therapeutics, because target bacteria are expected to inevitably evolve phage resistance resulting in antibiotic susceptibility. Previous studies similarly demonstrated the evolutionary interplay between phage selection and maintenance of antibiotic resistance in bacterial pathogens. For example, phage binding may rely on surface proteins encoded by plasmid genes, causing phages to select against plasmid maintenance in bacterial populations, thereby reducing the prevalence and spread of plasmid-borne antibiotic resistance genes 40. Other studies also suggest that combined use of phages and antibiotics is superior to either selection pressure alone, indicating that the dual approach is promising as an antimicrobial strategy 41, 42. Our study demonstrates that phage OMKO1 is also a promising evolutionary-based phage adjunctive, which can be used to directly exploit a genetic trade-off between efflux-mediated antibiotic resistance and phage resistance. Taken together, these examples illustrate the potentially valuable approach by which an evolutionary-based antibiotic adjunctive could greatly improve clinical outcomes and reduce the spread of antibiotic resistant infections. The clinical utility of phages such as OMKO1 is vital because selection using this phage restores the usefulness of antibiotics that are no longer considered therapeutically valuable. In the past, there was an attempt to restore waning amoxicillin efficacy by combining this drug with clavulanic acid (a β-lactamase inhibitor). Although clavulanic acid has minimal antibacterial activity, it interacts with the β-lactamase enzyme via mechanism-based inhibition, allowing amoxicillin to inhibit cell wall synthesis. While this therapeutic approach can often be effective, as demonstrated by more than 30 years of successful use of amoxicillin/clavulanic acid, the negligible antibacterial activity of clavulanic acid exerts selection pressure for hyper-production of β-lactamase as the means for bacteria to successfully evolve resistance to the adverse effects of clavulanic acid 43.
In contrast, a phage therapy approach exerts selection pressure in the desired direction, causing bacteria to become increasingly antibiotic-sensitive and allowing for renewed use of historically effective antibiotics that have been rendered useless by the evolution of antibiotic resistance. Furthermore, our approach suggests that antibiotics not typically used during treatment of P. aeruginosa infections due to intrinsic resistance 44 could be used with phage OMKO1. This method effectively 're-discovers' a class of antibiotics that has already been clinically tested/approved. Consequently, this approach has the potential to extend the effective lifetime of antibiotics in our 'drug arsenal' and broaden the spectrum of these drugs, greatly reducing the burden on drugs of last resort and preserving them for future use. Ideally, phage therapy that utilizes phages such as OMKO1 would not only improve clinical efficacy against MDR bacteria, but also could potentially slow or reverse the incidence of antibiotic resistant bacterial pathogens. Materials and Methods Pseudomonas aeruginosa strains P. aeruginosa strains PA01 and PA14 were kindly provided by B. Kazmierczak (Yale School of Medicine). Strains derived from PA01 that each contained a knockout of a gene in the Mex system were obtained from the Pseudomonas aeruginosa PA01 Transposon Mutant Library (Manoil Lab, University of Washington). P. aeruginosa PAPS was collected from fistular discharge of a patient with a history of chronic infection associated with an aortic arch replacement surgery. This strain was associated with a biofilm that formed on an indwelling Dacron aortic arch and had been present for >1 year in the patient. P. aeruginosa PASk was collected from an open wound on the skull of a 60-year-old male; the wound was not responsive to antibiotic therapy or hyperbaric oxygen. P. aeruginosa PADFU was collected from a diabetic foot ulcer. These strains were collected from consented donors and de-identified. Furthermore, experiments were performed in accordance with The Yale University Human Investigation Committee/Institutional Review Board (HIC/IRB) guidelines, and relevant experimental protocols were approved by Yale's HIC/IRB committee. P. aeruginosa strains 1845 and 1607 were collected from household sink drains (1845: bathroom sink drain; 1607: kitchen sink drain) in a previous study 6 and kindly provided by S. Remold, University of Louisville. Challenge assays using a knockout library of P. aeruginosa The transposon knockout mutants used for screening included 11 strains, which differed in the knockout of a gene for a surface-expressed protein: oprC, oprB, oprG, oprD, oprI, oprH, oprP, oprO, oprM, oprJ, oprN. Also, we tested phage ability to grow on 8 strains that differed in the knockout of a gene for an internal protein of the Mex system: mexH, mexA, mexB, mexR, mexC, mexD, mexE, mexF. These replicated (n = 3) assays calculated the average efficiency of plating (EOP) on a knockout host: plating ability (titer in plaque-forming units per mL) for a phage on the test knockout strain, relative to its plating ability on a phage-sensitive host (PA01). Isolation of phage OMKO1 The phage isolated from Dodge Pond was serially passaged on host strain PA01 for 20 consecutive passages.
Isolation of phage OMKO1 The phage isolated from Dodge Pond was serially passaged on host strain PA01 for 20 consecutive passages. To do so, PA01 was grown to exponential phase in 25 ml of Luria-Bertani (LB) broth and then infected with phage at a multiplicity of infection (MOI; ratio of phage particles to bacterial cells) of ~0.1, with shaking incubation (100 rpm) at 37 °C. After 12 hours, the culture was centrifuged and filtered (pore size: 0.22 μm) to remove bacteria and obtain a cell-free lysate. The next passage was initiated under identical conditions, using naïve (non-coevolving) PA01 bacteria grown fresh from frozen stock. This process was continued for 20 passages total, and phage OMKO1 was plaque-purified from the endpoint phage population. Isolation of phage-resistant mutants Phage OMKO1 was amplified on P. aeruginosa in liquid culture under conditions identical to the serial-passage assays. Following 12 hours of amplification, 100 μl of culture was plated on LB agar and incubated for 12 hours. Individual colony-forming units (CFUs) were then collected and verified to be phage resistant by classic 'spot tests' (i.e., 10^7 PFU of phage OMKO1 was pipetted onto a lawn of each bacterial isolate to test whether the phage was capable of visibly clearing the lawn [indicating bacterial sensitivity to phage] versus incapable of clearing the lawn [indicating bacterial resistance to phage]). Minimum inhibitory concentration assays Bacterial strains were grown overnight at 37 °C as described above. A 200 μL sample of the culture was then spread onto an LB agar plate and allowed to dry for 10 minutes, followed by application of an eTest strip (bioMérieux) for the test antibiotic. Plates were incubated at 37 °C for 12 hours and the MIC was estimated as the point at which bacterial growth intersected the eTest strip. Each strain was tested in triplicate for each antibiotic. Bacterial growth kinetics Bacterial growth was assayed using a TECAN Freedom EVO workstation (TECAN Schweiz AG, Männedorf, Switzerland), which included an automated spectrophotometer (TECAN INFINITE F200 plate reader) to monitor changes in bacterial density (optical density, OD600) and a Robotic Manipulator Arm (RoMA) to manipulate cultures grown in 96-well flat-bottomed optical plates (Falcon). Each test strain was grown in LB broth with replication ( n = 3), and some assays included bacteria mixed with phage OMKO1 at an MOI of ~10 to increase the probability that all susceptible bacteria in the well were initially infected. Assays were controlled via scripts prepared in TECAN's Freedom EVOware and iControl software. Plate incubation occurred at 37 °C with 5 Hz continuous shaking in incubation 'towers'. Every 2 min, each plate was sequentially transferred by the RoMA to the plate reader to measure OD. Within the plate reader, prior to the OD reading, the plate was shaken orbitally at 280 rpm with a 2-mm amplitude for 10 seconds. Absorbance was measured at 620 nm over the course of 15 flashes, and the resulting OD for each well was written by iControl to a time-stamped delimited text file, which was then imported into Excel (Microsoft) for further analysis. The plate was then transferred by the RoMA back to the incubation tower and the protocol was repeated for 12 hours total.
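The plate reader writes OD readings to a time-stamped delimited text file for downstream analysis. As a sketch of that downstream step (the table below is synthetic and the well layout is an assumption; the actual iControl export format is not specified here), one could summarize each well's kinetics by its maximum log-slope growth rate:

```python
# Sketch of summarizing OD kinetics of the kind iControl writes to a
# time-stamped delimited text file. The parsed table below is synthetic;
# the real column layout of the export is not specified here.
import math

# (time in minutes, OD readings for three hypothetical replicate wells)
readings = [
    (0,   [0.05, 0.05, 0.06]),
    (120, [0.12, 0.11, 0.13]),
    (240, [0.35, 0.33, 0.36]),
    (360, [0.71, 0.69, 0.74]),
]

def max_growth_rate(times, ods):
    """Largest per-minute slope of ln(OD) between consecutive reads."""
    rates = [(math.log(o1) - math.log(o0)) / (t1 - t0)
             for (t0, o0), (t1, o1) in zip(zip(times, ods), zip(times[1:], ods[1:]))]
    return max(rates)

times = [t for t, _ in readings]
for well_index, well in enumerate(("A1", "A2", "A3")):  # hypothetical wells
    ods = [row[well_index] for _, row in readings]
    print(f"{well}: max growth rate = {max_growth_rate(times, ods):.4f} per min")
```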
Bioinformatics analysis Syntenic copies of the genes oprM , mexA , mexB , mexX and mexY were extracted from 38 publicly available genomes ( Appendix 1 ) of P. aeruginosa , representing a cross-section of the extant genetic diversity of the species. These sequences were aligned using MUSCLE v3.8.31 45 and refined by eye. Maximum likelihood trees were estimated for each gene using RAxML v8.0.0 46 . The d N /d S (ω) ratio for each gene was calculated using the codeml program of PAML v4.8 47 under model M2a, with ω either fixed or allowed to vary. The significance of positive selection for each gene was evaluated by a likelihood ratio test comparing the likelihoods of the two models, implemented in base R v3.2.1.
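For readers unfamiliar with the likelihood ratio test used here, the comparison of the fixed-ω and variable-ω codeml fits reduces to a chi-squared test on twice the difference in log-likelihoods. A minimal sketch, with hypothetical log-likelihood values and degrees of freedom:

```python
# Sketch of the likelihood ratio test used to assess positive selection:
# twice the difference in log-likelihoods between the variable-omega and
# fixed-omega codeml fits, compared against a chi-squared distribution.
# The lnL values and degrees of freedom below are hypothetical.
from scipy.stats import chi2

lnL_fixed = -4821.7     # model with omega fixed (null)
lnL_variable = -4815.2  # model with omega free to vary (alternative)
df = 2                  # difference in number of free parameters (assumed)

lrt_statistic = 2.0 * (lnL_variable - lnL_fixed)
p_value = chi2.sf(lrt_statistic, df)
print(f"LRT = {lrt_statistic:.2f}, p = {p_value:.4f}")
# A small p-value would reject the null of no positive selection.
```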
Additional Information How to cite this article: Chan, B. K. et al. Phage selection restores antibiotic sensitivity in MDR Pseudomonas aeruginosa. Sci. Rep. 6, 26717; doi: 10.1038/srep26717 (2016).
Yale researchers were fishing for a new weapon against antibiotic resistance and found one floating in a Connecticut pond, they report May 26 in the journal Scientific Reports. The virus, a bacteriophage found in Dodge Pond in East Lyme, attacks a common multi-drug-resistant bacterial pathogen called Pseudomonas aeruginosa, which can lethally infect people with compromised immune systems. In a neat evolutionary trick, the virus attaches to the cell membrane at the site where bacteria pump out antibiotics, a system that originally evolved to resist antibiotics. The presence of the virus in turn drives evolutionary changes in the bacterial membrane that make this pumping mechanism less efficient, rendering the bacteria once more susceptible to existing antibiotics. "We have been looking for natural products that are useful in combating important pathogens," said Paul Turner, professor and chair of the Department of Ecology and Evolutionary Biology. "What's neat about this virus is it binds to something the organism needs to become pathogenic, and backs it into an evolutionary corner such that it becomes more sensitive to currently failing antibiotics." The virus should help preserve our limited antibiotic arsenal in combating deadly bacteria, he said. This "phage" therapy could be used in conjunction with antibiotics to treat dangerous P. aeruginosa infections that afflict patients with severe burns, surgical wounds, cystic fibrosis, and other conditions that compromise the immune system. Turner also noted that other phages hold promise for combating bacterial pathogens that cause economic losses in plant and animal agriculture, and those that contaminate pipes and equipment such as bioreactors in food manufacturing.
10.1038/srep26717
Medicine
Researchers restore sight in mice by turning skin cells into light-sensing eye cells
Pharmacologic fibroblast reprogramming into photoreceptors restores vision, Nature (2020). DOI: 10.1038/s41586-020-2201-4 , nature.com/articles/s41586-020-2201-4 Journal information: Nature
http://dx.doi.org/10.1038/s41586-020-2201-4
https://medicalxpress.com/news/2020-04-sight-mice-skin-cells-light-sensing.html
Abstract Photoreceptor loss is the final common endpoint in most retinopathies that lead to irreversible blindness, and there are no effective treatments to restore vision 1 , 2 . Chemical reprogramming of fibroblasts offers an opportunity to reverse vision loss; however, the generation of sensory neuronal subtypes such as photoreceptors remains a challenge. Here we report that the administration of a set of five small molecules can chemically induce the transformation of fibroblasts into rod photoreceptor-like cells. The transplantation of these chemically induced photoreceptor-like cells (CiPCs) into the subretinal space of rod degeneration mice (homozygous for rd1 , also known as Pde6b ) leads to partial restoration of the pupil reflex and visual function. We show that mitonuclear communication is a key determining factor for the reprogramming of fibroblasts into CiPCs. Specifically, treatment with these five compounds leads to the translocation of AXIN2 to the mitochondria, which results in the production of reactive oxygen species, the activation of NF-κB and the upregulation of Ascl1 . We anticipate that CiPCs could have therapeutic potential for restoring vision. Main Many retinopathies—such as age-related macular degeneration, diabetic retinopathy and retinitis pigmentosa—ultimately result in the loss of retinal neurons, which leads to irreversible vision loss 1 , 2 . Stem-cell therapy, using embryonic stem cells or induced pluripotent stem cells, is a promising strategy to replace lost retinal cells and improve vision 3 , 4 . However, protocols for the derivation of candidate replacement cells are cumbersome and time consuming, presenting a challenge for their use in clinical therapy 5 , 6 , 7 , 8 . Direct reprogramming—using ectopic transcription factors and chemicals—bypasses the requirement for pluripotent cells and has resulted in the generation of neurons, astrocytes and cardiomyocytes; however, the pharmacological conversion of photoreceptors using this method has not been realized 9 , 10 , 11 , 12 , 13 . ASCL1, a powerful proneural transcription factor, has been reported to reprogram glial cells into photoreceptors 14 , 15 , 16 . An improved mechanistic understanding of direct reprogramming may lead to the generation of new cell types. Here we identify a set of five small molecules that can induce fibroblasts to become functional CiPCs without the need for pluripotent cells or viral transcription factors. We demonstrate that CiPCs restore pupil reflex and vision when transplanted into the subretinal space of rd1 mice, a mouse model of retinal degeneration. Moreover, our mechanistic analysis reveals an AXIN2–NF-κB–ASCL1 pathway that promotes retinal lineage during reprogramming and identifies mitochondria as a signalling hub in the orchestration of cell fate conversion. A set of five compounds transforms fibroblasts to CiPCs To generate CiPCs, we used mouse embryonic fibroblasts (MEFs) derived from a transgenic Nrl –GFP mouse, in which the Nrl promoter drives expression of eGFP specifically in rod photoreceptors 17 , 18 . We began by testing an established combination of four small molecules—valproic acid (V), CHIR99021 (a GSK3 inhibitor) (C), RepSox (R) and forskolin (F), together denoted VCRF—that is known to convert fibroblasts into neurons, but only a few of the resultant cells were Nrl –GFP positive 12 . 
Various small molecules and culture conditions were attempted (Supplementary Table 5 ), and we found that the Wnt/β-catenin pathway inhibitor IWR1 (I)—in combination with VCRF and STR (Sonic hedgehog, taurine and retinoic acid)—was able to substantially improve the efficiency of conversion of MEFs into Nrl–GFP + cells (Fig. 1a, b ). STR was added on day 8 of reprogramming, to promote and support the formation of photoreceptors after photoreceptor-specifying transcription factors—such as RORβ, ASCL1 and PIAS3—were upregulated 14 , 19 , 20 , 21 , 22 (Fig. 1l , Supplementary Tables 1 , 2 ). Similarly, human adult dermal fibroblasts (HADF), and human fetal lung fibroblasts transduced with a Nrl–DsRed reporter, were also converted into CiPCs (Fig. 1f–h, j , Extended Data Fig. 1a, b , Supplementary Fig. 2 , Supplementary Table 6 ). We next tested the small molecules individually and in various combinations, and observed that they failed to generate as many Nrl–GFP + cells (Extended Data Fig. 1c, d ). We conclude that all five compounds in combination can efficiently convert fibroblasts into NRL-expressing CiPCs. Fig. 1: Conversion of fibroblasts and the molecular characterization of CiPCs. a, Protocol for the reprogramming of mouse fibroblasts into CiPCs. MC, medium change; PIM, photoreceptor induction medium; PDM, photoreceptor differentiation medium. b, Images of CiPCs expressing Nrl–GFP on day 11 and day 16. Scale bars, 14.4 μm. c, Images of CiPCs expressing CRX on day 11. Scale bars, 14.4 μm. d, FACS purification of reprogrammed Nrl–GFP + CiPCs (0.2%). e, PCR with reverse transcription reveals the expression of the indicated photoreceptor-specific genes in mouse. For gel source data, see Supplementary Fig. 2a, b . f, Protocol for the reprogramming of HADFs into CiPCs. g, Quantitative PCR (qPCR) analysis (fold change compared with HADF) of CiPCs after reprogramming from HADF, showing increased expression of photoreceptor-specific genes. Data are presented as mean ± s.e.m. of n = 3 independently treated biological replicates. h, Micrograph of NRL-stained CiPCs after conversion from HADF. Scale bars, 33 μm. i, Principal component analysis for all RNA-seq samples ( n = 3 samples for each). j, Images of CiPCs after conversion from HADF at day 10. Rcvrn, recoverin; Rho, rhodopsin. Scale bars, 14.4 μm. k, Heat map of RNA-seq data for the indicated photoreceptor genes. RI, reprogramming intermediate. l, Heat map of RNA-seq data for the expression of the indicated genes that encode retinal transcription factors during CiPC conversion. Experiments in b, h, i were repeated three times with similar results and the experiments in c, d were repeated twice with similar results. CPM, counts per million. Next, Nrl–GFP + CiPCs were purified by fluorescence-activated cell sorting (FACS) and subjected to transcriptomic analysis, which revealed the expression of early retinal neuronal markers (CHX10 and OTX2) and photoreceptor markers (CRX and NRL) (Fig. 1d, e , Supplementary Table 3 ). Immunostaining analysis of CiPCs on day 11 enabled the visualization of CRX and CHX10 (Fig. 1c , Extended Data Fig. 1c , Supplementary Table 4 ). Molecular analysis of reprogrammed human CiPCs revealed the expression of photoreceptor-specific markers such as CRX, rhodopsin and recoverin (Fig. 1g–i , Extended Data Fig. 1c, d , Supplementary Fig. 2 ).
CiPCs express photoreceptor genes Transcriptome profiling demonstrated that the transcriptome of Nrl –GFP + CiPCs resembles that of native rods, which were used as a positive control 23 . Heat map analysis revealed the expression of rod-specific genes in CiPCs (Fig. 1i–l , Extended Data Fig. 9a, b ), although progenitor-specific genes of other retinal cells—such as cone cells, ganglion cells and Müller glial cells—were expressed at very low levels (Extended Data Fig. 2b ). Moreover, fibroblast-specific genes were found to be downregulated in CiPCs and in reprogramming intermediates (Extended Data Fig. 9c ). Expression of retinal transcription factors—such as Otx2 , Nrl and Crx —was also evident in CiPCs (Fig. 1k ). Notably, photoreceptor-specific transcription factors—such as Rorb , Ascl1 , Pias3 , Thrb and Rxrg —were upregulated in reprogramming intermediates 21 , 24 , 25 (Fig. 1l ). These results indicate that CiPCs have a similar gene expression profile to that of their native counterparts. Because MEFs are composed of a heterogeneous cell population, we performed lineage tracing with Fsp1 -Cre;Ai9 MEFs (after FACS purification) and confirmed that CiPCs can originate from fibroblasts (Extended Data Fig. 1e–h ). Staining with 5-bromodeoxyuridine (BrdU) during reprogramming to CiPCs indicated that approximately 95% of Nrl –GFP + cells were negative for BrdU, suggesting that no intermediate proliferative stage exists (Extended Data Fig. 2a ). CiPC transplantation in rd1 mice To test whether CiPCs can activate existing retinal circuitry and restore visual function, FACS-purified Nrl –GFP + CiPCs were transplanted into the subretinal space of 14 rd1 eyes, a mouse model of retinal degeneration (Extended Data Fig. 2c ). Recently, pupillary light reflex has been reported as a robust method to measure photoreceptor function after cell transplantation 26 . Pupillary constriction under low light levels is critically dependent upon the function of rod photoreceptors. Six out of 14 (43%) rd1 eyes demonstrated improved pupil response under low-light conditions three and four weeks after transplantation (Fig. 2a, b ). None of the mice demonstrated pupil constriction at two weeks, which served as a baseline (internal negative control) for longitudinal comparison and reduced the likelihood that a pre-existing photoreceptor or alternative pupillary pathway—such as the melanopsin pathway—is responsible for the observed pupillary restoration (Fig. 2b ). To assess the restoration of visual function, the six pupil-responsive mice underwent light-aversion testing. The light-aversion test provides an option for mice to spend time in either a dark space or a lit space. Mice have an innate tendency to avoid lit spaces; as such, those with visual function favour dark spaces 27 . To test rod vision in these pupil-reflex-positive mice, we allowed the mice to adapt to dark conditions and then performed the test using illumination conditions of 50 lx. rd1 mice injected with CiPCs were found to spend significantly more time in the dark space compared with those injected with PBS control (Fig. 2c ). None of the CiPC-injected mice without improved pupil responses demonstrated a dark preference. We also performed a modified optomotor test with neutral density filters; this modification was necessary in order to prevent rod saturation and to test rod-mediated vision. An additional benefit of testing at low light intensities is that the possibility of activating any residual cones is substantially reduced. 
One of the six CiPC-injected mice that demonstrated improved pupil responses and dark preference also showed improvements in visual acuity and contrast sensitivity (Extended Data Fig. 3e, f ). These results provide a proof of concept that CiPCs can restore visual function in rd1 mice. Fig. 2: Functional analysis of CiPCs in a mouse model of retinal degeneration. a, Representative images from the pupil analysis of CiPC-injected eyes 3 months after transplantation. b, Illuminance response curves for pupillary constriction ( n = 6). The stimulus was a 20-s light exposure and the data are fitted with a sigmoidal function. Arrows show the recovery of pupil function. The results presented are for pupil-responsive mice only. Statistical significance was assessed using a two-tailed Student's t -test. WT, wild type. c, The time spent by mice in the dark area of the light-aversion test. Wild-type C57BL/6 mice were used for comparison. Statistical significance was assessed using a one-way ANOVA. Data are presented as mean ± s.e.m. d, Top left, integration and survival of GFP + CiPCs in the retina of an rd1 mouse 3 months after transplantation. Scale bar, 20 μm. Top right, a magnified view of the top left image. Scale bar, 10 μm. Bottom, additional images of the integration of GFP + CiPCs. Bottom left, 40× magnification; bottom right, z -section. Scale bars: left, 10 μm; right, 20 μm. GCL, ganglion cell layer; INL, inner nuclear layer. Experiments in a and d were repeated twice with similar results and n denotes the number of biologically independent mice. We next sought to rule out the possibility that CiPCs primarily function to reduce the degeneration of host photoreceptors. We transplanted Nrl–GFP + CiPCs into the subretinal space of rd1 mice on day 31—a later time point at which there is near-complete photoreceptor degeneration and electroretinogram (ERG) signals are extinguished 28 . Long-term retinal function was analysed by pupillary light reflex and ERG (Extended Data Fig. 2c ). Approximately three months after transplantation (at postnatal day (P)128), we recorded a 30%–40% increase in pupillary constriction in response to a low light stimulus (50 lx) in three of the six eyes into which CiPCs had been transplanted (Extended Data Figs. 2c and 3d ). Full-field ERG analysis demonstrated an improvement of the scotopic a-wave in three out of six eyes into which CiPCs had been transplanted at P45; this improvement was lost at and after P59 (Extended Data Fig. 3a–c ). One eye showed improvement in the scotopic b-wave at P45 (Extended Data Fig. 2d ). Fig. 3: The mROS–NF-κB–ASCL1 signalling axis determines the reprogramming of fibroblasts to CiPCs. a, Ascl1 transcript expression during reprogramming of MEFs to CiPCs, analysed by qPCR using the 2^−ΔΔCT method. RI, reprogramming intermediates; R, reprogrammed cells. b, Ascl1 qPCR analysis in Ascl1 -depleted MEFs, using short hairpin RNA (shRNA). c, Number of GFP + cells after reprogramming to CiPCs from Ascl1 -knockdown (KD) and wild-type Nrl–GFP MEFs. d, NF-κB-luciferase activity during reprogramming of MEFs to CiPCs. LPS, lipopolysaccharide. e, rVista sequence alignment of the human ASCL1 and mouse Ascl1 genes shows highly conserved NF-κB-binding sites (red vertical bar) downstream of the 3′ UTR region. f, ChIP assay shows binding of NF-κB at downstream loci of Ascl1 . g, Accumulation of mROS in CiPCs converted from Nrl–GFP MEFs on day 11.
Scale bar, 33 μm. h, Fluorometric analysis of mROS generation during reprogramming of MEFs to CiPCs. AntiA, antimycin A; RFU, relative fluorescence units. i, Quantification of Nrl–GFP + cells on day 11 after depletion (by MitoTEMPO treatment) or generation (by Tfam knockdown) of mROS. SM, small molecules. In a – d , f , h , i , data are presented as mean ± s.e.m. of n = 3 independently treated biological replicates. Statistical significance was assessed by two-tailed Student's t -test. The long-term survival of transplanted CiPCs and their synaptic connections to inner retinal neurons were assessed by immunofluorescence. Three months after CiPC transplantation, we identified GFP + , GFP + recoverin + or GFP + rhodopsin + CiPCs in the outer aspect of the inner nuclear layer in eyes that demonstrated restoration of the pupillary light reflex (Fig. 2d , Extended Data Figs. 4a, b , 8d ). We quantified CiPC survival and found a strong correlation between pupil constriction and cell survival (Extended Data Fig. 8e ). Specifically, eyes with improved pupil response and light/dark response had an average of 58 CiPCs per section, compared with 8 CiPCs per section in non-responders (Extended Data Fig. 8f ). Moreover, CiPCs were found to have synaptic terminals that express the rod ribbon-synapse protein ribeye and the synaptic vesicle protein synaptophysin. These synaptic terminals are in close proximity to rod bipolar cells (which express the bipolar marker PKC-α), a proximity that is essential for transmitting a light signal into the inner retina (Extended Data Fig. 4c–e ). Taken together, these results suggest that some of the transplanted CiPCs survive, function and connect with the inner retinal neurons of rd1 mice. NF-κB induces ASCL1 during conversion to CiPCs To explore the mechanism of reprogramming to CiPCs, we identified candidate transcription factors from RNA sequencing (RNA-seq) analysis. We detected the expression of Ascl1 on day 5, which continued to increase until day 8 (Figs. 1l , 3a ). ASCL1, a proneural transcription factor, is capable of converting fibroblasts into neurons 11 , 29 . Furthermore, ASCL1 is reported to be transiently expressed in photoreceptor precursors and can reprogram Müller glia into rod photoreceptor-like cells 14 , 15 , 30 . Taken together, these observations led us to propose that the Ascl1 transcriptional network has a central role in reprogramming fibroblasts to CiPCs. To test this hypothesis, we reprogrammed Ascl1 -depleted (Fig. 3b ) Nrl–GFP MEFs with the set of five compounds (VCRFI). We noted a 70%–80% reduction in the generation of CiPCs, which suggests that ASCL1 has an important role in the reprogramming of fibroblasts to CiPCs (Fig. 3c ). To determine the mechanism of ASCL1 induction, we investigated potential upstream regulators of ASCL1. NF-κB is a rapidly acting primary transcription factor that is known to have a role in neural stem-cell differentiation and embryonic neurogenesis 31 . We therefore postulated that NF-κB is an upstream regulator of ASCL1 induction. A luciferase assay demonstrated that NF-κB activation began on day 5 and reached a maximum at day 11 (Fig. 3d ). These results suggested the involvement of NF-κB, and so we next explored whether NF-κB induces the expression of ASCL1. Bioinformatics analysis (using rVista) identified a putative binding site for NF-κB near the Ascl1 locus (Fig. 3e ), and a chromatin immunoprecipitation (ChIP) assay confirmed the binding of NF-κB at this locus (Fig. 3f ).
Furthermore, transient transfection analysis with a luciferase reporter gene confirmed that NF-κB positively regulates Ascl1 during reprogramming to CiPCs. NF-κB depletion in MEFs that were subsequently reprogrammed to CiPCs resulted in reduced expression of Ascl1 and fewer CiPCs overall, which further confirms the pathway (Extended Data Fig. 5a–c ). Overexpression of Ascl1 alone in MEFs was not sufficient for reprogramming to CiPCs (Extended Data Fig. 5f–h ). These results indicate that treatment with our set of five compounds leads to the activation of NF-κB, which in turn binds to the regulatory regions of Ascl1 and controls its expression. Notably, analysis of CiPCs using an assay for transposase-accessible chromatin with sequencing (ATAC–seq) revealed the presence of open chromatin at the upstream regions of the Ascl1 gene in reprogramming intermediates. HOMER analysis revealed the enrichment of ASCL1- and NF-κB-binding motifs in both intermediates and reprogrammed cells, which may be important for the binding of regulatory transcription factors and for their expression (Extended Data Fig. 9f , Supplementary Table 7 ). Reactive oxygen species activate NF-κB To investigate the mechanism by which NF-κB activation is induced by the combination of five small molecules, we considered known inducers of NF-κB—such as TNFα, lipopolysaccharide, ionizing radiation and mitochondrial reactive oxygen species (mROS)—as possible candidates 32 , 33 . mROS have been reported to induce nuclear gene expression through the activation of NF-κB 32 . We therefore postulated that mROS generated by treatment with the five compounds may activate NF-κB. We observed a marked increase in mROS accumulation, beginning on day 8, in reprogramming intermediates (Fig. 3g, h ). To determine the importance of mROS, we used the antioxidant MitoTEMPO, an mROS scavenger, and observed a reduction in reprogramming to CiPCs (Fig. 3i ). Depletion of mitochondrial transcription factor A (TFAM) has been reported to lead to the generation of mROS 34 . Our data show that withdrawal of the compound IWR1 reduces reprogramming efficiency and mROS generation (Fig. 3i , Extended Data Fig. 6f ). We considered the possibility that TFAM-depleted MEFs may have increased reprogramming potential in the absence of IWR1. Consistent with this hypothesis, we found increased reprogramming to CiPCs and increased mROS generation in TFAM-depleted MEFs in the absence of IWR1 (Fig. 3i , Extended Data Fig. 6e, f ). Furthermore, exogenous ROS or TNFα (an NF-κB inducer) did not have a significant effect on CiPC conversion efficiency, and excessive ROS had a negative effect (Extended Data Fig. 5d, e ). These data indicate that mROS have an important role in the reprogramming of fibroblasts to CiPCs. To determine whether the activation of NF-κB is dependent on mROS, we added MitoTEMPO on day 3 of reprogramming and observed significantly decreased activity in the luciferase assay (Extended Data Fig. 6a ). Additionally, we found that the extent of binding of NF-κB near the Ascl1 locus was reduced upon treatment with MitoTEMPO (Extended Data Fig. 6g ). Taken together, these results demonstrate that mitochondrial ROS activate NF-κB, which controls the expression of Ascl1 by binding its regulatory region.
The five compounds promote Axin2 mitolocalization To identify the mechanism by which mROS are generated during reprogramming to CiPCs, we considered two reports that demonstrate the stabilization of AXIN2 by treatment with CHIR and IWR1 35 and the translocation of AXIN2 to mitochondria upon treatment with XAV939 (an IWR1 analogue) 36 . We proposed that treatment with the group of five compounds—which includes IWR1 and CHIR—induces the stabilization of AXIN2 and its subsequent translocation to mitochondria; mitochondria-targeted AXIN2 then generates mROS, which in turn activate NF-κB. To test this hypothesis, the expression of AXIN2 was examined during reprogramming to CiPCs. We discovered that AXIN2 in reprogramming intermediates and in day 11 non-purified CiPCs is more stable than that in starting MEFs (Fig. 4a ). Subsequently, we found that stabilized AXIN2 translocates to the mitochondria of CiPCs, as evidenced by its co-localization with mitochondria (Fig. 4b, c , Extended Data Fig. 7a, b ). Axin2 -depleted MEFs demonstrated reduced reprogramming to CiPCs and reduced mROS generation (Fig. 4d–f , Extended Data Fig. 6d ). For further confirmation, we measured NF-κB activation and Ascl1 expression in Axin2 -depleted reprogramming intermediates, and detected reduced Ascl1 expression and decreased NF-κB activity (Fig. 4g , Extended Data Fig. 6b ). Taken together, these results indicate that mitochondria-targeted AXIN2 induces increased mROS generation in CiPCs and activates the downstream NF-κB–ASCL1 pathway to promote a lineage switch to a photoreceptor fate. Moreover, mitochondrial analysis of CiPCs revealed low basal mitochondrial respiration, low ATP turnover, low reserve capacity and high glycolysis rates; this is indicative of an immature mitochondrial state, which may be linked to the generation of mROS (Extended Data Fig. 7c–f ). Fig. 4: Mitochondria-translocated AXIN2 causes mROS generation and the reprogramming of fibroblasts to CiPCs. a, Western blot demonstrating AXIN2 expression during reprogramming of MEFs to CiPCs. For gel source data, see Supplementary Fig. 1c, d . b, No co-localization of AXIN2 (green) with mitochondria (MitoTracker) was observed in MEFs. Scale bar, 13.4 μm. c, AXIN2 (red) localization in mitochondria (green) of GFP + CiPCs (purple pseudocolour). Scale bar, 2 μm. d, Western blot demonstrating reduced levels of AXIN2 after knockdown of its encoding gene, Axin2 , by shRNA. For gel source data, see Supplementary Fig. 1g, h . In a – c , experiments were repeated twice; in d , the experiment was performed once. e, Quantification of Nrl–GFP + cells after conversion of Axin2 -depleted MEFs to CiPCs on day 11. f, Reduced mROS generation in Axin2 -depleted cells during reprogramming to CiPCs, as assessed by fluorimetry. AFU, arbitrary fluorescence units. g, qPCR analysis shows that knockdown of Axin2 is associated with reduced expression of Ascl1 in reprogramming intermediates on day 8 and in CiPCs on day 11. Scr., scramble. In e – g , data are presented as mean ± s.e.m. of n = 3 independently treated biological replicates. Statistical significance was assessed by two-tailed Student's t -test. Discussion Here we report that a combination of five small molecules can convert fibroblasts into functional CiPCs that are capable of partially restoring pupil reflex and visual function in a mouse model of retinal degeneration (Extended Data Fig. 10a ).
Gene expression profiling reveals that CiPCs are similar to their in vivo rod counterparts, and CiPC conversion recapitulates the ontogeny of photoreceptor genesis—as indicated by the upregulation of Thrb , Rorb and Pias3 , among others 21 , 24 . The occurrence of open chromatin regions near photoreceptor loci and the enrichment of photoreceptor-specifying transcription-factor-binding motifs during reprogramming to CiPCs further validate these results 37 , 38 (Extended Data Fig. 9f , Supplementary Table 7 ). Subretinal transplantation of rod-like CiPCs into rd1 mice led to long-term improvement in the pupillary light reflex and the restoration of normal visual behaviour in the light-aversion test. In our study, 6 out of 14 eyes (43%) demonstrated an improved pupillary response. The strong correlation between pupil constriction and cell survival may explain why some eyes showed an improvement and others did not (Extended Data Fig. 8e ). Material transfer and cell fusion are mechanisms that can explain improvement in visual function; however, these mechanisms have not been reported in the late stages of retinal degeneration, and they require viable host photoreceptors in order to produce improvement 26 , 39 . These mechanisms are of most concern in photoreceptor loss-of-function models—such as Gnat1 —in which the photoreceptors show limited or no degeneration and retinal morphology and structure are preserved. To avoid the possibility of donor–host cell fusion and material transfer, we chose to transplant CiPCs into rd1 mice at a time point (postnatal day 31 (P31)) at which no rod photoreceptors exist 28 , 40 . Although rod loss is almost complete at P31, we examined the possibility that existing cones or bipolar cells could participate in CiPC material transfer or cell fusion; however, we found that these are unlikely to have a major role (Extended Data Figs. 4d , 8c ). Our mechanistic studies reveal that mROS-mediated NF-κB activation directly regulates the expression of Ascl1 and the reprogramming of fibroblasts into CiPCs. Mitochondrial signalling is reported to have a role in cellular homeostasis and neuronal function 41 , 42 , 43 . To our knowledge, this report is the first to demonstrate that mitochondria-to-nucleus signalling acts as a mediator of direct chemical reprogramming (Extended Data Fig. 10b ). Moreover, mitochondria-translocated AXIN2 probably generates mROS, although the mechanism is currently unknown. Induction of the reprogramming of fibroblasts to CiPCs by our group of five compounds reveals a new function of the mitochondria that may provide valuable knowledge for the generation of other cell types. Although CiPCs have therapeutic potential, a lack of proliferation—as is the case for native photoreceptors—and low conversion efficiency are the main impediments to translational application. We anticipate that optimization of our current protocol may be beneficial for obtaining large numbers of CiPCs. For example, temporal modulation of IWR1 in the protocol resulted in a significant increase in the conversion of HADF to CiPCs (Fig. 1f , Extended Data Fig. 8d, right ). Overall, CiPCs are a promising cell-replacement candidate and may lead to a scalable therapy for vision restoration. Methods Generation of chemically induced photoreceptor-like cells All small molecules were diluted in DMSO or DMEM according to the manufacturer's instructions.
Approximately 50,000–80,000 MEFs (passage 2) were seeded into each well of a 12-well plate (coated overnight with 0.1% gelatin) and cultured overnight. On day 1, the medium was replaced with photoreceptor induction medium (PIM; Supplementary Tables 1 , 2 ) containing V (0.5 mM), C (4.8 μM), R (2 μM) and F (10 μM). On day 3, fresh PIM containing V, C, R and F plus I (10 μM) was added to each well. The medium was then replaced with PIM containing VCRFI on days 4–7, depending on cell appearance. S (3 nM), T (100 μM) and retinoic acid (1 μM) were added on day 8 with fresh medium containing all the small molecules mentioned above. On days 10–11, GFP + cells were collected and analysed for gene expression. For CiPC maturation, PIM was replaced with photoreceptor differentiation medium with the small molecules VCRF, and cells were cultured up to day 15–16 (Supplementary Tables 1 , 2 ). CiPC survival decreases considerably after day 11, with only a few mature cells obtained at day 15–16. For human adult dermal fibroblast (HADF) reprogramming, cells were seeded (at 95–100% confluence) in IMR90 medium. On day 1, PIM containing VCRFI (at the same concentrations as stated above) was added to the wells. PIM containing VCRFISTR was added on day 5. NRL staining was performed between days 7 and 10. We noted more autofluorescent cells after fixation; these cells were not counted as reprogrammed. The medium and all small molecules were replenished daily throughout the conversion period, depending upon appearance. Conversion of human fetal lung fibroblasts (HFL1) to CiPCs was similar to the mouse protocol, except that 5 μM IWR1 was added on day 2. The small molecules S, T and retinoic acid, as well as brain-derived neurotrophic factor, glial cell line-derived neurotrophic factor and neurotrophin-3, were added on day 5. On days 6–8, DsRed + cells were collected for analysis.
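Because the dosing schedule above is easy to misread, here is the same mouse-protocol timeline encoded as plain data, purely as an illustrative aid (concentrations are transcribed from the text; the representation itself is not part of the protocol):

```python
# Sketch of the mouse MEF-to-CiPC dosing schedule described above,
# encoded as plain data. Concentrations are transcribed from the text;
# the structure itself is only an illustrative representation.
SMALL_MOLECULES = {
    "V": ("valproic acid", "0.5 mM"),
    "C": ("CHIR99021", "4.8 uM"),
    "R": ("RepSox", "2 uM"),
    "F": ("forskolin", "10 uM"),
    "I": ("IWR1", "10 uM"),
    "S": ("sonic hedgehog", "3 nM"),
    "T": ("taurine", "100 uM"),
    "RA": ("retinoic acid", "1 uM"),
}

SCHEDULE = [
    (1, "PIM + VCRF"),            # day 1: induction medium with VCRF
    (3, "fresh PIM + VCRF + I"),  # day 3: IWR1 added
    (8, "add S, T, RA"),          # day 8: STR on top of VCRFI
    (10, "collect GFP+ cells"),   # days 10-11: analysis / FACS
]

for day, step in SCHEDULE:
    print(f"day {day}: {step}")
```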
Mouse models and MEF isolation and lineage tracing All animal studies and animal care were performed in accordance with relevant guidelines and regulations and approved by the Institutional Animal Care and Use Committee at the University of North Texas Health Science Center. Nrl–GFP reporter mice were a gift from A. Swaroop (National Eye Institute, NEI) and were used to generate MEFs. Nrl–GFP cells were not identified in either starting fibroblast culture 44 . rtTa mice (Jackson Laboratory, 006965) were crossed with Nrl–GFP mice for unrelated experiments. Embryos from Fsp1 -Cre mice (Jackson Laboratory, stock# 012641) crossed with R26-LSL-tdTomato mice (Jackson Laboratory, stock# 007909) were used for lineage tracing. Embryos were examined for tdTomato expression before isolation of MEFs. tdTomato + cells obtained by FACS were used for reprogramming to CiPCs according to the protocol described above, and CiPCs were stained with anti-NRL antibody according to the method detailed below. CBA/J ( rd1 ) mice (stock# 000656) and C57BL/6 mice (stock# 000664) were purchased from the Jackson Laboratory. Subretinal injection of CiPCs into rd1 mice After conversion, GFP + CiPCs were sorted by FACS from multiple 10-cm dishes and resuspended in IMR90 medium. For subretinal injection, mice (P31 and P24) were first anaesthetized by intramuscular injection of 85 mg kg −1 ketamine and 14 mg kg −1 xylazine. A 30-gauge blunt-end needle attached to a 10-μl Nanofil syringe (World Precision Instruments) was inserted into the puncture site (0.5–1 mm under the limbus line), with visualization aided by the use of a surgical microscope (Leica). CiPCs (80,000; 1.5–2 μl per eye) or saline (1.5–2 μl per eye) were delivered into the subretinal space in the superior temporal quadrant. After injection, the needle was left in place for a few seconds to reduce reflux and enable maximal cell release before being slowly withdrawn. The eyelid was returned to its original position and a drop of Triple Antibiotic ointment (Equate, Walmart) was applied. Mice were warmed on a 37 °C bed until fully awake. Pupillometry Following a previously published protocol 26 , dark-adapted mice were imaged under infrared illumination. CiPC-transplanted eyes that showed ERG improvement were exposed to low-irradiance white light, delivered through a light guide from a 100-W arc lamp, at a range of intensities. For the illuminance curve experiments, dark-adapted mice were subjected to a series of light exposures (with increasing illuminance) for 10 s each in weeks 2–4 after transplantation. A complete intensity series was performed in one eye before retesting the other eye at similar intensities. A gap of at least 2 min was maintained between measurements. Pupil constriction was also imaged in the contralateral, non-transplanted eyes. An infrared light-emitting diode was used throughout the experiment for background illumination. An infrared camera (Sony, DCR-HC96) was used to acquire images. The pupil area for each eye was measured before and after light exposure with ImageJ software (National Institutes of Health). For each mouse, the change in pupil constriction was represented by the difference between the pupil area measured in the dark and in the light. Quantitative PCR and PCR with reverse transcription Total RNA was extracted using a kit (Zymo Research, Cat# R1050) according to the manufacturer's instructions. RNA (1 μg) was converted to cDNA using a High Capacity cDNA Reverse Transcription kit (Applied Biosystems, 4368814). Isolated RNA was treated with DNase I before cDNA synthesis. An Applied Biosystems thermal cycler and StepOnePlus real-time PCR system were used for amplification. qPCR was performed using Fast SYBR Green Master Mix (Applied Biosystems, 4385612). Results were normalized to glyceraldehyde 3-phosphate dehydrogenase or hypoxanthine-guanine phosphoribosyltransferase, and fold change was calculated using the 2^−ΔΔCT method. A list of primers is provided in Supplementary Table 3 . Transcriptome analysis Quantification: RNA-seq data analysis was performed at the gene level as previously described 45 , with Ensembl data release 87. The gene-level count matrix was then TMM-normalized (TMM, trimmed mean of M values) using the edgeR (v3.18.1) package in the R (v3.4.0) programming environment, as previously described 46 . Gene expression clustering and heat maps Gene expression clustering was performed on selected genes using the affinity propagation algorithm, with negative distance as the similarity measure for ordering the genes in each set before drawing the heat maps. The heat map function was developed in-house and is available upon request.
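As a worked sketch of the 2^−ΔΔCT fold-change calculation referenced in the qPCR section above (all CT values below are hypothetical):

```python
# Worked sketch of the 2^-ddCT method referenced above for qPCR fold
# change: normalize the target gene to a housekeeping gene (e.g. Gapdh),
# then compare treated cells with the control. CT values are hypothetical.

def fold_change(ct_target_sample, ct_ref_sample, ct_target_control, ct_ref_control):
    d_ct_sample = ct_target_sample - ct_ref_sample
    d_ct_control = ct_target_control - ct_ref_control
    dd_ct = d_ct_sample - d_ct_control
    return 2.0 ** (-dd_ct)

# Hypothetical CT values: Ascl1 in reprogrammed cells vs starting MEFs.
fc = fold_change(ct_target_sample=24.0, ct_ref_sample=18.0,
                 ct_target_control=30.0, ct_ref_control=18.5)
print(f"fold change = {fc:.1f}x")  # 2**5.5, roughly 45-fold upregulation
```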
ATAC–seq Cells were washed twice with fresh medium, and a 1:100 volume of 100× DNase buffer (250 mM MgCl 2 and 50 mM CaCl 2 in H 2 O) and a 1:100 volume of 100× DNase I solution (Worthington, LS002006, resuspended in HBSS at 20,000 units per ml) were added to the medium. Cells were incubated in DNase at 37 °C for 30 min in a tissue culture incubator. After washing with PBS, cells were trypsinized, centrifuged (800 g for 5 min) and resuspended in 500 μl of growth medium containing 5% DMSO in a slow-cooling chamber. Approximately 100,000 cells were then shipped to Active Motif to perform ATAC–seq and bioinformatics analysis. Immunohistochemistry, subcellular fractionation and immunoblotting Paraformaldehyde (4%) was used to fix enucleated eyes. Cryo-embedded eyes were then sectioned at a thickness of 14 μm. Fixed eye sections were analysed with the primary and secondary antibodies listed in Supplementary Table 4 . DAPI (0.1%) in mounting medium was used to stain nuclei. Images were taken using a Zeiss LSM510 confocal or Leica DMi8 fluorescence microscope. Subcellular fractionation was performed following a kit protocol from Thermo Fisher Scientific (Cat# 89874). For immunoblotting, total protein was extracted with commercially available lysis buffer (Thermo Scientific, Cat# 89900) and the concentration of protein was measured using a BCA protein assay kit (Thermo Scientific, Cat# 23227). An equivalent amount of protein was loaded into each well, immunoblotted and antibody-stained using a standard procedure (Supplementary Table 4 ). SuperSignal West Femto maximum sensitivity substrate (Thermo Scientific, Cat# 34094) was used to develop the subcellular fractionation western blot. Electroretinogram After a minimum dark-adaptation period of 12 h, mice were anaesthetized by intraperitoneal injection of 85 mg kg −1 ketamine and 14 mg kg −1 xylazine. Preparation was performed under dim red light (<50 lx). ERG analyses were performed using an Espion system (Diagnosys). For the assessment of the scotopic response, a stimulus intensity of 40 cd·s m −2 was presented to the dark-adapted dilated eyes. The amplitude of the scotopic a-wave was measured from the pre-stimulus baseline to the a-wave trough, and the amplitude of the b-wave from the trough of the a-wave to the crest of the b-wave. A total of 15 repeated flashes and measurements were averaged to produce the final waveform. The amplitude of the photopic b-wave was likewise measured from the trough of the a-wave to the crest of the b-wave. At the beginning of each day, the response of wild-type C57BL/6 mice (aged over P21, n ≥ 2) was recorded and quantified to ensure proper device calibration. Light-aversion test The visual discrimination (light/dark) test was conducted in an apparatus that consists of black opaque (100%) acrylic test chambers (30.48 × 15.24 × 30.48 cm (length, width, height)). This chamber was divided into equal-sized compartments (15.24 × 15.24 × 30.48 cm) by the addition of an insert, creating a dividing wall in the centre. To create light and dark zones, one compartment was illuminated with dim ambient light (50 ± 1.5 lx) and the other compartment was kept dark (~0.1 lx). The light and dark compartments were connected by an opening (5 × 5 cm). The position of the mouse within the apparatus was recorded using a photocell-based system (Model 71-CPPX, Omnitech). The acrylic chambers were housed separately in sound-attenuating chambers (Model 71-ECC, Omnitech). Ambient noise within the chambers was 64 dB and testing took place under dim illumination. Mice were maintained in the testing room overnight (about 12 h) in dark conditions in their home cage with free access to food and water. Each mouse was allowed to habituate to the testing apparatus (both sides) for 10 min while in the dark.
After habituation, one side of the apparatus was illuminated with ambient light at around 50 lx and the mouse was allowed to roam freely between the compartments for 5 min. The time spent in the dark and light compartments was recorded by the photocell-based system. Optomotor task The testing apparatus was a chamber (39 × 39 × 32.5 cm) with mirrored floors and ceilings. Attached to each of the four walls was a 20-inch computer monitor facing inwards. In the centre of the chamber was a platform (7-cm diameter) that was elevated approximately 15 cm from the floor. When a mouse was placed on the platform, a video camera positioned in the ceiling of the apparatus enabled the behaviour of the mouse to be clearly visible during testing. A computer program was used to project visual stimuli (vertical gratings) onto the monitors (OptoMotry, CerebralMechanics). The gratings were rotated at 12° s −1 , producing the appearance of a virtual rotating cylinder. The moving gratings elicited a tracking behaviour, and the visual acuity threshold and contrast sensitivity were determined for each eye with gratings rotating in a clockwise direction (when testing the left eye) or in an anticlockwise direction (when testing the right eye). Light levels were lowered to 50 lx and 65 lx by using neutral density filters placed on the screens. Before any testing took place, the mice were dark-adapted in their home cages with free access to food and water. Visual acuity A mouse was placed on top of the platform and allowed to acclimatize for a short period of time. Testing began when the mouse was no longer actively moving around. The visual acuity threshold was determined with contrast set at 100% and a grating of low spatial frequency (0.042 cycles per degree), as previously described. When tracking behaviour was observed, the same stimuli were rotated in the anticlockwise direction (thus effectively testing the right eye). A staircase method of determining the acuity threshold was implemented. A series of gratings of increasingly higher spatial frequencies was presented (rotating in one direction and then in the other) as long as the mouse indicated that it could detect the grating movements. When the mouse ceased to respond to a particular spatial frequency, a lower-frequency grating was presented; when the mouse responded to a frequency, the frequency was increased. The acuity threshold was set as the highest spatial frequency to which the animal responded. Contrast-sensitivity function A contrast threshold was measured for six spatial frequencies (0.031, 0.064, 0.092, 0.103, 0.192 and 0.272 cycles per degree). The initial contrast was set at 100% for each of the above spatial frequencies. Contrast was lowered until the mouse ceased to respond to the particular grating. The lowest contrast that elicited a response was assessed for each of the six spatial frequencies, and a contrast-sensitivity function was calculated with the formula 100/ C , where C is the lowest contrast that elicits a response at a particular frequency. This transform means that when a mouse can see at a very low contrast, the sensitivity value is large, indicating better visual performance at that spatial frequency.
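A short worked sketch of the 100/C contrast-sensitivity transform described above (the threshold contrasts are hypothetical):

```python
# Sketch of the contrast-sensitivity transform described above: sensitivity
# is 100 / C, where C is the lowest contrast (in percent) that still elicits
# tracking at a given spatial frequency. Threshold values are hypothetical.
spatial_freqs = [0.031, 0.064, 0.092, 0.103, 0.192, 0.272]  # cycles/degree
lowest_contrast_pct = [12.0, 6.0, 5.0, 8.0, 25.0, 60.0]     # hypothetical

for freq, c in zip(spatial_freqs, lowest_contrast_pct):
    sensitivity = 100.0 / c
    print(f"{freq:.3f} c/deg: sensitivity = {sensitivity:.1f}")
# Lower detectable contrast -> larger sensitivity -> better performance.
```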
Immunofluorescence and laser scanning confocal microscopy For confocal microscopy, converted and sorted cells were seeded on a chambered coverglass coated with 0.2% gelatin (Nunc). For mROS detection, cells were stained with MitoTracker Red (500 nM; Molecular Probes, M7512) for 30 min, washed, and used for downstream applications. For antibody staining, cells were fixed with 4% paraformaldehyde for 15 min and permeabilized with 0.25% Triton. Cells were then stained with primary antibody overnight at 4 °C. After incubation with the appropriate secondary antibody, images were captured using a Zeiss LSM 510 confocal microscope. Data analysis and 3D reconstruction were performed with the assistance of ZEN lite software. Alexa Fluor 633-, 549- and 488-tagged secondary antibodies were used. For counting Nrl–GFP + CiPCs, ten 20× visual fields were selected randomly. The list of primary antibodies is provided in Supplementary Table 4 . Measurement of mitochondrial ROS and TNF, H 2 O 2 treatment during reprogramming Mitochondrial ROS were detected and quantified using a published protocol 47 . In brief, CiPCs, Axin2 -depleted MEFs or reprogramming intermediates were incubated with MitoSOX Red (500 nM; Molecular Probes, M36008) for 30 min. Cells were then washed twice with 1× PBS and fluorescence was monitored with a microplate reader set to an excitation wavelength of 510 nm (excitation bandwidth, 10 nm) and an emission wavelength of 595 nm (emission bandwidth, 35 nm). mROS generation was also imaged with a fluorescence microscope and quantified with Leica Application Suite X software. The region of interest was outlined for each cell on the image after background subtraction. The average intensity within each region of interest was measured and exported to an Excel spreadsheet. The average change in fluorescence was calculated for each type of cell. There were at least three replicates for each condition. For H 2 O 2 production during reprogramming, d -galactose (0.5 mM) and galactose oxidase (0.015, 0.05, 0.1 or 1 U ml −1 ) were added on day 4 of reprogramming along with all the small molecules 48 . On days 10–11, the number of CiPCs was quantified under the microscope (20 fields were randomly selected for counting). For TNFα treatment, cells were treated with three different concentrations of TNFα (20 ng ml −1 , 50 ng ml −1 or 100 ng ml −1 ) on day 8 and the number of cells was counted on day 11.
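As a sketch of the ROI-based fluorescence quantification described above (background subtraction followed by per-cell mean intensity), with a synthetic image standing in for the microscope data:

```python
# Sketch of the region-of-interest (ROI) fluorescence quantification
# described above: subtract a background estimate, then average pixel
# intensity within each cell's ROI. The image and ROI coordinates are
# hypothetical stand-ins for the microscope data.
import numpy as np

rng = np.random.default_rng(0)
image = rng.poisson(lam=80, size=(512, 512)).astype(float)  # synthetic image

background = np.percentile(image, 5)          # simple background estimate
corrected = np.clip(image - background, 0, None)

rois = [(100, 140, 200, 240), (300, 330, 310, 350)]  # (r0, r1, c0, c1)
intensities = [corrected[r0:r1, c0:c1].mean() for r0, r1, c0, c1 in rois]

mean_intensity = float(np.mean(intensities))
print(f"mean ROI intensity = {mean_intensity:.1f} (n = {len(rois)} cells)")
```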
Measurement of oxygen consumption rate and extracellular acidification rate by Seahorse assay About 25,000 CiPCs in 100 μl of medium with all small molecules were seeded in XF24 cell culture plates. Blank medium was added to the appropriate wells for background correction. Plates were then placed in an incubator at 37 °C with 5% CO 2 for 6 h; this step enables the cells to adhere to the surface of the plate. Next, 150 μl of growth medium containing all small molecules was added to each well, followed by incubation overnight at 37 °C and 5% CO 2 . The following morning, the experiment was performed in a Seahorse Extracellular Flux Analyzer according to the manufacturer's instructions at the Metabolic Phenotyping Core facility, University of Texas Southwestern Medical Center. FACS For FACS, CiPCs were trypsinized (0.25%), passed through a 40-μm nylon cell strainer (Fisher Scientific, Cat# 08-777-1) and suspended in PBS containing 3% bovine serum. Starting MEFs were used as a negative control. Cells were then sorted on a Becton Dickinson LSRII flow cytometer at the Flow Cytometry core facility, University of Texas Southwestern Medical Center, or a Sony SH800 cell sorter at the Flow Cytometry core facility, University of North Texas Health Science Center. Sorted cells were collected in IMR90 medium, centrifuged, and processed for RNA extraction and other downstream applications. RNAi and generation of shRNA-transduced MEFs Lentiviral doxycycline-inducible shRNA constructs for Axin2 (GE Dharmacon, 12006), Tfam (GE Dharmacon, 21780) and RelA (Sigma, TRCN00023583) were purchased for lentivirus preparation. Lentiviral supernatants were collected for 4 days and concentrated using Lenti-X Concentrator (Clontech, Cat# 631231). An aliquot of concentrated lentivirus was then used to transduce P0 Nrl–GFP MEFs. For Ascl1 knockdown experiments, control (scramble) shRNA and lentiviral shRNA constructs (a gift from J. Johnson, UT Southwestern Medical Center; cloned into pLKO.1 (Addgene)) were transduced into MEFs. All lentivirus-transduced cells were selected for 3 days in the presence of puromycin (1 μg ml −1 ). Drug-selected cells were then used for chemical conversion, and shRNA induction ( Tfam and Axin2 ) was performed between day 4 and day 8 with doxycycline. Finally, CiPCs were quantified on day 11. Western blotting Whole-cell lysates were prepared as follows: cells were washed twice with ice-cold PBS, then lysed in RIPA buffer (Thermo Fisher Scientific) with 1× protease and phosphatase inhibitor cocktail (Thermo Fisher Scientific) on ice for 15–20 min. The mitochondrial and cytosolic fractions were prepared using a commercial kit (Thermo Fisher Scientific, Cat# 89874). Protein concentration was measured using a Pierce BCA protein assay kit (Thermo Fisher Scientific), and optical density at 562 nm was measured using a Biotek Synergy 2 microplate reader (BioTek Instruments). The NuPAGE protein gel system was used to separate cell lysates by electrophoresis. Blots were incubated in the appropriate primary antibodies overnight and developed with western HRP substrate (Millipore) or SuperSignal West Femto Maximum Sensitivity Substrate (Cat# 34094). Images were acquired using the Innotech FluorChem HD2 imaging system (Alpha Innotech). ChIP and amplification Real-time PCR-based quantitative ChIP analysis was performed using a ChIP Assay Kit according to the manufacturer's instructions (EMD Millipore, Cat# 17-295). In brief, 20,000 cells were cross-linked with formaldehyde (to a final concentration of 1%) for 10 min at room temperature with gentle agitation. Cells were sonicated such that the size of the chromatin fragments was between 300 and 500 bp. After pre-clearance with protein A agarose, chromatin was used for immunoprecipitation with antibodies specific to NF-κB p65. Immunoprecipitated chromatin was amplified with a Whole-Genome Amplification Kit (Sigma, WGA2). Amplified products were identified on an agarose gel. Primers were designed to amplify 60–100-bp amplicons and were based on sequences from the Ensembl Genome Browser for mouse. Products were amplified with Fast SYBR Green Master Mix in a 20-μl reaction. The amount of product was determined relative to a standard curve of input chromatin. Dissociation curves showed a single product for each amplicon. Primers for ChIP analysis are detailed in Supplementary Table 3 . BrdU labelling and staining BrdU (1 μM) was added on day 3 of reprogramming, and conversion was continued until day 11. CiPCs were washed twice with 1× PBS and immunofluorescence was performed according to the published procedure.
Preparation of Nrl–DsRed promoter reporter cells To prepare the Nrl–DsRed promoter reporter, we digested out the promoter and reporter fragment from a commercially available vector, pNrl–DsRed (Addgene, 13764), using restriction enzymes. These fragments were then cloned into a Gateway entry vector, pENTR2B (Thermo Fisher Scientific, A10463). Positive clones were then shuttled into a destination vector, pLentiX1 Zeo DEST (Addgene, 17299). The final product was used for lentivirus preparation. Human fetal lung fibroblasts (HFL1; ATCC, CCL-153) were transduced with the lentivirus and selected with Zeocin (200 μg ml −1 ; InvivoGen, ant-zn-1) for 8 days. Statistical analysis All data are presented as mean ± s.e.m. Statistical significance was determined using Student's t -test and one-way ANOVA in GraphPad Prism software (GraphPad Software); P values are indicated in the figures.
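As a sketch of the two statistical tests named above, using scipy in place of GraphPad Prism (the three replicate measurements per group are hypothetical):

```python
# Sketch of the statistics named above (two-tailed Student's t-test and
# one-way ANOVA), computed with scipy instead of GraphPad Prism. The
# three replicate measurements per group are hypothetical.
from scipy import stats

control = [0.31, 0.28, 0.35]
treated = [0.62, 0.58, 0.66]
treated_2 = [0.45, 0.41, 0.49]

t_stat, p_two_groups = stats.ttest_ind(control, treated)             # t-test
f_stat, p_three_groups = stats.f_oneway(control, treated, treated_2)  # ANOVA

print(f"t-test: t = {t_stat:.2f}, p = {p_two_groups:.4f}")
print(f"ANOVA:  F = {f_stat:.2f}, p = {p_three_groups:.4f}")
```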
Reporting summary Further information on research design is available in the Nature Research Reporting Summary linked to this paper. Data availability Supporting RNA-seq and ATAC–seq data are deposited in the Gene Expression Omnibus under accession codes GSE138520 (RNA-seq) and GSE138521 (ATAC–seq), respectively. Source data for Figs. 1 – 4 and Extended Data Figs. 1 – 3 , 5 – 8 are available within the manuscript files.
Researchers have discovered a technique for directly reprogramming skin cells into light-sensing rod photoreceptors used for vision. The lab-made rods enabled blind mice to detect light after the cells were transplanted into the animals' eyes. The work, funded by the National Eye Institute (NEI), was published April 15 in Nature. Until now, researchers have replaced dying photoreceptors in animal models by creating stem cells from skin or blood cells, programming those stem cells to become photoreceptors, which are then transplanted into the back of the eye. In the new study, scientists show that it is possible to skip the stem-cell intermediary step and directly reprogram skin cells into photoreceptors for transplantation into the retina. "This is the first study to show that direct, chemical reprogramming can produce retinal-like cells, which gives us a new and faster strategy for developing therapies for age-related macular degeneration and other retinal disorders caused by the loss of photoreceptors," said Anand Swaroop, Ph.D., senior investigator in the NEI Neurobiology, Neurodegeneration, and Repair Laboratory, which characterized the reprogrammed rod photoreceptor cells by gene expression analysis. "Of immediate benefit will be the ability to quickly develop disease models so we can study mechanisms of disease. The new strategy will also help us design better cell replacement approaches," he said. Scientists have studied induced pluripotent stem (iPS) cells with intense interest over the past decade. iPSCs are developed in a lab from adult cells—rather than fetal tissue—and can be used to make nearly any type of replacement cell or tissue. But iPS cell reprogramming protocols can take six months before cells or tissues are ready for transplantation. By contrast, the direct reprogramming described in the current study coaxed skin cells into functional photoreceptors ready for transplantation in only 10 days. The researchers demonstrated their technique in mouse eyes, using both mouse- and human-derived skin cells. "Our technique goes directly from skin cell to photoreceptor without the need for stem cells in between," said the study's lead investigator, Sai Chavala, M.D., CEO and president of CIRC Therapeutics and the Center for Retina Innovation. Chavala is also director of retina services at KE Eye Centers of Texas and a professor of surgery at Texas Christian University and University of North Texas Health Science Center (UNTHSC) School of Medicine, Fort Worth. Direct reprogramming involves bathing the skin cells in a cocktail of five small-molecule compounds that together chemically mediate the molecular pathways relevant for rod photoreceptor cell fate. The result is rod photoreceptors that mimic native rods in appearance and function. The researchers performed gene expression profiling, which showed that the genes expressed by the new cells were similar to those expressed by real rod photoreceptors. At the same time, genes relevant to skin cell function had been downregulated. The researchers transplanted the cells into mice with retinal degeneration and then tested their pupillary reflexes, a measure of photoreceptor function after transplantation. Under low-light conditions, constriction of the pupil is dependent on rod photoreceptor function. Within a month of transplantation, six of 14 (43%) animals showed robust pupil constriction under low light compared to none of the untreated controls.
Moreover, treated mice with pupil constriction were significantly more likely to seek out and spend time in dark spaces compared with treated mice with no pupil response and untreated controls. Preference for dark spaces is a behavior that requires vision and reflects the mouse's natural tendency to seek out safe, dark locations as opposed to light ones. "Even mice with severely advanced retinal degeneration, with little chance of having living photoreceptors remaining, responded to transplantation. Such findings suggest that the observed improvements were due to the lab-made photoreceptors rather than to an ancillary effect that supported the health of the host's existing photoreceptors," said the study's first author, Biraj Mahato, Ph.D., research scientist, UNTHSC. Three months after transplantation, immunofluorescence studies confirmed the survival of the lab-made photoreceptors, as well as their synaptic connections to neurons in the inner retina. Further research is needed to optimize the protocol to increase the number of functional transplanted photoreceptors. "Importantly, the researchers worked out how this direct reprogramming is mediated at the cellular level. These insights will help researchers apply the technique not only to the retina, but to many other cell types," Swaroop said. "If the efficiency of this direct conversion can be improved, it may significantly reduce the time it takes to develop a potential cell therapy product or disease model," said Kapil Bharti, Ph.D., senior investigator and head of the Ocular and Stem Cell Translational Research Section at NEI.
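The 6-of-14 responder rate versus zero responders among controls is the kind of contrast typically assessed with Fisher's exact test on a 2×2 contingency table. A hedged sketch; the control group size below is a placeholder assumption, since the article does not state it:

```python
# Fisher's exact test on the responder counts reported above.
# Treated: 6 of 14 responded. The untreated control group size
# (14) is a placeholder assumption, not a value from the article.
from scipy.stats import fisher_exact

table = [[6, 14 - 6],   # treated: responders, non-responders
         [0, 14]]       # untreated controls (assumed n = 14)
odds_ratio, p_value = fisher_exact(table, alternative="two-sided")
print(f"two-sided P = {p_value:.3f}")
```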
10.1038/s41586-020-2201-4
Chemistry
Knowledge gap closed in our understanding of degradation of ethane
Song-Can Chen et al. Anaerobic oxidation of ethane by archaea from a marine hydrocarbon seep, Nature (2019). DOI: 10.1038/s41586-019-1063-0 Journal information: Nature
http://dx.doi.org/10.1038/s41586-019-1063-0
https://phys.org/news/2019-03-knowledge-gap-degradation-ethane.html
Abstract Ethane is the second most abundant component of natural gas after methane and, like methane, is chemically unreactive. The biological consumption of ethane under anoxic conditions was suggested by geochemical profiles at marine hydrocarbon seeps 1,2,3 and by ethane-dependent sulfate reduction in slurries 4,5,6,7. Nevertheless, the microorganisms and reactions that catalyse this process have to date remained unknown 8. Here we describe ethane-oxidizing archaea that were obtained by specific enrichment over ten years, and analyse these archaea using phylogeny-based fluorescence analyses, proteogenomics and metabolite studies. The co-culture, which oxidized ethane completely while reducing sulfate to sulfide, was dominated by an archaeon that we name 'Candidatus Argoarchaeum ethanivorans'; other members were sulfate-reducing Deltaproteobacteria. The genome of Ca. Argoarchaeum contains all of the genes that are necessary for a functional methyl-coenzyme M reductase, and all subunits were detected in protein extracts. Accordingly, ethyl-coenzyme M (ethyl-CoM) was identified as an intermediate by liquid chromatography–tandem mass spectrometry. This indicates that Ca. Argoarchaeum initiates ethane oxidation by ethyl-CoM formation, analogous to the recently described butane activation by 'Candidatus Syntrophoarchaeum' 9. Proteogenomics further suggests that oxidation of intermediary acetyl-CoA to CO2 occurs through the oxidative Wood–Ljungdahl pathway. The identification of an archaeon that uses ethane (C2H6) fills a gap in our knowledge of microorganisms that specifically oxidize members of the homologous alkane series (CnH2n+2) without oxygen. Detection of phylogenetic and functional gene markers related to those of Ca. Argoarchaeum at deep-sea gas seeps 10,11,12 suggests that archaea that are able to oxidize ethane through ethyl-CoM are widespread members of the local communities fostered by venting gaseous alkanes around these seeps. Main Natural gas venting from deep marine horizons towards the sediment surface is a potent source of energy and carbon for microbial communities using various electron acceptors (or 'oxidants'). The high abundance of sulfate in seawater (a concentration of 28 mM; by contrast, the concentration of oxygen is approximately 0.3 mM) often results in extended anoxic subsurface zones that are shaped by the reduction of sulfate to sulfide 13. Here, anaerobic oxidation of methane (AOM) is a prominent process carried out by various archaea 14,15,16. According to geochemical depth profiles, ethane, propane, n-butane and iso-butane (non-methane alkanes) may also be biodegraded 1,2,3. Microorganisms involved in the degradation of propane and n-butane have so far been identified, and are either bacteria or archaea 6,9,17,18. Like the AOM archaea, those that oxidize propane and butane also depend on syntrophic interactions with sulfate-reducing bacteria 9,19,20. The activation of propane and butane in the archaea apparently involves the same basic mechanism—the formation of the thioethers propyl- or butyl-coenzyme M 9,21,22. By contrast, the alkane-degrading bacteria couple propane or butane oxidation to sulfate reduction in the same cell and initiate alkane oxidation by reaction with fumarate, yielding alkyl-succinates 6,23.
Ethane is the second most abundant hydrocarbon in gas of thermogenic origin, often exceeding 10% by volume 24, but is the least studied with respect to anaerobic biodegradation 8. Although its anaerobic utilization has been measured as ethane-dependent sulfate reduction or ethane consumption in sediment slurries 4,5,6,7, the microorganisms and reactions that catalyse this process remain unknown 8. Here we use continued selective enrichment to reveal the microorganisms that are capable of anaerobic ethane oxidation. The identified organisms are related to a lineage of uncultured archaea, initiate ethane oxidation through coenzyme M thioether formation and apparently depend on sulfate-reducing bacteria. A slurry with ethane-dependent sulfate reduction 6 was cultivated (inoculum size: one-third by volume) over 10 years at 12 °C, a temperature that is also suitable for anaerobic methanotrophs 25. This resulted in a sediment-free enrichment culture, termed Ethane12, which reduced 10 mM sulfate during a period of approximately 7 months with strict dependence on ethane addition. The approximate rate of ethane oxidation was 4 mmol per day per gram cell dry weight, comparable with AOM rates (Supplementary Information). The molar ratios of formed sulfide to consumed ethane in duplicate experiments were 1.63 and 1.86 (Fig. 1a and Extended Data Table 1), close to the theoretical value of 7/4, indicating ethane oxidation according to: $$4\,\mathrm{C_2H_6(g)} + 7\,\mathrm{SO_4^{2-}} + 14\,\mathrm{H^+} \rightarrow 8\,\mathrm{CO_2(g)} + 7\,\mathrm{H_2S(aq.)} + 12\,\mathrm{H_2O}$$ $$\Delta G^{\circ\prime} = -73.2\ \mathrm{kJ\ per\ mol\ ethane}$$ Fig. 1: Ethane oxidation with sulfate and major 16S rRNA gene phylotypes. a, The Ethane12 culture oxidized ethane (two different starting concentrations of ethane; black and grey diamonds, 2.7 and 1.6 mmol l−1, respectively) while reducing sulfate to sulfide (corresponding black and grey circles). Cultures without ethane did not produce sulfide (white circles). Ethane concentrations were stable in sterile incubations (white diamonds). Similar results were obtained with n = 4 biological replicates. b, Candidatus Argoarchaeum ethanivorans is affiliated with a lineage of uncultured Methanosarcinales. c, Eth-SRB1 and Eth-SRB2 share over 95% 16S rRNA gene identity and belong to the marine SEEP-SRB1 group of the Desulfosarcina–Desulfococcus clade. Filled circles on branch nodes indicate bootstrap values >60%; scale bars, nucleotide substitutions per site (b, c). Grown Ethane12 cultures were turbid and light microscopy revealed small single cocci (with a diameter of around 0.5 μm) as the abundant morphotype. Amplicon and metagenome sequencing consistently retrieved a distinct archaeal phylotype affiliated with the anaerobic methanotrophic (ANME)-2d cluster of Methanosarcinales (Fig. 1b and Supplementary Table 1). This phylotype, named Eth-Arch1, constituted on average 65% of the total number of cells (Fig. 2a, b and Supplementary Table 2). Cells of the Eth-Arch1 phylotype grew mostly unattached. Many formed protruding smaller vesicular structures that had DAPI and catalysed reporter deposition–fluorescence in situ hybridization (CARD–FISH) signals (Fig. 2f–h and Extended Data Fig.
1), suggesting division by budding, which is an alternative cell-division mechanism found across major archaeal phyla 26. In addition, two phylotypes of Desulfosarcina-affiliated sulfate-reducing bacteria (SRB) were identified, Eth-SRB1 and Eth-SRB2, each accounting for about 15% of the total number of cells (Figs. 1c, 2a, b and Supplementary Table 2). Their cells were morphologically distinct, appearing as slightly curved rods (1.0–1.5 μm by 0.3 μm) and larger ellipsoids (2.0–2.5 μm by 1 μm), respectively; they were also found predominantly as single cells (Fig. 2a, b). An estimated 10% of the total culture biomass occurred as archaeal–bacterial aggregates (Fig. 2c–e). Fig. 2: Microscopic characterization of the Ethane12 culture. a–e, Fluorescence upon specific probing of Ca. A. ethanivorans (a–e, red), the Eth-SRB1 phylotype (a, green) or both Eth-SRB phylotypes (b–e, green). Images are representative of n = 50 recorded images. Aggregates were rare, of 10–20 μm in diameter and consisted of varying proportions of archaea and bacteria (c–e). f, g, Helium ion microscopy images of Ca. A. ethanivorans displayed vesicular structures. Representative of n = 20 recorded images. h, The violet colour from the overlay of Ca. A. ethanivorans probe and DAPI signals indicates nucleic acids (selected budding cells). Representative of n = 10 recorded images. Scale bars, 5 μm (a–e, h) and 500 nm (f, g). Because the abundance of the archaeal phylotype suggested a key role in ethane oxidation, we examined its catabolism through metagenome, metaproteome and metabolite analyses. One bin of 1.99 Mb from the metagenome assembly corresponded to Eth-Arch1. Marker genes for Euryarchaeota and Archaea indicated a genome completeness of 89–94% (Supplementary Table 3). The Eth-Arch1 genome bin contained all genes (mcrABG) for a functional methyl-coenzyme M reductase (MCR)-like enzyme. In addition, their gene products were detected in protein extracts (Extended Data Table 2). The large subunit clustered closely to a McrA type that has been identified in Ca. Syntrophoarchaeum 9; both are divergent from the McrA of methanogens (Fig. 3a). Other MCR-encoding genes were not detected. Genes that encode (methyl)alkylsuccinate synthases, which would indicate a reaction of ethane with fumarate, were not found in the genomes of Eth-Arch1, Eth-SRB1 and Eth-SRB2, or in the whole metagenome library. Fig. 3: Phylogeny of McrA and identification of ethyl-CoM. a, The McrA of Ca. A. ethanivorans (red) branches with McrA of Ca. Syntrophoarchaeum (green) and unidentified microorganisms at marine hydrocarbon-impacted settings (blue). Filled circles on branch nodes indicate bootstrap values >80%. Scale bar, amino acid substitutions per site. b, c, FT–ICR–MS of Ethane12 metabolite extracts identified a mass peak corresponding to ethyl-CoM (m/z = 168.9999) (b) and the fragment masses of ethyl-CoM-derived bisulfite and ethenesulfonate (c). Similar results were obtained with n = 8 independent cultures. d–f, LC–MS/MS in multiple-reaction monitoring mode confirmed the metabolite structure as ethyl-CoM, with all three characteristic fragments at a retention time of 3.715 min. The Fourier transform ion cyclotron resonance mass spectrometry (FT–ICR–MS) spectrum of extractable metabolites from Ethane12 cultures revealed a peak of m/z = 168.9999 (Fig. 3b), which corresponds exactly to the predicted ethyl-CoM anion (C4H9O3S2−).
Subsequently produced fragments of this mass peak had m/z values of 80.9652 and 106.9808 and were identified as ethyl-CoM-derived bisulfite (HSO3−) and ethenesulfonate (C2H3O3S−), respectively. Identical mass peaks and fragments were obtained with synthetic ethyl-CoM (Fig. 3b, c). These findings were further corroborated by liquid chromatography–tandem mass spectrometry (LC–MS/MS) analyses, yielding identical retention time and mass transitions for extracted metabolites and the synthetic standard (Fig. 3d–f). We conclude that Eth-Arch1 archaea use the MCR-like enzyme to activate ethane to ethyl-CoM, similar to propane and n-butane activation in thermophilic archaea 9. All analyses are thus consistent with Eth-Arch1 being the primary ethane-degrading microorganism; we therefore name it 'Candidatus Argoarchaeum ethanivorans'. The description of the taxon is as follows: Argoarchaeum, argós (Greek): slow, unhurried, archaeum from archaeon (Greek): an ancient life form; ethanivorans, ethane (hydrocarbon) from aithérios (Greek): airy, gaseous, vorans (Latin): eating, devouring. The name implies a slow-growing archaeon capable of ethane oxidation. Further metagenomics and metaproteomics analyses of Ca. Argoarchaeum predict terminal oxidation of the ethane-derived C2 unit through cleavage of acetyl-CoA by acetyl-CoA decarbonylase/synthase (ACDS), and stepwise dehydrogenation of the derived C1 units (oxidative Wood–Ljungdahl pathway) (Extended Data Fig. 2, Extended Data Tables 2, 3). The reactions correspond—in principle—to those in acetoclastic methanogenesis and AOM, respectively 27. Beta-oxidation, as noted for the butane-derived C4 unit in Ca. Syntrophoarchaeum, is not needed in Ca. Argoarchaeum; its metagenome did not show evidence of such a pathway. The reactions for the conversion of the thioether (ethyl-CoM) to the thioester (acetyl-CoA) remain unknown. Candidatus Argoarchaeum encodes a complete canonical tetrahydromethanopterin S-methyltransferase, of which four subunits were detected in protein extracts (Extended Data Table 2). Similar to Ca. Syntrophoarchaeum 9, this finding suggests the possibility of a previously undescribed transfer of ethyl (rather than methyl) from ethyl-CoM to a tetrahydropterin for further oxidation to the acetyl level. Candidatus Argoarchaeum lacks established sulfate-reduction enzymes, indicating a syntrophic interaction with the sulfate-reducing bacteria. Both sulfate-reducing microorganisms encode multi-haem cytochromes and type IV pili, similar to sulfate-reducing bacteria partners of anaerobic methanotrophic archaea and Ca. Syntrophoarchaeum 9,19,20 (Extended Data Tables 4, 5); however, nanowire-mediated direct electron transfer is not supported by the observation of predominantly planktonic growth of Ca. Argoarchaeum and the absence of nanowire-like structures (Extended Data Fig. 1). Instead, the high enrichment of sulfur in Ca. Argoarchaeum cells compared to the bacterial cells (as revealed by nanoscale secondary ion mass spectrometry (nanoSIMS); Extended Data Fig. 3) suggests an interspecies interaction similar to that found in cold-adapted AOM consortia; archaea in these consortia have been proposed to foster their bacterial partners by a diffusible sulfur species 16. This is, to our knowledge, the first identification of an ethane-degrading anaerobe, closing the microorganism-related gap in our understanding of the biodegradability of members of the homologous alkane series in the absence of oxygen.
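As a quick arithmetic cross-check of the growth stoichiometry reported earlier (our own consistency check, not an analysis from the paper): the balanced equation predicts 7 mol sulfide per 4 mol ethane, that is a sulfide:ethane ratio of 1.75, which the measured ratios of 1.63 and 1.86 bracket. A minimal sketch:

```python
# Consistency check of the reaction
#   4 C2H6 + 7 SO4^2- + 14 H+ -> 8 CO2 + 7 H2S + 12 H2O
# Element balance, plus comparison of the measured sulfide:ethane
# ratios with the theoretical 7/4. Not part of the paper's analysis.
from collections import Counter

def atoms(formula_counts, coeff):
    return Counter({el: n * coeff for el, n in formula_counts.items()})

lhs = atoms({"C": 2, "H": 6}, 4) + atoms({"S": 1, "O": 4}, 7) + atoms({"H": 1}, 14)
rhs = atoms({"C": 1, "O": 2}, 8) + atoms({"H": 2, "S": 1}, 7) + atoms({"H": 2, "O": 1}, 12)
assert lhs == rhs, "reaction is not element-balanced"

theoretical = 7 / 4                      # mol sulfide per mol ethane
for measured in (1.63, 1.86):
    print(f"measured {measured:.2f} vs theoretical {theoretical:.2f} "
          f"({100 * (measured - theoretical) / theoretical:+.1f}%)")
```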
There is a phylogenetic relationship and metabolic similarity between Ca. Argoarchaeum and the clade of methanogenic and methanotrophic archaea (Figs. 1 and 4). The unique MCR reaction may have provided an evolutionarily successful, mechanistic trait for the formation and cleavage of primary apolar C–H bonds. Ethane is chemically most closely related to methane, and synthetic ethyl-CoM has been used as an analogue of methyl-CoM. In such a study, C–H bond cleavage in ethane has been analysed as the back reaction during net ethyl-CoM conversion to ethane by MCR from Methanothermobacter marburgensis 28. For the archaea with genuine metabolism of ethane, propane and butane, one may expect a specific substrate–enzyme fit. Structural modelling of MCR from Ca. Argoarchaeum and Ca. Syntrophoarchaeum based on the crystal structure of MCR (from Methanothermobacter 29) revealed differences between the amino acid sequences of the catalytic pocket (Extended Data Fig. 4). However, matches between the size of the hydrocarbon molecules and the suggested pocket are partly ambiguous. Three MCR types from Ca. Syntrophoarchaeum exhibited replacement of some aromatic amino acids (most obviously Phe330 and Tyr333 of McrA) by less space-filling aliphatic residues (Gly, Ala or Thr). By contrast, another Ca. Syntrophoarchaeum enzyme and that from Ca. Argoarchaeum did not show exchanges that reflect binding of substrates that are bulkier than methane (Extended Data Fig. 4c–g). The proposed enzymes for utilization of the non-methane gaseous alkanes apparently belong to an offset cluster within the MCRs, with clear separation from the enzymes involved in methanogenesis or AOM. Still, the McrA of ethane-, propane- and butane-degrading microorganisms are distantly related to each other (Fig. 3a). This suggests that adaptation to higher alkanes may have occurred through independent evolutionary events. The entire tree may be regarded as a family of 'alkyl-coenzyme M reductases', acting according to $$\mathrm{alkyl\text{-}S\text{-}CoM} + \mathrm{HS\text{-}CoB} \rightleftharpoons \mathrm{alkane} + \mathrm{CoM\text{-}S\text{-}S\text{-}CoB}$$ Fig. 4: Basic reactions in archaeal oxidation of gaseous alkanes compared to methanogenesis. a–d, Viewing AOM (a) as the 'archetype', oxidation of non-methane alkanes (b–d) was apparently achieved by modified methyl-coenzyme M reductases ('MCR') and acquisition of metabolic modules. Reactions converting alkyl-thioethers to acyl-thioesters (b–d, red) are presently unknown. Oxidation of propionyl-CoA and butyryl-CoA may involve the methylmalonyl-CoA pathway (c, purple; not yet validated) and beta-oxidation (d, purple), respectively. e, Methanogenesis from CO2 has been the basis for understanding AOM and oxidation of other alkanes by archaea. b–d, f, Cleavage of acetyl-CoA (blue) and subsequent oxidation of bound CO is known from acetoclastic methanogens. a, Oxidation of the methyl moiety may occur as in AOM. g, The overall reaction resembles the oxidative branch in methanogenesis from methanol. [Methyl] to [Formyl] summarizes the reaction sequence of the cofactor-bound C1 units. In summary, oxidation of the non-methane gaseous alkanes in comparison to AOM requires adaptation of the activating enzyme and additional enzymatic reactions (Fig. 4). The latter may be viewed as metabolic modules, which raises the question of their evolutionary acquisition.
Whereas ACDS for acetyl-CoA synthesis or cleavage shows vertical inheritance in archaea 30, other modules may have been acquired from unrelated microorganisms. This was found to be the case in Ca. Syntrophoarchaeum, in which the entire beta-oxidation pathway was probably acquired through lateral gene transfer from sulfate-reducing bacteria 9. From such a perspective, an analogous modular extension of methanogenesis to the biological formation of ethane, propane and butane appears to be possible, through still unknown natural processes or metabolic engineering. In addition, the present finding of Ca. Argoarchaeum further refines our functional interpretation of environmental sequence data. Phylotypes related to Ca. Argoarchaeum have frequently been retrieved from geographically distinct marine hydrocarbon seeps where ethane was abundant (Extended Data Table 6). Together with the detection of Ca. Syntrophoarchaeum 12 and of divergent mcr genes 10,11, this may indeed reflect in situ biological activity towards non-methane gaseous alkanes. Methods Data reporting No statistical methods were used to predetermine sample size. The experiments were not randomized and the investigators were not blinded to allocation during experiments and outcome assessment. Etymology Argoarchaeum, argós (Greek): slow, unhurried, archaeum from archaeon (Greek): an ancient life form; ethanivorans, ethane (hydrocarbon) from aithérios (Greek): airy, gaseous, vorans (Latin): eating, devouring. The name implies a slow-growing archaeon capable of ethane oxidation. Locality Enriched from cold marine hydrocarbon seeps at 550 m water depth, Gulf of Mexico, USA. Diagnosis Ethane-oxidizing archaeon, mostly single-celled cocci with a diameter of 0.5 μm; grown at 12 °C, pH 7–8. Sediment samples, enrichment and cultivation Sediment samples from marine hydrocarbon seeps of the Gulf of Mexico were collected in the Green Canyon area at 550 m water depth, at sites GC232 (27° 44.4566′ N, 91° 18.981′ W) and GC234 (27° 44.7003′ N, 91° 13.3093′ W). The samples were collected during a cruise with the RV Seward Johnson II (Harbour Branch Oceanographic Institution) in July 2002. Sediments were stored under N2 at 4 °C until use. Incubations with ethane were set up in 20-ml cultivation tubes provided with 10 ml artificial seawater (ASW) prepared as described previously 31,32 and 2 ml sediment as inoculum. The tubes were sealed with butyl-rubber stoppers under an anoxic atmosphere of N2 and CO2 (9:1 by volume) and provided with ethane at a partial pressure of 0.1 MPa. The tubes were incubated horizontally at 12 °C with slow shaking (100 r.p.m.). Clear ethane-dependent sulfide production in ethane-amended cultures compared to sulfide production in control incubations without added ethane was observed after about 1 year of incubation. Subsequent transfers (30% volume) over 5 years in fresh culture medium led to a sediment-free enrichment culture. The sediment-free culture was further maintained by inoculating fresh culture medium with 30% volume of a grown culture (≥15 mM sulfide production). Quantitative growth experiments were done in 80-ml bottles provided with 45 ml ASW medium and 5 ml inoculum from a grown culture; the bottles were flushed and sealed as described above. Bottles were supplied with either 1.7 or 3.0 ml ethane, added to the headspace with gas-tight syringes. Inoculated medium without ethane and sterile ASW medium with ethane were used as controls.
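For orientation, the ideal gas law converts these headspace additions into amounts of substance. A back-of-the-envelope sketch (ours, not a calculation from the paper); the injection conditions assumed below, about 1 atm and 20 °C, are placeholders:

```python
# Rough estimate (not from the paper) of ethane delivered by the
# 1.7 ml and 3.0 ml headspace additions, via the ideal gas law.
# Injection at ~1 atm and ~20 degC is our assumption.
R = 8.314          # J mol-1 K-1
P = 101_325        # Pa (~1 atm, assumed)
T = 293.15         # K  (~20 degC, assumed)
V_MEDIUM_L = 0.050 # 45 ml ASW medium + 5 ml inoculum

for v_ml in (1.7, 3.0):
    n_mol = P * (v_ml * 1e-6) / (R * T)       # PV = nRT
    print(f"{v_ml} ml ethane ~ {n_mol*1e3:.3f} mmol "
          f"~ {n_mol*1e3 / V_MEDIUM_L:.1f} mM if fully dissolved")
# Comparable to the reported starting concentrations of 1.6 and
# 2.7 mmol l-1; in practice ethane partitions between headspace
# and medium, so these are rough upper bounds.
```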
Chemical analyses For sulfide measurements, 0.1-ml samples were withdrawn from cultures and controls with N2-flushed syringes, and mixed with 4 ml of acidified copper sulfate solution. The formed colloidal copper sulfide was quantified photometrically at 480 nm, as described previously 33. Ethane concentrations in the culture headspace were quantified using a gas chromatograph (GC-14B, Shimadzu) equipped with a flame ionization detector and a Supel-Q PLOT column (30 m × 0.53 mm; film thickness, 30 μm). The gas chromatograph was operated with N2 as carrier gas (flow rate, 3 ml min−1); the oven temperature was 140 °C; the injector and detector temperatures were 150 °C and 280 °C, respectively 34. Headspace samples (100 μl) were withdrawn with N2-flushed, gas-tight syringes and injected into the injection port with a 1:10 split ratio. Sulfide and ethane concentrations shown (Fig. 1a) are mean values of duplicate measurements (technical replicates). Nucleic acid extraction and community sequencing For DNA extraction, the cells from 20-ml grown cultures were collected by centrifugation (20 min, 16,000 g, 4 °C; ROTINA 380R, Hettich) and suspended in 1.35 ml extraction buffer (100 mM Tris-HCl, 100 mM sodium EDTA, 100 mM sodium phosphate, 1.5 M NaCl, 1% CTAB, pH 8.0). Lysis was achieved by three cycles of freezing (liquid nitrogen) and thawing (37 °C), followed by incubation with proteinase K (10 mg ml−1) at 37 °C for 30 min, and with 20% SDS at 65 °C for 1 h. The supernatant was extracted with an equal volume of chloroform:isoamyl alcohol (24:1, v/v). Nucleic acids were precipitated with 0.6 volumes of isopropanol (1 h, room temperature), collected by centrifugation (30 min, 21,000 g, 4 °C), suspended in 40 μl PCR water and stored at −20 °C. Amplicon sequencing of the 16S rRNA gene (MiSeq; 2 × 300 cycles) was done using the universal prokaryotic primers 341F (5′-CCTACGGGNGGCWGCAG-3′) and 785R (5′-GACTACHVGGGTATCTAATCC-3′). The paired-end reads were merged using BBMerge 34.48 after clipping the adaptors and primers. The combined sequences were analysed using the SilvaNGS pipeline 35,36. Metagenome sequencing and data analysis Sequencing of paired-end libraries was performed on an Illumina MiSeq V3 platform (2 × 300 cycles), which generated about 2 million reads. The paired-end Illumina reads were demultiplexed using the Illumina bcl2fastq v.1.8.4 software, and quality-trimmed after removal of sequencing-adaptor remnants using Trimmomatic v.0.33 37 (minimum average Phred quality score of 33; minimum read length ≥36 bp). The quality-trimmed reads were assembled using the metagenome mode (-meta option) of SPAdes v.3.11.1 38 with the BayesHammer error-correction step and k-mer sizes of 21, 33, 55, 77, 99 and 127. The quality of the assembly was inspected with metaQUAST v.4.6.3 39 and scaffolds ≥500 bases were selected for downstream analyses. Metagenomic sequencing was also performed on a MinION Mk1B instrument using a R9.4.1 flow cell (Oxford Nanopore Technologies). The SQK-LSK 109 Ligation Sequencing Kit was used for library preparation using 1 μg genomic DNA according to the manufacturer's instructions, except for increasing the end-repair incubation time to 30 min at room temperature and 30 min at 65 °C, and that of the ligation step to 60 min. Raw sequence data were base-called using Albacore v.2.3.1 and adapters were removed using Porechop v.0.2.3. Protein-encoding genes in the bulk assembly were predicted using prodigal v.2.6.3 (-p meta option).
To search for enzymes potentially involved in ethane activation, reference databases with sequences of curated methyl-coenzyme M reductases and (methyl)alkylsuccinate synthases were compiled (Supplementary Table 4). BLASTp was performed against the reference database using protein sequences that were predicted from the assembled metagenome with relaxed stringency (e = 1 × 10−5). The resulting hits were further vetted by phylogenetic analysis: sequences were aligned with reference enzymes using MUSCLE v.3.8.31 40, and maximum likelihood trees were built using RAxML v.8.2.9 41 with the PROTGAMMALG model and 100 rapid bootstraps. Metagenomic binning was conducted with MaxBin v.2.2.4 42, which classifies scaffolds (>1,000 bp) based on tetranucleotide frequency and read coverage. 16S rRNA genes were extracted from bins using RNAmmer v.1.2 43 and aligned to the SILVA database release 132 using the ARB software package 44 for phylogenetic classification. Bins corresponding to Methanosarcinales and SEEP-SRB1 were selected and further refined by two rounds of read mapping, reassembly and binning. Read mapping was done using the bbmap tool of the BBMap v.38.00 package, with a minimum identity ('minid' option) of 90 and 97% in the first and second round of refinement, respectively. The mapped reads were reassembled in SPAdes using the same settings as for the bulk assembly, followed by binning in MaxBin. The refined SPAdes contigs were scaffolded with the nanopore long reads using npScarf 45 (minContig = 1000, japsa v.1.7-05b), and further polished with the Illumina reads using Pilon v.1.23 46. The completeness and/or contamination of the refined bins were estimated using CheckM v.1.0.11 47 and AMPHORA2 48. The refined bins were used as draft genomes, which were annotated with the RAST 49, KEGG 50, Pfam 51 and EggNOG 52 databases, after gene prediction using prodigal (-p single option). Predicted genes related to the proposed ethane-oxidation pathway and electron transfer were manually curated by comparison with genes that encode enzymes with confirmed function. Encoded proteins with a similar length (<30% deviation), domain composition and conserved functional sites were retained. 16S rRNA genes and tRNA sequences in draft genomes were extracted using RNAmmer v.1.2 and tRNAscan v.2.0 53, respectively. Phylogenetic analyses For phylogenetic analyses, representative full-length 16S rRNA gene sequences were selected from the SILVA database release 132. Alignments were generated using the SINA v.1.3.1 software 35, and refined by filtering out the columns that contained more than 95% gaps using trimAl v.1.2rev59 54. Maximum likelihood trees were calculated with RAxML and the GTRCAT model. One hundred rapid bootstrap analyses were then performed to determine the support value for each branch. For McrA phylogenetic analyses, sequences were aligned with MUSCLE, followed by removal of the ambiguous sites using trimAl (-automated1 option). Phylogenetic trees of McrA were constructed based on the trimmed alignment using RAxML with the PROTGAMMALG evolutionary model and empirical base frequencies. Over 200 bootstrap replicates were conducted to generate branch support values, according to the autoMRE bootstopping criterion. Probe design The specific oligonucleotide probe pETARCH669 (targeting Eth-Arch1) and the helpers hETARCH589 and hETARCH627 (Supplementary Table 5) were designed using the Probe Design tool of the ARB v.6.0.2 software package 55.
Probe specificity was checked against the SILVA database 56 and the Ribosomal Database Project 57. The stringency of the formamide concentration was determined in hybridization assays with increasing formamide concentrations from 0 to 60% (10% increments). Microscopic inspection showed the brightest fluorescence signal for formamide concentrations between 20 and 35%; a second stringency assay was performed with formamide concentrations from 20 to 45%, at 5% increments. CARD–FISH For CARD–FISH, 1-ml samples of Ethane12 cultures were fixed for 17 h at 4 °C with 1 ml of 4% paraformaldehyde (electron microscopy grade; Electron Microscopy Sciences). Volumes of 150, 250 and 500 μl were filtered on gold–palladium-coated filters (0.22-μm pore size, GTTP type, Millipore), washed three times with 1× PBS, dehydrated for 1 min with 80% ethanol and dried at room temperature. The filters were stored at −20 °C. Hybridizations were performed as described elsewhere 58. Filters were coated with 0.2% low-melting-point agarose kept at 48 °C (Biozym Scientific) using a spin-coater type SCI-40 (LOT-QuantumDesign) at 50 r.p.m. Bacteria were permeabilized with lysozyme (10 mg ml−1 in 0.05 M EDTA pH 8.0, 0.1 M Tris-HCl pH 7.5) for 30 min at 37 °C. Archaea were permeabilized with 0.1 M HCl for 1 min, followed by incubation with proteinase K (15 μg ml−1) for 5 min at room temperature. Endogenous peroxidases were inactivated by incubation in 0.15% H2O2 in absolute methanol (30 min, room temperature). The filters were hybridized for 2.5 h at 46 °C in standard hybridization buffer 58. The HRP-probe concentration was 0.166 ng ml−1. Probes used and corresponding hybridization conditions are shown in Supplementary Table 5. Hybridized filters were incubated for 15 min at 48 °C in prewarmed washing buffer. CARD was performed for 15 min at 46 °C in the dark in standard amplification buffer 58 containing either 1 μg ml−1 Alexa Fluor 488- or Alexa Fluor 594-labelled tyramides. Tyramides were prepared from the corresponding succinimidyl esters, Alexa Fluor 488 NHS Ester and Alexa Fluor 594 NHS Ester (Thermo Fisher Scientific), as previously described 59. The hybridized cells were further stained for 10 min with 1 μg ml−1 of 4′,6-diamidino-2-phenylindole (DAPI). For fluorescence microscopy, the filters were embedded in a 4:1 (v/v) mixture of low-fluorescence glycerol mountant (Citifluor AF1, Citifluor) and the mounting fluid VectaShield (Vector Laboratories). Hybridizations were evaluated by fluorescence microscopy using an Axio Imager.Z2 microscope (Carl Zeiss) with a 100× Plan-Apochromat objective (1.4 NA) and filter sets for DAPI, Alexa Fluor 594 and Alexa Fluor 488. Dual hybridizations were performed using a combination of HRP-pETARCH669 + HRP-SEEP1f-153 or HRP-pETARCH669 + HRP-DSS658. For both hybridizations, Alexa Fluor 594 was used for the hybridization of Eth-Arch1, whereas Alexa Fluor 488 was used for the hybridization of Eth-SRB1/2. The dual CARD–FISH procedures were similar to those described above, except that cell-wall permeabilization was done sequentially for bacteria and archaea, respectively. Before the second hybridization, the HRP introduced in the first hybridization was inactivated by incubation for 10 min at room temperature with 3% H2O2. Standard mounting and epifluorescence microscopy were used for visualization. Helium ion microscopy Culture samples of 1 ml were withdrawn with N2-flushed syringes and fixed for 12 h at 4 °C in 3% glutaraldehyde prepared in 0.2 M sodium cacodylate buffer.
The fixed samples were filtered on polycarbonate filters (0.22-μm pore size), rinsed twice with 0.2 M sodium cacodylate buffer, washed with deionized water for 3 min and post-fixed for 45 min at room temperature in 1% osmium tetroxide solution in 0.2 M sodium cacodylate buffer. The samples were dehydrated in an ethanol series (30, 50, 70, 80, 90 and 100%; 3 min each) and critical-point dried for 20 exchange cycles using a LEICA EM CPD300 Critical Point Dryer (Leica). Filter pieces of about 2.5 mm2 were cut, glued on standard scanning electron microscopy stubs with conductive silver epoxy (ACHESON DAG 1415, Plano) and placed on the helium ion microscope sample holder. Helium ion microscopy imaging was done with a He-ion landing energy of 25 keV and a beam current of about 1.3 pA. For imaging, secondary electrons were detected using an Everhart–Thornley detector; image resolution, measured directly on the sample (edges of filter pores), was <3 nm. Acquired images were minimally post-processed in ImageJ v.1.48 by adjusting brightness and contrast. Shotgun proteome analysis For protein extraction, cells from 30-ml culture volumes were collected by centrifugation (16,000 g, 4 °C), washed with 100 mM ammonium bicarbonate buffer and suspended in 30 μl ammonium bicarbonate (50 mM). Cells were lysed by three freeze–thaw cycles (liquid nitrogen and 37 °C), and incubated with 50 mM dithiothreitol for 1 h at 30 °C. The reduced proteins were alkylated with 200 mM iodoacetamide in the dark for 1 h at room temperature, and digested with 0.6 μg of trypsin (Promega) for 10 h at 37 °C with shaking (400 r.p.m.). Peptides were purified with ZipTip-C18 columns (Millipore). The desalted peptides were analysed using an LTQ-Orbitrap Fusion mass spectrometer (Thermo Fisher Scientific) in tandem with a nanoUPLC system (nanoAquity, Waters) 60. The MS/MS spectra were searched against the annotated metagenomes of Eth-Arch1, Eth-SRB1 and Eth-SRB2 using the Sequest and Amanda algorithms in Proteome Discoverer (v.2.2.0.388, Thermo Fisher Scientific). The mass tolerances of precursor and fragment ions were set to 3 p.p.m. and 0.1 Da (Sequest), and 10 p.p.m. and 0.2 Da (Amanda). Proteins were considered to be identified when two full tryptic peptides were recovered at a false-discovery rate of 0.05. The relative abundance of proteins was calculated based on the precursor-ion intensity of the mapped peptides compared to the sum of all identified peptides for the respective organism, using the 'Feature Mapper' and 'Precursor Ions Quantifier' modules in Proteome Discoverer. The maximum retention time shift between replicate Orbitrap runs was set to 10 min. The mass spectrometry data were deposited to the ProteomeXchange Consortium via the PRIDE partner repository 61. Synthesis of authentic standards To synthesize ethyl-CoM, 4 g of sodium 2-mercaptoethanesulfonate (coenzyme M, purity ≥98%; Sigma-Aldrich) was dissolved in 30 ml of 30% ammonium hydroxide solution in a serum bottle. Twice the molar amount of bromoethane (purity ≥99%; Sigma-Aldrich) was added and the bottle was sealed and mixed at 300 r.p.m. for 4 h at room temperature. The residual bromoethane was removed by sparging with N2. Ethyl-CoM (m/z = 168.9999) was the major mass peak identified by FT–ICR–MS; major m/z peaks indicating free CoM, CoM dimers or bromoethane were not detected. The standard was kept at 4 °C without further purification.
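The reported ion masses can be double-checked from elemental composition alone. A small sketch (our cross-check, not part of the paper's workflow) that recomputes the m/z of the ethyl-CoM anion and its diagnostic fragments from standard monoisotopic atomic masses:

```python
# Verifying the reported m/z values from elemental composition.
# Monoisotopic atomic masses (standard IUPAC values); the anion
# mass includes one electron. A cross-check, not from the paper.
MASS = {"C": 12.0, "H": 1.00782503, "O": 15.99491462, "S": 31.97207117}
ELECTRON = 0.00054858

def anion_mz(composition):
    """m/z (z = 1) of a singly charged anion from its atom counts."""
    return sum(MASS[el] * n for el, n in composition.items()) + ELECTRON

ions = {
    "ethyl-CoM (C4H9O3S2-)":      {"C": 4, "H": 9, "O": 3, "S": 2},
    "bisulfite (HSO3-)":          {"H": 1, "S": 1, "O": 3},
    "ethenesulfonate (C2H3O3S-)": {"C": 2, "H": 3, "O": 3, "S": 1},
    "C2H5S- fragment":            {"C": 2, "H": 5, "S": 1},
}
for name, comp in ions.items():
    print(f"{name}: m/z = {anion_mz(comp):.4f}")
# Prints 168.9999, 80.9652, 106.9808 and 61.0117, matching the
# values reported in the text.
```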
Extraction of metabolites Cells from 30-ml culture volumes were collected by centrifugation (10 min, 16,000 g, 4 °C), washed twice with 100 mM ammonium bicarbonate buffer, and resuspended in 1 ml of acetonitrile:methanol:water (40:40:20, v/v). Around 0.3 g glass beads (0.1-mm diameter, Roth) was added to each tube. Cells were lysed using a bead-based homogenizer (PowerLyzer 24 bench, MO BIO Laboratories) operated for 5 cycles of 50 s reciprocal shaking at 2,000 r.p.m. with a 15-s pause. The extracts were separated from cell debris and glass beads by centrifugation (10 min at 21,000 g, 4 °C), and stored at −20 °C until analysis. Mass spectrometry of cell extracts and standards Synthetic ethyl-CoM (approximately 10 μg ml−1) and extracted cellular metabolites were measured with ultra-high-resolution FT–ICR–MS (SolariX XR 12T, Bruker Daltonics) with negative electrospray ionization (Apollo II ESI source capillary voltage: 4.5 kV) in direct infusion mode (4 μl min−1). Spectra were recorded with an 8 MWord time domain (0.84 s transient length) and 2 s accumulation time in magnitude mode between m/z 37 and 1,000, resulting in a mass resolution of approximately 450,000 at m/z 200. The instrument was linearly calibrated with NaTFA clusters between m/z 113 and 521, resulting in an average root-mean-square error of the calibration masses of 32 p.p.b. (n = 4). No lock mass or internal calibration was used for the sample mass spectra, resulting in an ethyl-CoM standard mass accuracy of <0.01 p.p.m. For each measurement of metabolite extracts, 64 spectra were co-added with quadrupole preselection of the mass window of ethyl-CoM (m/z 169 ± 10). Collision-induced fragmentation was carried out after quadrupole preselection with 7 V collision energy. Fragment masses of 61.0117 (C2H5S−), 80.9652 (HSO3−) and 106.9808 (C2H3SO3−) were used as indicative of ethyl-CoM, in accordance with previous studies 9,62. The mass error for the ethyl-CoM mass peak and its derived fragments was <0.05 p.p.m. (ethyl-CoM), <0.2 p.p.m. (ethyl-CoM-derived bisulfite) and <0.05 p.p.m. (ethyl-CoM-derived ethenesulfonate). Fragment information from the ethyl-CoM standard was used to implement an LC–MS/MS method to confirm the presence of ethyl-CoM in extracts. A triple quadrupole mass spectrometer (Xevo TQ-S, Waters Corporation) in negative electrospray ionization mode (capillary voltage: 1 kV) was used in multiple-reaction monitoring mode. All three ethyl-CoM transitions (m/z 169 to 107, m/z 169 to 81 and m/z 169 to 61) were initially optimized (cone voltage 52 V and collision energy 14–16 V) by direct infusion of standard solution (approximately 10 ng ml−1) into the mass spectrometer. The mass spectrometer was coupled to an ultra-high performance liquid chromatograph (ACQUITY I-Class, Waters) equipped with a reversed-phase column (ACQUITY UPLC HSS T3 1.8 μm; 2.1 mm × 100 mm, Waters; 30 °C) and run with a binary gradient (eluent A: 2 mM ammonium acetate in 95% H2O and 5% methanol; eluent B: 2 mM ammonium acetate in 75% methanol, 20% acetonitrile and 5% H2O) at a flow rate of 0.25 ml min−1. Cell extracts (1 ml) were evaporated to dryness and dissolved in 200 μl of 95% H2O and 5% methanol. For each analysis, 10 μl was injected into the UPLC. Retention time and presence of all three multiple-reaction monitoring transitions were used as quality criteria. Homology modelling of methyl-coenzyme M reductase McrA and McrB sequences from methanogenic and methanotrophic archaea, Ca. A.
ethanivorans and Ca. Syntrophoarchaeum butanivorans were aligned with MUSCLE. Tertiary structures of McrA, McrB and McrG from Ca. Argoarchaeum and Ca. S. butanivorans were predicted using SWISS-MODEL with the default parameters 63. The models have qualitative model energy analysis 64 scores ranging from −3.81 to −0.44 (Supplementary Table 6). The modelled structures were superimposed on the crystal structure of M. marburgensis MCR. The active sites of MCR were visualized and exported as images using MacPyMOL v.1.7.4. Nano-focused secondary ion mass spectrometry Ethane12 cultures (15 mM sulfide) were transferred under anoxic conditions to Oak Ridge centrifuge tubes; cells were collected by centrifugation (13,300 g, 12 °C; ROTINA 380R, Hettich), suspended in fresh ASW medium and provided with ethane. Samples were collected after 95, 110 and 120 days of incubation, fixed with 2% paraformaldehyde and added on gold–palladium-coated polycarbonate filters without further treatments. Filter pieces (10-mm diameter) were analysed with a NanoSIMS-50L instrument (CAMECA, AMETEK) in negative extraction mode using Cs+ as the primary ion source. Areas of 100 × 100 μm2 were pre-implanted with a Cs+ beam of 200 pA and 16 keV for 10 min to equilibrate the working function for negative secondary ions. Fields of view of 30 × 30 μm2 were measured at 512 × 512-pixel resolution with 2 ms dwell time per pixel. Secondary ion species (¹²CH⁻, ¹⁶O⁻, ¹²C¹⁴N⁻, ¹³C¹⁴N⁻, ³²S⁻, ³¹P⁻ and ³¹P¹⁶O₂⁻) were collected in parallel. A mass resolving power (M/ΔM) between 8,000 and 12,000 was achieved as previously described 65, with the exit slits adjusted to 40 μm. For each field of view, twelve scans were accumulated, corrected for lateral drift and aligned with the Look@NanoSIMS software 66. Regions of interest were defined for individual cells based on ¹²C¹⁴N⁻ and ³²S⁻ secondary ion count maps 66. Scanned areas in which identification of cells was ambiguous (for example, cell clusters, overlapping cells) were not considered. To avoid loss of low-molecular-mass compounds 67 or possible alteration of intracellular sulfur after CARD–FISH (that is, H2O2 treatment), assignment of regions of interest as Ca. Argoarchaeum, Eth-SRB1 and Eth-SRB2 was based on their distinct morphologies. Reporting summary Further information on research design is available in the Nature Research Reporting Summary linked to this article. Data availability Metagenome sequence data are archived in the NCBI database under BioProject number PRJNA495932, including the draft genomes of Ca. Argoarchaeum ethanivorans (SAMN10235260), Eth-SRB1 (SAMN10235261) and Eth-SRB2 (SAMN10235262). The 16S rRNA gene amplicon reads have been submitted to the NCBI Sequence Read Archive (SRA) database under the accession number SRR8089822. The proteomics dataset has been deposited with the ProteomeXchange Consortium under identifier PXD011597. Source Data for the quantitative growth experiments (Fig. 1a), FT–ICR–MS (Fig. 3b, c) and LC–MS/MS measurements (Fig. 3d–f) are provided. All other data are available in the paper or the Supplementary Information.
With a share of up to ten percent, ethane is the second most common component of natural gas and is present in deep-seated land and marine gas deposits all around the world. Up to now, it was unclear how ethane is degraded in the absence of oxygen. A team of researchers from the Helmholtz Centre for Environmental Research (UFZ) has solved this mystery, after more than fifteen years of research work in cooperation with colleagues from the Max Planck Institute for Marine Microbiology in Bremen. In a microbial culture obtained from Gulf of Mexico sediment samples, the scientists have discovered an archaeon that oxidises ethane. The single-celled organism has been named Candidatus Argoarchaeum ethanivorans, which literally means 'slow-growing ethane eater'. In an article now published in the journal Nature, the researchers describe the metabolic pathway of ethane degradation. The researchers had to demonstrate a great deal of patience in solving the mystery of anaerobic degradation of saturated hydrocarbons. In 2002, UFZ microbiologist Dr. Florin Musat, who at that time was conducting research at the Bremen-based Max Planck Institute for Marine Microbiology, received a sediment sample originating from the Gulf of Mexico. The sample had been collected from natural gas seeps at a water depth of more than 500 metres. It took over ten years of cultivation effort to obtain sufficient quantities of the culture containing the archaeon, as the basis for detailed experiments to decode the structure and metabolism of the microbial community. During his regular measurements, Florin Musat recognised that oxidation of ethane was coupled to reduction of sulphate to hydrogen sulphide. "For quite a long time, we thought that the anaerobic degradation of ethane was carried out by bacteria in a similar way to the degradation of butane or propane, but we were unable to identify metabolic products typical for a bacterial mechanism of oxidation," says Musat. In order to uncover the secrets of ethane oxidation, Musat, who has been working at the UFZ since 2014, exploited the possibilities offered by the ProVIS technology platform. The Centre for Chemical Microscopy (ProVIS) combines a large number of large-scale instruments, allowing efficient, rapid and sensitive chemical analyses of biological samples, structures and surfaces at the nanometre scale. For example, Musat's team used fluorescence microscopy to show that Candidatus Argoarchaeum ethanivorans makes up the dominant share of the culture at around 65 percent of the total cell number, whereas two sulphate-reducing Deltaproteobacteria make up about 30 percent. The metabolites and proteins were characterised by high-resolution mass spectrometry techniques, and the chemical composition and spatial organisation of individual microorganisms were determined by helium-ion microscopy and nanoSIMS. Using these methods, the researchers demonstrated that the archaeon is responsible for the oxidation of ethane to carbon dioxide, and the accompanying bacteria for reducing sulphate to sulphide. This fluorescence microscopy image shows Candidatus Argoarchaeum ethanivorans in magenta, and the sulfate-reducing bacteria in cyan. Credit: Niculina Musat / UFZ Furthermore, they observed that Candidatus Argoarchaeum ethanivorans does not form aggregates with the partner bacteria during oxidation of ethane, in contrast to cultures degrading methane, propane or butane. "The archaeon and the two types of bacteria grow mostly as free cells.
Intercellular connections by nanowires that would mediate the transfer of electrons, as shown in other cultures, are missing," says Musat. For this reason, an exciting question remains: how do Argoarchaeum and the bacteria interact with each other? Metagenome analyses revealed that the archaeon does not possess known genes for sulphate reduction. This means that the electrons from ethane oxidation have to be transferred to the sulphate-reducing bacteria. Investigations conducted by nanoSIMS suggested that this transfer could potentially occur through sulphur compounds. "The archaea gain energy from the oxidation of ethane in an obviously complex syntrophy (community of cross-feeders) with their sulphate-reducing partners," says Musat. In their hunt for the mechanism of electron transfer, Musat's team investigated the culture using a helium-ion microscope. This analysis led to an unexpected finding: Candidatus Argoarchaeum forms small cellular vesicles, which remain attached in unusual tiny clusters, indicating that the archaea divide by budding. Finally, in the genome of Candidatus Argoarchaeum ethanivorans, the scientists identified all genes necessary for a functional methyl-coenzyme M reductase-like enzyme, which catalyses the first step in the anaerobic degradation of ethane. Using ultra-high-resolution mass spectrometry, they were also able to find the product of this enzyme, ethyl-coenzyme M. Further genome and proteome analyses identified the genes and enzymes for the subsequent reactions, thus deciphering the complete metabolic pathway. Florin Musat at the ultra-high-resolution mass spectrometer. This instrument was essential to unlock the metabolic pathways of Candidatus Argoarchaeum ethanivorans. Credit: André Künzelmann / UFZ To date, research on anaerobic oxidation of ethane has been primarily fundamental. But taking one step further, the researchers' findings could also be of use for industrial applications. "We are now aware of the mechanisms underlying the degradation of short-chain hydrocarbons by 'alkyl'-CoM reductases, and we assume that the reverse reactions may be feasible. If demonstrated, this would open the way to biotechnologies that produce hydrocarbons using these or similar microorganisms," says Musat. This could mark the beginning of new biotechnological applications to produce synthetic fuels, such as energy-rich butane. Butane contains more energy per litre and can be much more easily liquefied than methane, a prospect that Florin Musat and his team will keep an eye on in future research.
10.1038/s41586-019-1063-0
Earth
Reconstructing the history of mankind with the help of fecal sterols
E. Argiriadis et al. Lake sediment fecal and biomass burning biomarkers provide direct evidence for prehistoric human-lit fires in New Zealand, Scientific Reports (2018). DOI: 10.1038/s41598-018-30606-3 Journal information: Scientific Reports
http://dx.doi.org/10.1038/s41598-018-30606-3
https://phys.org/news/2018-10-reconstructing-history-mankind-fecal-sterols.html
Abstract Deforestation associated with the initial settlement of New Zealand is a dramatic example of how humans can alter landscapes through fire. However, evidence linking early human presence and land-cover change is inferential in most continental sites. We employed a multi-proxy approach to reconstruct anthropogenic land use in New Zealand’s South Island over the last millennium using fecal and plant sterols as indicators of human activity and monosaccharide anhydrides, polycyclic aromatic hydrocarbons, charcoal and pollen as tracers of fire and vegetation change in lake-sediment cores. Our data provide a direct record of local human presence in Lake Kirkpatrick and Lake Diamond watersheds at the time of deforestation and a new and stronger case of human agency linked with forest clearance. The first detection of human presence matches charcoal and biomarker evidence for initial burning at c. AD 1350. Sterols decreased shortly after to values suggesting the sporadic presence of people and then rose to unprecedented levels after the European settlement. Our results confirm that initial human arrival in New Zealand was associated with brief and intense burning activities. Testing our approach in a context of well-established fire history provides a new tool for understanding cause-effect relationships in more complex continental reconstructions. Introduction Humans have altered landscapes around the world for thousands of years through their use of fire 1 . However, the intensity and extent of anthropogenic burning in many regions remains unresolved 2 . Most reconstructions of past fire regimes focus on variations in the concentration of charcoal particles preserved in the sediments of lakes and other natural wetlands. Such records disclose the occurrence of fire and variations in the frequency and timing of biomass burning, but they are unable to discern whether ignitions originate from natural or human causes. Finding methods for disentangling naturally caused fires from those deliberately started by people remains one of the great challenges in paleofire science 1 , 2 . Times and places where past fire activity and vegetation depart from trends that can be explained by climate alone are often used as indirect evidence for anthropogenic burning. Identifying past human presence in particular watersheds has generally relied on archeological data, but few locations offer a clear archeological signal of local fire use. As a result, linking past changes in vegetation and fire with human activity is largely inferential 3 , 4 . One of the most dramatic examples of prehistoric anthropogenic burning occurs in New Zealand, where charcoal and pollen records provide incontrovertible evidence of unprecedented fire activity and native forest decline coinciding with the arrival of people at c. AD 1280 5 , 6 , 7 , 8 . Results from modeling studies that examine vegetation-fire feedbacks attribute the rapid environmental change to small-scale positive feedbacks between fire and the post-fire establishment of high flammability shrublands that enable subsequent burning 9 , 10 . Humans, as a new source of ignition, thus triggered a conversion of forest to a more fire-prone shrubland/grassland. 
This reconstruction, however, is based on charcoal and pollen data alone, and in the absence of archeological evidence of burning it is not possible to determine if humans were present in a particular watershed at the time of a fire event, or if the timing of their activities coincided with the loss of forest 11,12. Here we provide direct evidence of brief but intense intervals of local human activity in two New Zealand watersheds at the same time as other proxies show fire and deforestation. We utilize specific molecular markers to provide a continuous record of human presence, fire activity and land use in the South Island of New Zealand during the last millennium. We determined polycyclic aromatic hydrocarbon (PAH), monosaccharide anhydride (MA), fecal and plant sterol fluxes in sediment cores from Lake Kirkpatrick (45.03° S, 168.57° E) and Lake Diamond (44.65° S, 168.96° E) (Fig. 1) to compare with pollen and charcoal records and paleoclimate information. The three MA compounds (levoglucosan, mannosan and galactosan) are formed from cellulose and hemicellulose combustion 13 and the seventeen PAHs (Table S1) are tracers for incomplete combustion of organic matter 14. MAs and PAHs can travel medium to long distances in association with the fine fraction of aerosol particles 15,16 and ultimately reach lake sediments through wet and dry deposition 17. Their signal in lakes complements the macroscopic charcoal (pieces >125 μm) signal of local fires by providing fire information at regional scales. In Lake Kirkpatrick and Lake Diamond, we also employ two 5β-stanols originating from human feces (coprostanol and epi-coprostanol) 18 to trace human presence in the catchments. Two Δ5-sterols (cholesterol and sitosterol) and two 5α-stanols (cholestanol and sitostanol) are used as additional land-use proxies, as these markers account for terrigenous input to the lake and the chemical conditions of sediments 19,20 (Table S2). Figure 1 Map of the sampling locations. (a) Diamond Lake in the Lake Wanaka area and sampling location. (b) Lake Kirkpatrick in the Lake Wakatipu area and sampling location. Contains data sourced from the LINZ Data Service licensed for reuse under CC BY 3.0. Fire Record Prior to initial human settlement c. AD 1280, fluxes of charcoal and biomarkers in Lake Kirkpatrick and Lake Diamond were nearly undetectable (Fig. 2). The lack of large charcoal particles (>125 µm in diameter) testifies to the near-absence of fire in the surrounding Lophozonia-podocarp forests, which is typical of records from the region 5. The presence of extremely low values of molecular fire tracers (PAH total flux <1 ng cm−2 yr−1, MA total flux <4 ng cm−2 yr−1) prior to human arrival is attributed to small fires in drier settings in New Zealand or to background atmospheric deposition from distant fires in southeastern Australia and post-depositional processes 21,22,23. Figure 2 Multi-proxy comparison. (a–c) Fecal sterol, total PAH and total MA fluxes (ng cm−2 yr−1) in Lake Kirkpatrick (this study). (d, e) Charcoal flux (pieces cm−2 yr−1) and pollen percentages in Lake Kirkpatrick (McWethy et al. 6). The fluxes of all MAs and PAHs abruptly and simultaneously increase in sediments at c. AD 1345–1365 and mark a period of intense or multiple fire events.
The peaks occur shortly after Māori arrival and during the Initial Burning Period (IBP), which has been identified from charcoal records from Lake Kirkpatrick and other South Island lakes 5 , 24 , 25 . Levoglucosan, mannosan and galactosan records clearly indicate that combustion of plant biomass reached a maximum at c. AD 1350, with fluxes of 390, 278 and 66 ng cm⁻² yr⁻¹, respectively. Relative proportions of the three isomers are consistent with emission factors typical of conifer burning 26 . The PAH pattern (Fig. S1) is consistent with typical PAH profiles obtained from the combustion of biomass including several types of hardwood 27 and softwood 28 , 29 , 30 . Low molecular weight compounds (Table S1), such as naphthalene, acenaphthylene and acenaphthene (128–154 g mol⁻¹), are poorly represented, which is not surprising as they are commonly present in the gaseous phase and are relatively more water-soluble and prone to biodegradation than heavier PAHs, which adsorb onto atmospheric particles 31 . The PAH distribution is dominated by 3- and 4-ring molecules (166–228 g mol⁻¹), in particular phenanthrene, fluorene, fluoranthene and pyrene (Fig. S1). These PAH compounds are produced during biomass burning and are involved in the formation of atmospheric particles, eventually incorporated into lake sediments as a result of aerial deposition and surface runoff 17 . At Lake Kirkpatrick, phenanthrene accounted for 43% of total PAHs on average, with concentrations ranging from a few nanograms per gram (dry weight) to a maximum of 212 ng g⁻¹, corresponding to a flux of 180 ng cm⁻² yr⁻¹, during the IBP. Heavier compounds (252–278 g mol⁻¹) were present only in small to negligible concentrations, which is not surprising as they are mainly produced by higher-temperature processes (e.g. fossil fuel combustion) and associated with coarse particles that are less likely to travel far from the source area 17 , 32 , 33 . The abundance and distribution of medium-weight PAHs during the IBP is consistent with sustained fires characterized by low oxygen availability and a high flaming to smoldering combustion ratio 34 , 35 . Based on the scarcity of high molecular weight PAHs, on the thermal stability of the detected compounds 36 and on burning experiments 37 , 38 , we infer a maximum combustion temperature averaging 400–500 °C. This temperature range was found to maximize the production of 3–4-ring PAHs from biomass combustion 32 , 39 . Concentrations and trends detected for medium-weight PAHs thus suggest an infrequent low-intensity natural fire regime before the arrival of humans and high-intensity or high-frequency fires during the IBP. The peak in retene, a tracer of combusted coniferous wood and the associated degradation of abietic acid 30 , in the Lake Kirkpatrick core (101–112 cm depth) implies an abundance of softwood fuel (Fig. S2). However, the levoglucosan to mannosan ratio suggests an increased hardwood to softwood fuel ratio 40 at ~AD 1348–1394 (101–112 cm), ~AD 1790–1805 (33–36 cm) and ~AD 1924–1949 (9–13 cm) (Fig. S2). The observed retene record is consistent with the combustion and/or the post-depositional reduction of diterpenoids from softwood species 41 .
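Because the levoglucosan/mannosan (L/M) ratio carries the fuel-type signal discussed above, it is straightforward to compute down-core. The R sketch below is illustrative only: the first pair of values reuses the c. AD 1350 peak fluxes reported above, the remaining values are invented, and any thresholds separating softwood from hardwood sources should be taken from the emission-factor literature rather than from this example.

    # Illustrative sketch of the levoglucosan/mannosan (L/M) fuel-type ratio.
    # The first pair reproduces the c. AD 1350 peak fluxes reported above
    # (390 and 278 ng cm-2 yr-1); the rest are invented for demonstration.
    levoglucosan <- c(390, 120, 45, 210)   # ng cm-2 yr-1
    mannosan     <- c(278,  30,  9,  25)
    lm_ratio <- round(levoglucosan / mannosan, 1)
    lm_ratio  # low ratios are typically associated with softwood (conifer) fuel,
              # higher ratios with a greater hardwood contribution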
After c. AD 1350, the pollen data 5 , 6 suggest a shift in the composition of vegetation from native Lophozonia-podocarp forest towards more fire-adapted shrubs (e.g., Leptospermum), grasses and bracken fern (Pteridium esculentum). The biomarker reconstruction is consistent with the new fuel types that replaced the podocarp-hardwood forests and with a change in fire regime characterized by infrequent or smaller low-intensity fires, based on low levels of macroscopic charcoal (Fig. 2) 6 . Such fires would result in lower temperatures and less oxygen-depleted conditions, limiting the production of MAs and PAHs, which is strongly dependent on fuel and burning conditions 30 , 35 , as observed in the post-settlement record. Anthropogenic Land Use Record Unlike the fire tracers, sterols primarily reach lake sediments through runoff 42 , and thus describe inputs from the local watershed. The evidence of human presence within the Lake Kirkpatrick and Lake Diamond catchments is provided by coprostanol, the most abundant sterol in human feces 18 , and epi-coprostanol, resulting from the epimerization of coprostanol 19 . In addition, cholesterol and cholestanol are produced by vertebrates, especially mammals including humans 43 , whereas sitosterol (the most abundant sterol in land plants 44 , 45 ) and sitostanol come from the input of terrigenous organic matter to the lake. The degree of sterol-to-stanol conversion reflects the redox conditions of the basin 46 (see supplementary information). Prior to human settlement, fluxes of fecal sterols in both lakes were near zero, although we measured small background fluxes (at the sub-ng cm⁻² yr⁻¹ level) of coprostanol and epi-coprostanol in all pre-Māori samples. Small amounts of coprostanol have previously been detected in anaerobic soils devoid of a fecal input, related to the presence of microbial species capable of converting cholesterol to 5β-stanols 47 , 48 , which would explain the presence of a natural background in anoxic sediments. The substantial increase at c. AD 1345–1365 (106–116 cm depth) in Lake Kirkpatrick and at c. AD 1310–1380 (42–55 cm depth) in Lake Diamond (Fig. 3) likely originated from human waste. Figure 3 Fire and human presence at Lake Kirkpatrick and Lake Diamond. (a–c) Sterols in Lake Kirkpatrick (this study). (d) Charcoal flux in Lake Kirkpatrick (McWethy et al. 6 ). (e–g) Sterols in Lake Diamond (this study). (h) Charcoal flux in Lake Diamond (McWethy et al. 25 ). All sterol fluxes are shown as ng cm⁻² yr⁻¹ and charcoal fluxes as pieces cm⁻² yr⁻¹.
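A minimal sketch of how fecal sterol data of this kind are commonly screened, assuming a diagnostic that is standard in the fecal-biomarker literature, the coprostanol/(coprostanol + cholestanol) ratio, rather than one explicitly applied in this study; the example values below are invented.

    # Hedged sketch of a common fecal-input diagnostic: the ratio
    # coprostanol / (coprostanol + cholestanol). Values approaching 1 point to
    # fecal input; values near 0 point to in-situ microbial reduction of
    # cholesterol. The input fluxes are invented, not the study's data.
    fecal_index <- function(coprostanol, cholestanol) {
      coprostanol / (coprostanol + cholestanol)
    }
    fecal_index(coprostanol = 12,  cholestanol = 3)  # ~0.80: consistent with fecal input
    fecal_index(coprostanol = 0.2, cholestanol = 5)  # ~0.04: background conversion only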
A short-lived peak in fecal sterols occurred with the onset of the Initial Burning Period in the Lake Kirkpatrick record, and slightly preceded it at Lake Diamond 25 . The brevity of the peaks at both sites suggests that human activity in the watershed lasted only a few decades. After a large increase in fire activity during the 1300s, the charcoal influx and fecal sterol values decreased dramatically but remained at about twice the flux of the pre-Māori period. This decline between c. AD 1400 and 1800 likely marks a reduced presence of people in the watershed prior to European colonization, although humans may have been present sporadically or in low numbers. An increase in sedimentation rate at Lake Kirkpatrick at ~AD 1340–1350 (108–118 cm depth) and at Lake Diamond at ~AD 1300–1450 (35–56 cm depth) is consistent with increased erosion following forest clearance. In addition, the high levels of sitostanol imply greater terrigenous input to the lake during and after the Initial Burning Period, as a consequence of increased erosion, reworking of burnt trees on the landscape as they slowly decompose, and runoff. At Lake Kirkpatrick, sitosterol and cholesterol fluxes were replaced by a rise in the reduced compounds sitostanol and cholestanol (Fig. 4) immediately after the Initial Burning Period, suggesting a high level of organic input resulting in reducing chemical conditions. Figure 4 Reduction of Δ⁵-sterols in sediments of Lake Kirkpatrick. (a) Sitosterol. (b) Sitostanol. (c) Cholesterol. (d) Cholestanol. All compounds are plotted as fluxes (ng cm⁻² yr⁻¹). The favored reaction pathway converts Δ⁵-sterols (Sit, Chl) to the corresponding 5α-stanols (5α-Sit, 5α-Chl). These erosion-related biomarkers were not present in Lake Diamond sediments, where all sterols and stanols increased between AD 1310 and 1380 (Fig. 2). The differences in water depth and catchment steepness between sites may result in a lower sedimentation rate and thus limited reducing conditions at Lake Diamond. All sterol markers in Lake Kirkpatrick increased dramatically during the decades of European settlement (c. AD 1800–present), despite few or no changes in local fire activity and sedimentation rate. This increase is therefore ascribed mostly to the intensification of land use for sheep grazing 43 and the associated growing human presence in the Lake Kirkpatrick watershed. The lack of a 19th-century increase in sterols at Lake Diamond is consistent with the absence of sheep grazing and with the limited human pressure in this particular watershed. Human Colonization and Land-Use Activity in the South Island of New Zealand Archeological and paleoenvironmental records indicate that Māori rapidly traveled throughout the North and South Islands of New Zealand soon after their arrival ~AD 1280 6 , 7 , 8 , 9 , 22 . North Island Māori populations cultivated a number of food crops, including kumara (Ipomoea batatas) and taro (Colocasia esculenta), which was likely associated with initial clearing with fire 49 . The inability to grow crops in the interior and southern South Island necessitated foraging for a wide array of forest, wetland and marine resources and the use of fire to promote key carbohydrate resources, such as fire-adapted bracken fern 50 . Comparing the different proxies of humans, fire and vegetation reveals their relative source areas. The macroscopic charcoal and sterol data at Lake Kirkpatrick and Lake Diamond have a watershed source and jointly show the initial period of forest clearance (the Initial Burning Period) (Fig. 2) and the subsequent period of low-intensity or infrequent burning, and partial forest recovery c. AD 1500 to 1700 (Fig. 2a–e). The fire biomarkers – MAs and PAHs – show high levels of intense burning soon after human arrival, although their source area likely encompasses the whole South Island or beyond. The low charcoal influx and fire biomarker levels after the Initial Burning Period at Lake Kirkpatrick suggest smaller and lower-temperature fires until the arrival of Europeans. Not all peaks in fire intensity, recorded by PAH and levoglucosan (Fig. 2b,c), coincide with charcoal peaks (Fig. 2d), suggesting that the biomarkers may have a wider source area than the charcoal. Declines in fecal sterol flux (Fig. 2a) from c. AD 1400 to c. 1800 suggest reduced anthropogenic pressure in the watershed, even when forest clearance through fire was still occurring elsewhere on the island. European settlement at the beginning of the 19th century led to further forest conversion to scrubland and pasture in the South Island.
However, fire biomarker concentrations do not significantly increase at the same time as pollen evidence of land use (Fig. 2b–d). The low levels of fire biomarkers, as compared to changes in pollen assemblages, imply that land-use practices did not require burning (e.g., grazing), that the type of fuel shifted from forest to seral vegetation (including Leptospermum, bracken and grasses), or that the two proxies have different source areas. The initial burning of forests by early Māori settlers therefore led to a dramatic transformation of the vegetation that was continued by Europeans in the 19th and 20th centuries. Recent size estimates of the initial founding group suggest it was composed of approximately 500 individuals 51 . It is remarkable that these small populations were able to convert c. 40% of New Zealand's forests to grass and shrubland within 1–2 centuries of their arrival 7 . Our records of sterols and multiple fire markers, so closely matching paleoecological evidence, show that initial Māori presence in particular watersheds was brief and transient, and that open vegetation was maintained by subsequent low-intensity fires and sporadic human use of the catchments until the arrival of Europeans. The successful demonstration of biomarker detection in New Zealand sediments, where the anthropogenic nature of fires is undisputed, illustrates the power of this approach to resolve the role of humans in biomass burning in regions where fire drivers and the timing of human arrival are still debated. Methods Two sediment cores were retrieved at Lake Kirkpatrick (195 cm) and Lake Diamond (160 cm) in 2009 using 7 cm polycarbonate tubes (Klein corer) 5 , 6 . The cores were split for archival purposes and the working half was sectioned into 1 cm thick slices. Three aliquots from each sample were processed separately for the analyses of charcoal, pollen (ref. 5 , 6 ) and organic tracers (this study), respectively. Chronology was obtained from Accelerator Mass Spectrometry (AMS) ¹⁴C dates based on twig charcoal and plant macrofossils, calibrated with BChron 5 , 6 , 52 . Seventy-two samples were obtained from the 6–135 cm section of the Lake Kirkpatrick core (0–191 cm) and 49 samples from the 5–147 cm section of the Lake Diamond core (0–160 cm). Wet samples were dried in a desiccator with silica gel until they reached a constant weight and were then hand-milled and homogenized in a ceramic mortar. Samples were stored at room temperature in sealed vials until extraction, which was performed with an ASE 200 (Accelerated Solvent Extraction, Dionex Thermo Fisher Scientific). Each sample was dispersed with diatomaceous earth and spiked with a known amount of a ¹³C-labeled internal standard solution for the quantification of the analytes (¹³C₆-cholesterol, ¹³C₆-acenaphthylene, ¹³C₆-phenanthrene, ¹³C₄-benzo(a)pyrene, ¹³C₆-levoglucosan) and extracted twice at 150 °C and 1500 psi with dichloromethane (Lake Diamond) or with a 9:1 (v/v) dichloromethane:methanol mixture (Lake Kirkpatrick). Extracts were concentrated in a centrifugal evaporator (Genevac EZ-2 Solvent Evaporator) to ~0.5 mL and purified on disposable solid-phase extraction silica tubes (Supelco DSC-Si, 12 mL, 2 g bed weight), previously conditioned with 40 mL of dichloromethane (DCM). The clean-up and fractionation of samples were achieved by eluting the samples with 70 mL of DCM followed by 20 mL of methanol (MeOH), adapting previously published procedures 24 , 46 , 53 .
PAHs and sterols were collected in the DCM fraction (F1), while MAs were contained in the polar MeOH fraction (F2) and treated separately. F1 was concentrated to 100–200 μL and PAHs were analyzed through gas chromatography-mass spectrometry (GC-MS). After the analysis, 100 μL of BSTFA + 1% TMCS (N,O-bis(trimethylsilyl)trifluoroacetamide + 1% trimethylchlorosilane) were added to the samples to allow derivatization at 70 °C for 1 h. After a stabilization period of 24 h at room temperature, sterols were analyzed by GC-MS. F2 was evaporated to dryness, redissolved in 0.5 mL of ultrapure water and centrifuged before analysis by ion chromatography-mass spectrometry (IC-MS) 24 . The GC-MS and IC-MS methods, along with the instrumental setup and the target and qualifier mass-to-charge ratios employed, are reported elsewhere 24 , 53 , 54 . Analytes were quantified against the internal standards using the isotope dilution technique. Results were calculated and corrected by the instrumental response factors, obtained by the repeated analysis of standard solutions containing all analytes and ¹³C internal standards at 100 pg µL⁻¹ for PAHs and 1 ng µL⁻¹ for MAs and sterols. Fluxes were calculated using the accumulation rate and the dry density following Menounos (1997) 55 . Several procedural blanks were also analyzed in order to quantify possible contamination from the laboratory equipment. Absolute quantities were corrected by subtracting the blank value plus three times its standard deviation, and were divided by the dry weight of the samples to obtain concentrations. The average accuracy was 89 ± 16% for PAHs, 115 ± 17% for sterols and 99 ± 11% for MAs. Precision ranged between 2 and 23% for PAHs, 12–24% for sterols and 19–33% for MAs. The datasets generated and/or analysed during the current study are available from the corresponding author on reasonable request.
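The two quantitative steps in the Methods above (blank correction and conversion of concentrations to fluxes) can be sketched in a few lines of R. This is a minimal illustration of the stated procedure, not the authors' code: the function and variable names are invented and the example numbers are made up.

    # Blank correction as described above: subtract the procedural blank plus
    # three times its standard deviation, then divide by sample dry weight.
    blank_correct <- function(raw_ng, blanks_ng, dry_weight_g) {
      corrected <- raw_ng - (mean(blanks_ng) + 3 * sd(blanks_ng))
      pmax(corrected, 0) / dry_weight_g        # concentration in ng g-1; clip at 0
    }

    # Flux via the accumulation-rate approach (Menounos 1997):
    # concentration (ng g-1) x dry bulk density (g cm-3) x sedimentation
    # rate (cm yr-1) = flux (ng cm-2 yr-1).
    flux <- function(conc_ng_g, dry_density_g_cm3, sed_rate_cm_yr) {
      conc_ng_g * dry_density_g_cm3 * sed_rate_cm_yr
    }

    conc <- blank_correct(raw_ng = 520, blanks_ng = c(4, 6, 5), dry_weight_g = 2.1)
    flux(conc, dry_density_g_cm3 = 0.8, sed_rate_cm_yr = 0.15)  # ~29 ng cm-2 yr-1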
It is now possible to tell the story of mankind's presence and evolution on the planet by analyzing trends in the soil and sediment accumulation of fecal sterols, chemical compounds which are crucial in human physiology. Scientists at Ca' Foscari University of Venice and the Institute for the Dynamics of Environmental Processes of the National Research Council (CNR-IDPA) have identified and dated traces of sterols within the sediments of two New Zealand lakes, thus proving the presence of the Māori people who, starting from around 1280, colonized the two oceanic islands and cleared them of forests in just a few decades to make space for fields and pastures. The study has just been published in the scientific journal Scientific Reports. The analyses were carried out in the laboratories of Venice on cores of sediment taken from lakes Diamond and Kirkpatrick, located on New Zealand's South Island. By analyzing charcoal microparticles and pollen, researchers had already found evidence of significant forest fires as well as of sudden changes to the New Zealand landscape during the fourteenth century, when the deforested areas made space available for grass and shrubs to grow quickly and in a manner that was without precedent. Archaeological and paleoecological evidence quite conclusively attributes the deforestation to the Māori people, but this new study provides definitive scientific proof of their arrival in the area and of the enormous impact that a group of so few individuals had on the native forest in a very short time, to the extent that it was irreversibly compromised. In addition, the research demonstrates the validity of the method tested by the Italian researchers for reconstructing the history of humankind's presence in a given region. "Lakes collect traces of the feces of populations that have lived in surrounding areas, and these are deposited on the lake floors," explains Elena Argiriadis, postdoc at the Department of Environmental Sciences, Informatics and Statistics at Ca' Foscari and one of the authors of the study, "offering a continuous record of the centuries of human presence. The concentration of coprostanol, the sterol most abundant in human feces, traces a trend which over time closely matches that of the fire-related biomarkers, with a peak between approximately 1345 and 1365, and is consistent with the profound environmental transformation which took place in New Zealand following the arrival of the Māori." Lake Diamond, New Zealand. Credit: Dave McWethy / Montana State University "This research is part of a series of studies on mankind's impact, through our history and prehistory, on the environment and climate, analyzing biomarkers archived within ice or sediment extracts from all over the planet (the Early Human Impact project, funded by the European Research Council)," explains Carlo Barbante, professor of Analytical Chemistry at Ca' Foscari and director of the CNR-IDPA. "Traces of human excrement also tell of the Europeans' arrival on the southern island of New Zealand, starting in the 1800s. The exponential growth in the concentration of fecal sterols vividly demonstrates the rapid increase in the population of the area, which has been ongoing since the beginning of the nineteenth century. The method can now be applied to nearby lake sediments and soils, in which the history of human settlement is not as well documented as in the case of New Zealand, helping to map the movements of populations over time."
10.1038/s41598-018-30606-3
Biology
The fiddlers influencing mangrove ecosystems
Jenny Marie Booth et al. Fiddler crab bioturbation determines consistent changes in bacterial communities across contrasting environmental conditions, Scientific Reports (2019). DOI: 10.1038/s41598-019-40315-0 Journal information: Scientific Reports
http://dx.doi.org/10.1038/s41598-019-40315-0
https://phys.org/news/2019-03-fiddlers-mangrove-ecosystems.html
Abstract Ecosystem functions are regulated by compositional and functional traits of bacterial communities, shaped by stochastic and deterministic processes. Biogeographical studies have revealed that microbial community taxonomy in a given ecosystem changes alongside varying environmental characteristics. Considering that stable functional traits are essential for community stability, we hypothesize that contrasting environmental conditions affect microbial taxonomy rather than function in a model system, testing this in three geographically distinct mangrove forests subjected to intense animal bioturbation (a shared deterministic force). Using a metabarcoding approach combined with sediment microprofiling and biochemistry, we examined vertical and radial sediment profiles of burrows belonging to the pantropical fiddler crab (subfamily Gelasiminae) in three contrasting mangrove environments across a broad latitudinal range (total samples = 432). Each mangrove was environmentally distinct, which was reflected in taxonomically different bacterial communities, but communities consistently displayed the same spatial stratification (a halo effect) around the burrow, which invariably determined the retention of similar inferred functional community traits independent of the local environment. Introduction Biogeography of microorganisms is determined largely by stochastic processes 1 , 2 , 3 , but deterministic processes, essentially related to their ecological niche, are now known to play a significant role in shaping community composition in a given system 4 , 5 , 6 . The paramount importance of microorganisms as the drivers of global biogeochemical cycles 7 is reflected in their broad taxonomic and metabolic functional diversity 8 . For many of their diverse metabolic functions, microbial communities exhibit functional redundancy, whereby different taxa are able to perform the same metabolic function 9 , 10 , 11 , for example nitrogen cycling in soil 12 and methanogenesis in bioreactors 13 . For this reason, more recent studies of microbial community structure have tended to focus on functional rather than taxonomic structure, with a general consensus that some metabolic functions are decoupled from taxonomic composition in various environments 14 , 15 ; i.e., the conditions of the environment hold more weight in shaping functional group distribution than taxonomic composition. Microbial community traits have been compared from the local to the continental scale, with diversity in both taxonomy and functionality being attributed, across the range of spatial scales, to native environmental conditions 8 . However, due to constraints such as the availability of electron acceptors and heterogeneity in, for example, pH, temperature and salinity 16 , environmental drivers do not fully explain the variation in taxonomic composition observed in systems with similar environmental conditions, a pattern which has been observed across many marine and terrestrial systems 10 , 12 , 17 . In a system with the same physico-biochemical features (bromeliad aquatic plants), Louca et al. 10 observed intersystem taxonomic, but not functional, variations in the microbial communities that were not completely explained by the different environmental characteristics.
Recently, variation in oceanic environmental conditions was shown to structure the function of the marine microbial community, while only weakly explaining taxonomic variation within functional groups 15 . Ecological resilience, and the capacity of a system to adapt to change, is positively influenced by higher diversification of functional groups, such as denitrifiers or carbon degraders, and by increased taxonomic diversification within these functional groups 18 , 19 . Here we hypothesize that in the same model system under contrasting local environmental conditions, such as those occurring across large biogeographical ranges, broad- and fine-scale microbial functional and interaction patterns, rather than taxonomy, should be conserved around a consistent source of selective pressure. To test this hypothesis, we studied mangrove sediments subjected to the deterministic selective pressure of intense animal bioturbation, a process known to enhance biological activity and modify physical and chemical properties of sediment 20 , 21 , 22 . Mangrove forests are sites of strong environmental selection due to the intrinsic characteristics of intertidal environments and the prevalence of bioturbating organisms that, through the creation of burrows, impose a selective pressure at the interface of aquatic and terrestrial habitats 23 associated with microbial hotspots 24 . Thus, mangrove sediment is an ideal system to explore spatial changes in microbial community traits. Using a fiddler crab burrow as our model, we extended our study over a broad geographical (latitudinal) range to encompass contrasting environments. Results Our geographic range encompassed the sites Thuwal, Farasan and Mngazana (Fig. 1a), in which we radially sampled sediment 'Fractions' around burrows and in surface, subsurface and deep sediment. The mangrove stands in Thuwal, Farasan and Mngazana displayed diverse sediment characteristics in terms of biogeochemistry, metal content and grain size (Supplementary Fig. S1, Supplementary Table S1 and Supplementary File S2). Accordingly, principal coordinates analysis (PCoA) of bacterial OTU composition segregated the three geographical sites into distinct groups, accounting for 49.9% of the dissimilarity in community composition between sites (Fig. 1b). A significant effect of 'Site' (P = 0.0001) and 'Burrow' (P = 0.0138) on biogeochemistry was observed (PERMANOVA, Supplementary Table S1). Sulphate, nitrite and nitrate contributed 76.4% to the dissimilarity between Farasan and Thuwal sediment (SIMPER), with nitrite and nitrate being higher in Farasan and sulphate being higher in Thuwal. POC, PON, phosphate and silicate contributed 55.6% to the dissimilarity between Mngazana and Thuwal (SIMPER), with POC being more abundant in Thuwal, while PON, phosphate and silicate were more abundant in Mngazana. PIC, phosphate, PON and silicate contributed 67.2% to the dissimilarity between Mngazana and Farasan (SIMPER), with PIC being more abundant in Farasan, and phosphate, PON and silicate being more abundant in Mngazana. Figure 1 Study site variation. (a) Map of study sites: a, Thuwal; b, Farasan; c, Mngazana. (b) Principal coordinates analysis of total bacterial OTU assemblages categorized by site (n = 384). (c) Distance-based redundancy analysis (db-RDA) showing significant biogeochemical drivers of bacterial community composition at each site. Site-specific bacterial assemblages correlated with site-specific physico-chemical characteristics (Fig.
1c, Supplementary Fig. S2, Supplementary Table S2). Across the three mangrove forests, POC, PON, PIN, nitrate, nitrite and silicate significantly explained bacterial community variability amongst sites (Fig. 1c, DistLM, AICc = 1128.9, R² = 0.44). A significant 'Site' × 'Depth' × 'Burrow' interaction was observed on bacterial OTU assembly; at each site, bacterial communities in surface, subsurface and deep sediment displayed significantly different OTU composition amongst depths (GLM, df = 4,366, Dev = 17397, P = 0.014; Fig. 2a–c). Comparison of the bacterial composition of the different sediment fractions at different depths revealed a significant effect of 'Fraction' across all sites at all depths, with bulk sediment consistently segregating from burrow sediment (P < 0.05 in all cases; Fig. 2d–l, Supplementary Table S3). Figure 2 OTU variation amongst sediment fractions. Canonical Analysis of Principal coordinates (CAP) of OTU abundance in surface, subsurface and deep sediment for Thuwal (a), Farasan (b) and Mngazana (c). Canonical Analysis of Principal coordinates (CAP) of bacterial OTU abundance at each 'Site', 'Depth' and 'Fraction' (d–l). Taxonomy bar charts show the relative contribution of different taxa to overall bacterial community composition in burrow and bulk sediment in Thuwal, Farasan and Mngazana in (m) surface, (n) subsurface and (o) deep sediment. The factor 'Site' explained up to 42% of the entire bacterial beta-diversity variability. However, at each site, 'Depth' accounted for 11% (Thuwal), 25% (Farasan) and 24% (Mngazana) of variability. The variability explained by 'Burrow' accounted for 4% (Thuwal), 3% (Farasan) and 4% (Mngazana) of total variability. 'Site' and 'Burrow' significantly affected OTU alpha diversity (Shannon index; PERMANOVA, F(2,362) = 17.54, P = 0.0001 and F(1,362) = 6.4, P = 0.015, respectively) and richness (species richness index; PERMANOVA, F(2,362) = 20.83, P = 0.0001 and F(1,362) = 12.82, P = 0.0005, respectively). Mngazana had higher diversity and richness (H' ± SE: 5.63 ± 0.03 and S ± SE: 91.1 ± 1.1, respectively) than both Farasan (5.39 ± 0.05 and 84.7 ± 1.8, respectively) and Thuwal (4.98 ± 0.1 and 71.9 ± 2.3, respectively). Bulk sediment displayed lower values for both parameters compared to burrow sediment, while no effect of 'Depth' on either diversity or richness was observed (P > 0.05).
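The ordination and permutation tests reported above (PCoA on Bray-Curtis dissimilarities and PERMANOVA-type tests of 'Site') can be sketched with the vegan R package used later in the Methods. The snippet below runs on a small invented OTU table and is only a schematic of the general workflow, not a reproduction of the study's PRIMER/PERMANOVA+ analysis.

    # Schematic R workflow: Bray-Curtis dissimilarity, PCoA, and a
    # PERMANOVA-style test of 'Site' with vegan. All data are invented.
    library(vegan)

    set.seed(1)
    otu_table <- matrix(rpois(60, 5), nrow = 6)        # 6 samples x 10 OTUs
    meta <- data.frame(site = factor(c("Thuwal", "Thuwal", "Farasan",
                                       "Farasan", "Mngazana", "Mngazana")))

    bc   <- vegdist(otu_table, method = "bray")        # Bray-Curtis dissimilarity
    pcoa <- cmdscale(bc, k = 2, eig = TRUE)            # classical PCoA
    head(pcoa$eig / sum(pcoa$eig), 2)                  # rough share of variation on
                                                       # axes 1-2 (negative eigenvalues
                                                       # ignored in this approximation)

    adonis2(bc ~ site, data = meta, permutations = 999)  # PERMANOVA-style test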
Bacterial communities across sites, depths and fractions comprised the same dominant phyla with different overall contributions (Supplementary Fig. S3). Site-specific patterns in community composition were observed (Fig. 2m–o). Farasan and Mngazana had a larger contribution of Actinobacteria (1% to 7%) and Acidobacteria (1% to 6%) in surface and subsurface sediment compared to Thuwal (0.1% to 1% and 0.1% to 3%, respectively). Mngazana had a reduced contribution of Cyanobacteria to community composition compared to Thuwal and Farasan. Thuwal also had a larger contribution of Spirochaetes and a smaller contribution of Planctomycetes compared to the other sites. Notably, Farasan had a smaller contribution of Deltaproteobacteria and a larger contribution of Alphaproteobacteria compared to Thuwal and Mngazana. Furthermore, although only a small contribution of Epsilonproteobacteria and Betaproteobacteria was observed across sites, these taxa had a greater contribution to the overall community in Thuwal and Mngazana compared to Farasan. Rare OTUs (with a contribution of less than 1% to total community composition) had a greater overall contribution in Thuwal than at the other sites (up to 5.9%). Differences between burrow and bulk sediment were observed at all sites (Fig. 2m–o). Generally, burrow sediment had a higher contribution of Cyanobacteria, Verrucomicrobia, Spirochaetes and Bacteroidetes than bulk sediment, whereas bulk sediment had a higher contribution of Delta- and Gammaproteobacteria. Thuwal and Farasan had a consistently smaller Alphaproteobacteria component in bulk sediment compared to burrow sediment across all depths. Ternary plot analysis revealed an overall higher density of OTUs in burrows compared to their bulk sediment counterparts (Supplementary Fig. S4). Notably, more OTUs were shared between the surface and deep in the burrow compared to bulk sediment at all sites, and Thuwal in particular displayed the same degree of shared OTUs between these two depths. Thuwal bulk sediment displayed the least sharing of OTUs between depth levels. Farasan had very few OTUs shared between the surface and deep in either the burrow or bulk sediment. At Mngazana, no OTUs were exclusively enriched in the subsurface, either in the burrow or bulk, but a large number of OTUs were unique to the deep. Bulk and burrow sediment communities hosted discriminant taxa at each site (LDA, LEfSe; Supplementary File S3). A greater number of significantly differential taxa between bulk and burrow sediment was observed in Thuwal compared to the other mangroves (Supplementary Fig. S5). In Thuwal, Proteobacteria, Chloroflexi and Actinobacteria contributed the greatest proportion of differentially abundant OTUs in burrow sediment. In Farasan, the phyla Planctomycetes and SAR406 were discriminately more abundant in burrow than bulk sediment, while a large contribution of taxa from Proteobacteria, Chloroflexi, Acidobacteria and Actinobacteria were differentially more abundant in bulk sediment than burrow (Supplementary Fig. S5). In Mngazana, Actinobacteria, TM7 and Verrucomicrobia were differentially more abundant in burrow than bulk sediment, and the Proteobacteria and GN04 phyla were more abundant in bulk than burrow sediment (Supplementary Fig. S5). Sulphate-reducing bacteria from Deltaproteobacteria, Firmicutes and Nitrospirae contributed substantially to the overall bacterial community and were more abundant in bulk sediment (LDA effect sizes; Supplementary File S3). We detected three species of known cable bacteria belonging to the family Desulfobulbaceae: Desulfopila aestuarii, Desulfobulbus mediterraneus and Desulfobulbus rhabdoformis (>97% similarity), with differential abundances at the three sites. At Mngazana, Desulfopila aestuarii was the most abundant of the three species, while Desulfobulbus mediterraneus was the most abundant at both Thuwal and Farasan. Network co-occurrence analysis revealed a significantly different structure between burrow and bulk sediment at each site (Fig. 3 and Supplementary File S4). Overall, centrality parameters varied significantly with 'Site' (degree of connection: GLM, chi-square = 124.78, d.f. = 2, P < 0.0001, Fig. 3g; closeness centrality: GLM, chi-square = 124.78, d.f. = 2, P < 0.0001, Fig. 3h). The connectivity of Proteobacteria and Bacteroidetes varied significantly, consistently decreasing in bulk sediment at all three sites, while the other taxa showed site-specific variations (Fig. 3i–k).
Proteobacteria and Bacteroidetes had a higher degree of connection in Thuwal and Mngazana, and the clustering coefficient was also higher at these two sites, compared to Farasan (Supplementary Table S4). The number of interactions was highest in Mngazana (burrow: 3039, bulk: 2655) and lowest in Farasan (burrow: 134, bulk: 153), and modularity was higher in Thuwal than at the other sites. The clustering coefficient was higher, and equal, in burrow sediment in Thuwal and Mngazana (0.34) than in bulk sediment (0.28 and 0.11, respectively), while Farasan had the lowest values. Network centralization was also much lower in Farasan than at the other two sites. Co-presence interactions were consistently higher than mutual exclusion interactions in both bulk and burrow sediment at each site. Figure 3 Co-occurrence network analysis of bacterial OTUs in burrow and bulk sediment. Interactions among OTUs in bulk and burrow sediment at (a, b) Thuwal, (c, d) Farasan and (e, f) Mngazana. Boxplots represent the overall degree of connection (g) and closeness centrality (h) of the nodes of each network. (i–k) Phylum degree of connection in burrow and bulk sediment at each site. In each network, the nodes correspond to the OTUs present and are coloured according to phylum affiliation (>97%). In contrast to bacterial community composition, no significant 'Site' × 'Depth' × 'Burrow' interaction on predicted metabolic function was observed (P > 0.05), but instead main effects of 'Site', 'Depth' and 'Burrow' (Supplementary Table S5 and Supplementary Fig. S6). Functions differed significantly according to 'Site': for example, denitrification was more prevalent in the South African mangrove, while nitrogen fixation and cyanobacteria were more abundant in the Red Sea mangroves. Other functions, instead, were consistently conserved across all study mangroves relative to bioturbation; for example, bacteria performing phototrophy, anoxygenic photoautotrophy and chemoheterotrophy were consistently more abundant in burrow than in bulk sediment in each mangrove. Microbial activity, measured in Thuwal, consistently decreased from surface to deep sediment across all burrow fractions and in bulk sediment (Fig. 4a). A significant effect of 'Fraction' was observed only in the surface and deep sediment; microbial activity increased towards bulk sediment in the surface (ANOVA, F(5,30) = 2.598, P < 0.05), while the opposite trend was observed in deep sediment (ANOVA, F(5,30) = 5.056, P < 0.01). Figure 4 Sediment microbial activity, oxygen concentration and redox. (a) Fluorescein diacetate analysis. Boxplots with standard error of the mean showing the amount of fluorescein produced by microbial activity for each sediment fraction in surface, subsurface and deep sediment at Thuwal (n = 6 for each fraction at each depth level). (b) Oxygen concentration microprofiles and (c) redox microprofiles of Thuwal burrow and bulk sediment along a gradient from the burrow wall to bulk sediment (n = 3 for each replicated distance from the burrow wall). In Thuwal, oxygen concentrations around the burrow decreased from approximately 300 µmol L⁻¹ to 0 µmol L⁻¹ at approximately 5 mm depth in all sections of the burrow wall and bulk sediment (Fig. 4b). Interestingly, random pulses of oxygen between 10 and 40 mm depth were consistently recorded in fractions 1–5 around the burrows, but not in the bulk sediment. Redox potential profiles differed between the burrow fractions and the bulk sediment.
In fractions 1–5 around the burrow, redox was always positive (between 0 and more than +200 mV) and comparatively stable down to depths of more than 40 mm, while in the bulk sediment it progressively decreased to negative values (around −200 mV) below 15 mm depth, reaching −400 mV below 30 mm (Fig. 4c). Discussion We show that, despite the main drivers of geographic location and depth, bioturbation creates a fine-spatial-scale selective pressure that determines a significant and consistent rearrangement of sediment bacterial community assemblages and interactions along a burrow-induced gradient. These patterns invariably occur across a broad geographical scale in mangroves with contrasting ecology and sediment biochemistry. Sediment bacterial functional rearrangements were independent of the biogeographical differences in community taxonomy and biogeochemistry at each site, highlighting a functionally conserved altering effect of the burrows under contrasting sediment environmental conditions. As expected, taxonomic composition differed across mangroves and was reflected in differential functional community composition: e.g., Cyanobacteria were more abundant in the Red Sea mangroves, whereas denitrifying bacteria were more abundant in the South African mangroves, in accordance with a higher input of nitrogen. However, while we detected a significant effect of the interaction of geographic location, depth and sediment type (i.e., burrow vs. bulk) on the taxonomic composition and network interactions of the bacterial communities in the mangroves studied, this interaction was not significant in terms of functional structure. Despite the differences imposed by diverse physico-chemical characteristics, we measured a consistent effect of the burrow halo on metabolic function across geographic locations, with certain functions, such as phototrophy, anoxygenic photoautotrophy and chemoheterotrophy, being consistently more abundant in burrow than in bulk sediment. While this type of analysis provides only broad functional resolution and cannot resolve the full functional complexity, the approach is able to broadly screen potential functions of the bacterial community and adds another dimension to previous studies of macrofaunal burrowing effects in intertidal sediments, such as those of polychaetes 25 and shrimps 26 . Undoubtedly, macrofaunal bioturbation stimulates bacterial activity and increases the number of heterotrophic niches available for bacteria 27 . The bacterial community composition of the South African mangroves was particularly affected by POC, PON, PIN, nitrate, silicate and iron (all particularly high in the sediment compared to the Red Sea mangroves), explained by the high input of nutrients and metals in the riverine setting and the absence of run-off and other freshwater inputs in the Saudi Arabian fringing mangroves. Nitrogen fixation and denitrification processes were enhanced in the South African mangroves mainly due to the riverine input of nitrogen, particularly enriched in rural areas 28 . Mangroves in the Red Sea, instead, are nutrient-limited 29 , and the only nitrogen input results from bacterial activity enhanced by the tide and from cyanobacterial mats that can fuel nitrogen fixation and therefore nitrogen input to the system 30 .
Cuellar-Gempeler and Leibold 31 recently showed that multiple colonist pools exist in fiddler crab burrows in intertidal sediment, corroborating the diverse sediment environmental conditions at different depths and the diverse selection pressures detected in this study. Similar to the rhizosphere, where changes in soil microbial community structure and complexity are mediated by the plant root effect 32 , burrows influence the overall composition of the surrounding sediment microbiome, enhancing network complexity. While the community assemblage is a defining characteristic of the burrow, so too is the network complexity, and a greater proportion of co-presence interactions in the burrow than in the bulk sediment suggests the creation of microniches that can support a more connected bacterial community. Although beyond the scope of this study, bacteria dynamically interact with other sediment microorganisms, namely archaea and fungi, to form complex networks responsible for controlling organic matter decomposition and nutrient availability 33 . For example, in the mangrove sediments in our study it is likely that methanogenic archaea and sulphate-reducing bacteria form syntrophic communities to degrade organic matter 34 . Interestingly, the bacterial network properties of the sediment were similar for Thuwal and Mngazana, but not for Farasan, which had a poorly connected sediment bacterial community. Indeed, this finding is also supported by a reduced number of shared OTUs at all depths in Farasan sediment, which may be attributable to the peculiar environmental setting of fossil coral bedrock. Sediment derived from fossil corals is known to be nutrient-poor with comparatively few microorganisms 35 . Sediment oxygen content dropped rapidly to zero at approximately 5 mm depth, as previously observed in some mangrove sediments 36 . Unlike many marine animals that actively irrigate their burrows during high tide (e.g. polychaetes, shrimps and bivalves), fiddler crabs plug their burrows to allow air-breathing at high tide 37 , thus trapping air and creating aerobic microniches along the burrow walls that are retained throughout submersion (Supplementary File S5). Despite continuous burrow exposure to air, we did not observe any substantial increase in oxygen concentration through the burrow wall. We did, however, observe pulses of oxygen in bioturbated sediment, probably due to the presence of infaunal burrowers that exploit the main crab burrow for shelter and dig small tunnels through the walls 38 ; these pulses often reached the same oxygen concentration as surface sediment and were absent in bulk sediment. This immediately challenges the concept that mangrove sediment is highly anoxic, particularly if we consider the high abundance of burrows and roots in mangroves that reach depths much greater than those sampled in this study 39 . Oxygen is rapidly consumed by microbial communities below the sediment surface 40 , and burrows essentially extend the oxic/anoxic interface, producing a millimetre-thin layer of oxygenated sediment at depth, which creates a zone of increased biogeochemical reactions and microbial activity 41 . Indeed, we observed that microbial activity in the deep was highest at the burrow wall and decreased toward bulk sediment, which is further supported by the increased CO₂ efflux rate from crab burrows compared to surrounding sediment 42 .
Not only does the oxic zone affect the distribution of aerobic and anaerobic taxa 43 , it also has an overall cascade effect on bacterial respiration pathways and community assemblages 41 . The oxidizing effect of the burrow alters sediment redox potential: we observed consistently positive values to at least 40 mm depth around burrows and negative values in bulk sediment (around −200 mV below 15 mm depth, dropping to −400 mV below 30 mm depth), in accordance with previous studies of fiddler crab burrows 44 . The unsteady-state redox of mangrove sediment was highlighted in a recent study of mangrove sediment cores, which presented evidence that oxygen input causes sudden and significant reoxidation of reduced sulphur 45 . Sulphate reduction is a major respiration pathway in anaerobic mangrove sediment, accounting for almost 100% of sediment CO₂ emission in some mangrove systems 46 . Mangrove sediment sulphate-reducing bacteria are diverse 36 , and in this study we detected more abundant and diverse sulphate-reducing taxa (i.e., Deltaproteobacteria and Nitrospirae) in the bulk than in the burrow sediment. This accords with the absence of oxygen we recorded below the surface in bulk sediment. Indeed, the importance of bioturbation for sulphate-reduction rates has been highlighted 47 , with sulphide reoxidation shown to effectively prevent sulphide accumulation in mangrove sediment bioturbated by fiddler crabs 48 . In the Mngazana and Farasan mangroves, members of the family Desulfobulbaceae may have important ecological roles in sediment sulphide oxidation. Notably, active cable bacteria belonging to this family have been recorded in mangrove sediment 49 . Due to their high filament densities, this group is responsible for the long-distance transport of electrons, harvested by sulphide oxidation in deep sediment, to the surface 50 . Bioturbation has been suggested to constrain the distribution of this group due to the cutting of filaments during sediment reworking 51 , which may explain its discriminant abundance in bulk sediment. The burrow effect that we describe above is more complex than the effect of the structure itself, and it is essential to consider the ecology of the burrow host. Fiddler crabs have a strict fidelity to their burrows and thus perform all of their activities, including surface grazing, within a few-centimetre radius of their burrow 52 . Organic content has been shown to be the strongest physical sediment characteristic affecting where fiddler crabs forage, linked to the higher abundance of microorganisms associated with these patches 53 , and accordingly we recorded a halo of reduced bacterial activity in grazed surface sediment. Sediment is continuously reworked by fiddler crabs during burrow maintenance, bringing pellets from inside the burrow to the surface (excavation pellets; Supplementary File S6). Indeed, at all sites we observed that a large number of OTUs were shared between the surface and the deep burrow sediment, a pattern absent in bulk sediment. We also observed that Cyanobacteria had a comparatively large contribution to community composition in the deep at burrow walls, indicating transport of surface sediment to the deep. This sediment "mixing" is likely to be one of the main factors responsible for the differences between burrow and bulk sediment observed in this study.
To comprehend the impact of burrowing on the mangrove sediment environment, in terms of the modification of both taxonomic and functional diversity, we considered the density of burrows, focusing on those of fiddler crabs. Each burrow was a 1 cm × 5 cm void in the sediment, and we determined a 10 cm diameter halo of influence, with an area of 78.5 cm², for one burrow. Based on our estimates, at a density of 25 to 41 burrows per m², this equates to an area of fiddler crab burrow influence ranging from 1,962 to 3,218.5 cm² per m² of sediment to a depth of 5 cm. If we extend this to an ecosystem scale, this influence accounts for approximately 20 to 35% of mangrove sediment. In Kenyan mangroves, densities of fiddler crabs up to 100 per m² have been reported 54 , which raises this percentage to 78.5% of the total mangrove extension. Consequently, their burrows impose selective pressure on a large portion of mangrove sediment. Furthermore, this is surely an underestimate, because the calculation is restricted to fiddler crab burrows, for which we did not investigate the entire burrow shaft (reaching up to 20 cm depth). Bioturbation by other crab species and other animals, including ants, shrimps and mudskippers, is also extensive in mangroves and contributes to more complex and deeper burrow structures 55 . We can therefore predict that the described bioturbation effect has a large overall impact on the mangrove ecosystem, by altering the nature of the sediment microbiome, which ultimately governs environmental processes, such as carbon and nitrogen fluxes, in this coastal ecosystem.
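The burrow-influence arithmetic above reduces to a one-line calculation; the R sketch below simply re-derives the reported percentages from the stated inputs (the function name is ours, and the figures assume the 10 cm halo diameter given in the text).

    # Re-derivation of the burrow-influence estimates above. Inputs (halo
    # diameter, burrow densities) come from the text; the function exists
    # only for illustration.
    halo_coverage <- function(density_per_m2, halo_diameter_cm = 10) {
      halo_area_cm2 <- pi * (halo_diameter_cm / 2)^2   # ~78.5 cm2 per burrow
      density_per_m2 * halo_area_cm2 / 1e4             # fraction of 1 m2 (10,000 cm2)
    }
    halo_coverage(25)    # ~0.20, i.e. ~20% of the sediment surface
    halo_coverage(41)    # ~0.32, reported in the text as approximately 35%
    halo_coverage(100)   # ~0.785, the Kenyan-density case (78.5%)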
Conclusions Here we demonstrate that macrofaunal burrows in mangrove sediment apply a consistent radially-distributed selective pressure, a halo effect, under contrasting physico-chemical conditions, across a broad latitudinal range. This halo effect controls the diversity, interactions and function of sediment microbial assemblages (Fig. 5). While taxonomic community structure was not retained across the large geographical range we examined, due to the diverse local environmental conditions, the selective pressure of the burrow invariably determined the retention of similar functional community traits independently of the local environment. This study is a baseline for further investigation of the role of sediment microorganisms in the overall functioning of the ecosystem, highlighting the necessity for a fingerprint-to-footprint approach 56 . Crab burrows act as hotspots of diversity and functionality that can increase ecological resilience through functional redundancy, and we believe these structures can be of pivotal interest in restoration and rehabilitation projects 57 . However, we highlight the need to include other components of the microbiome, namely archaea, fungi and microeukaryota 58 , 59 , and also other components of the ecosystem, such as the burrow host 60 , whose interactions can boost and modulate the entire sediment biological function. Figure 5 Generalised conceptual model of the burrow halo. The overall effects of the burrow on the sediment environment and bacterial community are displayed on either side of the burrow model. Blue, green and grey represent the surface, subsurface and deep sediment layers, respectively. Within each layer, the dark shade indicates features of the bacterial community or sediment environment that increase toward the burrow, while the light shade indicates features that increase toward bulk sediment. The mixing of sediment along the burrow funnel due to crab activity is indicated. Methods Study sites and sampling Sampling was performed across a large latitudinal range in three mangrove stands: in the middle of the Red Sea (Thuwal, 22°33′N 41°24′E, April 2016), near the Gulf of Aden where the Red Sea enters the West Indian Ocean (WIO) (Farasan Island, 16°20′N 41°24′E, March 2015) and at the southern extension of mangrove distribution in the WIO in South Africa (Mngazana, 31°42′S 29°25′E, April 2016) (Fig. 1a). The mangroves at each study site have contrasting characteristics in terms of temperature, precipitation, tidal range, geomorphological setting and floral composition (Supplementary Table S6). Sampling was carried out following the same design at each site at low tide, when burrows were uncovered, during spring tides. Burrow density was assessed at each site within the zone occupied by fiddler crabs, one of the dominant bioturbators in each mangrove 61 (Supplementary Fig. S7), considering burrows measuring 10 mm in diameter (mean burrow density per m² ± standard error (SE): Mngazana 42 ± 5, Farasan 32 ± 2 and Thuwal 25 ± 4). Eight active burrows belonging to male crabs were selected randomly along a 200 m long transect; burrows were suitable only if they were more than 30 cm from another burrow, plant or pneumatophore. Eighteen samples were collected per burrow (15 burrow, 3 bulk), for a total of 144 samples per site. The design incorporated radial sampling 40 from the burrow wall to 4.5 cm distance at three depth levels: surface (0–0.5 cm), subsurface (0.5–1.0 cm) and deep (5–5.5 cm) (Supplementary Fig. S8). Bulk sediment was sampled at the three depths at a distance of >30 cm from any other source of bioturbation. Taking into consideration the non-vertical nature of fiddler crab burrows down towards the burrow chamber 55 , we restricted our sampling to the upper portion of the burrow and verified verticality through the fine-scale dissection method of sediment sampling we adopted. For each burrow sampled, sediment was collected for DNA extraction and biogeochemical analysis and stored at −20 °C, while sediment for microbial activity analysis was processed in the laboratory within 30 min of sampling. Biogeochemical, metal and grain size analyses were performed at GEOMAR (Kiel, Germany) following established protocols detailed in Supplementary File S1. DNA was extracted from a 0.4 g sub-sample of each of the 432 sediment samples, and the V4–V5 hypervariable regions of the 16S rRNA gene were amplified by PCR using specific primers (341F, 785R). Library preparation was carried out with the 96 Nextera XT Index Kit (Illumina®) following the manufacturer's instructions. PCR products were sequenced using the Illumina® MiSeq platform with paired-end sequencing at the Bioscience Core Lab, King Abdullah University of Science and Technology. Paired-end reads averaged 500 bp in length. Details of raw data processing are provided in Supplementary File S1. A fluorescein diacetate (FDA) hydrolysis assay was used to assess total microbial hydrolysing activity in sediment 62 in the Thuwal mangrove; the amount of fluorescein released from each sediment sample was calculated by reference to a standard curve (see Supplementary File S1). Oxygen and redox (Eh) were measured with microsensors (Ox-200 and Redox-200 microelectrodes with a tip diameter of 200 μm; UNISENSE, Aarhus, Denmark) in sediment cores extracted at low tide (during daylight) in the Thuwal mangrove.
Sediment cores were taken around a central crab burrow using PVC cores (diameter 15 cm), and bulk sediment cores were taken in unbioturbated sediment. Vertical microsensor measurements (at a resolution of 200 μm) were taken in the sediment cores at interval distances from the burrow wall (0.5, 1, 1.5, 3 and 4.5 cm, following the experimental design) and to a depth of 5 cm. Microsensor signals were recorded directly using the SensorTrace Suite software (Unisense). Further details are provided in Supplementary File S1. Statistical analysis Biogeochemical, metal and grain size analysis Prior to analyses, variables with high multicollinearity (correlation coefficient > 0.85) were removed, retaining: particulate organic carbon (POC), particulate organic nitrogen (PON), particulate inorganic nitrogen (PIN), particulate inorganic carbon (PIC), nitrate, silicate, phosphate and sulphate (biogeochemical) and iron (Fe), lead (Pb) and uranium (U) (metals). Homogeneity of multivariate dispersion was verified for each factor with the distance-based test (PERMDISP), and 3-way PERMANOVA (9999 permutations, Euclidean distance) was used to test differences in biogeochemistry, metal content and grain size amongst the factors (fixed, orthogonal) 'Site' (3 levels: Thuwal, Farasan, Mngazana), 'Depth' (3 levels: surface, subsurface, deep) and 'Burrow' (2 levels: burrow, bulk). SIMPER analysis determined which variables contributed most to the variation in biogeochemistry, metal content and grain size within each 'Site'. Grain size frequencies were analysed using the R package "G2Sd". Differences in sediment phi (continuous response variable) were tested across the factor 'Fraction' (6 levels: 1, 2, 3, 4, 5, bulk; Supplementary Fig. S8) for each 'Site' and 'Depth' using the "aov" function in R. Distance-based multivariate analysis for a linear model (DistLM) was used to determine the biogeochemical variables, metals and grain sizes responsible for explaining community composition variation amongst sites, with model selection based on the corrected Akaike information criterion (AICc) 63 . Bacterial function analysis The FAPROTAX database was used to assign bacterial OTUs to known metabolic or ecological functions. Nine of the most abundant and representative functions were selected for further analysis: aerobic nitrite oxidation, denitrification, nitrogen fixation, cyanobacteria, anoxygenic photoautotrophy, oxygenic photoautotrophy, photoheterotrophy, phototrophy and chemoheterotrophy. Bacterial community and diversity analysis Principal Coordinates Analysis (PCoA), using a Bray-Curtis dissimilarity matrix, was used to visualize the diversity in OTU abundance between sites. Differences amongst samples were tested using a multivariate generalized linear model (GLM) with a negative binomial error distribution in the R package "mvabund", considering OTU as the multivariate response variable and the categorical factors (fixed, orthogonal) 'Site' (3 levels: Thuwal, Farasan, Mngazana), 'Depth' (3 levels: surface, subsurface, deep) and 'Burrow' (2 levels: burrow, bulk) as explanatory variables. Changes in bacterial community composition between sediment fractions (factor 'Fraction', fixed and orthogonal; 6 levels: 1, 2, 3, 4, 5 and bulk, respectively indicating distances from the burrow wall of 0.5, 1.0, 1.5, 3.0, 4.5 and >30 cm, as shown in Supplementary Fig. S8) were tested using Canonical Analysis of Principal coordinates (CAP).
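A minimal R sketch of the multicollinearity screen described above, dropping one variable from each pair with |r| > 0.85. The data frame, variable names and the greedy drop rule are our own illustrative assumptions; the study does not specify the exact procedure used.

    # Hedged sketch of a collinearity screen: iteratively drop the variable
    # with the strongest remaining pairwise correlation until all |r| <= 0.85.
    # Data and drop rule are invented for illustration.
    set.seed(4)
    env <- as.data.frame(matrix(rnorm(120), nrow = 30,
                                dimnames = list(NULL, c("POC", "PON", "nitrate", "Fe"))))
    env$PON <- env$POC * 0.95 + rnorm(30, sd = 0.1)   # force one collinear pair

    drop_collinear <- function(df, cutoff = 0.85) {
      repeat {
        r <- abs(cor(df)); diag(r) <- 0
        if (max(r) <= cutoff) return(df)
        worst <- colnames(df)[which.max(apply(r, 2, max))]
        df <- df[, setdiff(colnames(df), worst), drop = FALSE]
      }
    }
    names(drop_collinear(env))   # retained, mutually non-collinear variables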
Ternary plots were created based on the mean relative abundance of OTUs in burrow and bulk sediment at each site using the R package "ggtern". Linear discriminant analysis effect size (LEfSe) was performed to identify bacterial taxa discriminately more abundant in the bulk and burrow sediment at each site (Wilcoxon P value: 0.05, LDA > 2). Differences in the Shannon diversity index and OTU richness were tested with a 3-way PERMANOVA among the factors 'Site' (levels: Thuwal, Farasan, Mngazana), 'Depth' (levels: surface, subsurface, deep) and 'Burrow' (levels: burrow, bulk). With the same experimental design and test, we compared community functions computed with FAPROTAX, after checking homogeneity of dispersion (PERMDISP, F17,366 = 0.45, P = 0.064). A co-occurrence network was built using the CoNet routine in Cytoscape 3.2.1 to search for significantly co-existing or mutually exclusive OTUs between burrow and bulk sediment among the three sites. After removal of rare OTUs (less than 0.1% of sequences per sample), the network was constructed using a combination of the Pearson and Spearman correlation coefficients and the Bray-Curtis (BC) and Kullback-Leibler (KLD) dissimilarity indices. To assess the statistical significance of OTU co-occurrence/mutual exclusion, edge-specific permutation and bootstrap score distributions (1000 iterations) were normalized to remove the similarity introduced by compositionality alone; P values were then obtained using pooled variance to z-score the permuted null and bootstrap confidence distributions 64. The NetworkAnalyzer plug-in for Cytoscape was used to calculate the topological parameters of the network, and Gephi 1.9 was used to compute modularity and to visualize the network layout. A GLM (R package "MASS"; see the sketch below) was used to test the centrality measures: degree of connection (the extent to which a taxon acts as a hub) and closeness centrality (the extent of a node's influence in the network), considering 'Site' (3 levels: Thuwal, Farasan, Mngazana) and 'Burrow' (2 levels: burrow, bulk) as explanatory variables. We used a negative binomial error distribution for degree of connection, since it is count data, and a quasipoisson distribution for closeness centrality. The function 'varpart' in the R package vegan 65 was used to explore the variation explained by the three factors. A 2-way ANOVA was used to test differences in fluorescein levels between 'Depth' (levels: surface, subsurface, deep) and 'Fraction' (levels: 1, 2, 3, 4, 5, bulk). All statistical tests were performed using PRIMER v. 6.1 with the PERMANOVA+ routines and R software 3.4.1. All parametric tests met the assumptions of normality and homogeneity of variance, or data were transformed appropriately or non-parametric tests applied. Data Availability Sequence data generated during the current study are available in the NCBI SRA repository under BioProject ID PRJNA339628.
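The centrality GLMs referenced above can be sketched as follows; `nodes` is a hypothetical data frame (one row per network node, with columns degree, closeness, Site and Burrow), not an object from the study.

```r
# Sketch of the centrality GLMs described above; `nodes` is a hypothetical
# data frame of node-level network metrics and sampling factors.
library(MASS)

fit_degree    <- glm.nb(degree ~ Site * Burrow, data = nodes)   # count response
fit_closeness <- glm(closeness ~ Site * Burrow,
                     family = quasipoisson(), data = nodes)     # positive, overdispersed response
summary(fit_degree)
summary(fit_closeness)
```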
The types of bacteria living in and around fiddler crab burrows vary widely between mangroves, but their functional activities are remarkably similar. KAUST researchers compared the types of bacteria present in and around mangrove fiddler crab burrows in three different geographic locations. They found that the crabs' burrowing activity changed the sediment in a way that attracted different types of bacteria in each of the three locations; however, the bacteria performed similar functions, such as aerobic respiration, and potential ecological services, such as turnover of organic matter. "Mangrove crabs act like ecosystem engineers: their burrows create radial, halo-like microbiological and geochemical modifications to the surrounding sediment compared with soil that has been left undisturbed," says Jenny Booth, the first author of the study. "This effect was similar in all three locations, with the burrow-dwelling bacteria being taxonomically different but functionally similar," she adds. Microorganisms play important roles in driving global biogeochemical cycles, such as the nitrogen cycle, in which nitrogen—a building block of proteins and nucleic acids—circulates among the earth, the atmosphere and marine ecosystems. A dense population of fiddler crabs grazes at low tide in Mngazana, South Africa. Credit: Marco Fusi Microbial ecologist Daniele Daffonchio and his team at KAUST's Red Sea Research Center hypothesized that bacteria present within the same model system would have similar functions, rather than similar taxonomy, even when these systems existed under very different local environmental conditions. To test this, they sampled the sediment in and around the burrows created by mangrove-dwelling fiddler crabs in two locations on the Saudi Red Sea coast and a third in South Africa. The researchers say their findings can be explained by the fact that burrowing leads to similar changes in the sediment regardless of location. Crabs typically bring sediment up from deeper layers onto the surface and vice versa. This sediment mixing changes the biochemical composition of the surrounding sediment, creating a hotspot of oxidative reactions and changing the types of bacteria living there. Burrow sediment, for example, has more bacteria that use oxygen for respiration, while the surrounding bulk soils have more bacteria that employ anaerobic respiration mechanisms. Sediment mixing also increases nutrient availability, and thus bacterial activity, within the burrow soils. Farasan Island mangrove forest in southwestern Saudi Arabia, one of the three study sites. Credit: Marco Fusi The researchers estimate that the halo-like ring of biochemical and microbial changes extending a small distance around fiddler crab burrows can influence up to 35 percent of mangrove sediment. In Kenyan mangroves, where burrow density is very high, this effect can influence almost 80 percent of the sediment. "We predict that the bioturbation effect of crabs and similar burrowing species has a large overall impact on mangrove ecosystems by altering the nature of the sediment's microbiome. These changes ultimately govern environmental processes, like carbon and nutrient fluxes, in this coastal ecosystem," says Daffonchio.
10.1038/s41598-019-40315-0
Earth
The dynamics of nitrogen-based fertilizers in the root zone
R. Kumar et al, Strong hydroclimatic controls on vulnerability to subsurface nitrate contamination across Europe, Nature Communications (2020). DOI: 10.1038/s41467-020-19955-8 Journal information: Nature Communications
http://dx.doi.org/10.1038/s41467-020-19955-8
https://phys.org/news/2020-12-dynamics-nitrogen-based-fertilizers-root-zone.html
Abstract Subsurface contamination due to excessive nutrient surpluses is a persistent and widespread problem in agricultural areas across Europe. The vulnerability of a particular location to pollution from reactive solutes, such as nitrate, is determined by the interplay between hydrologic transport and biogeochemical transformations. Current studies on the controls of subsurface vulnerability do not consider the transient behaviour of transport dynamics in the root zone. Here, using state-of-the-art hydrologic simulations driven by observed hydroclimatic forcing, we demonstrate the strong spatiotemporal heterogeneity of hydrologic transport dynamics and reveal that these dynamics are primarily controlled by the hydroclimatic gradient of the aridity index across Europe. Contrasting the space-time dynamics of transport times with the reactive timescales of denitrification in soil indicates that ~75% of the cultivated areas across Europe are potentially vulnerable to nitrate leaching for at least one-third of the year. We find that neglecting the transient nature of the transport and reaction timescales results in a substantial underestimation of the extent of vulnerable regions, by almost 50%. Future vulnerability and risk assessment studies must therefore account for the transient behaviour of transport and biogeochemical transformation processes. Introduction Despite >15 years of water quality protection implementation under the EU Water Framework Directive (EU-WFD 1), the most recent EU-WFD report 2 concludes that the majority of European water bodies do not meet the European Union's minimum target, with threats coming from a wide range of pollutants. Among these, excess nitrate from agricultural areas has been highlighted as a major concern 3, 4, 5, 6, 7, 8. Consequently, the European Nitrates Directive 9—itself an integral part of the EU-WFD—designates nitrate vulnerable zones (NVZs) as areas at risk from agricultural nitrate pollution and requires prompt actions to improve nitrate management. A number of indices have been developed to delineate these zones 10, 11, 12, 13. While these indices differ in their conception and implementation, they are often based on a weighted combination of temporally invariant environmental parameters (e.g., terrain slope, land cover and subsurface properties, mean precipitation). A framework for the delineation of such NVZs based on an integrated understanding of the complex and dynamic interplay between hydrologic transport and biogeochemical turnover is still missing. A major challenge for such a framework is to capture the hydrologic transport capacity, or the intrinsic vulnerability to subsurface contamination by diffuse pollutants 11, 14. Subsurface transport is particularly elusive and uncertain due to its complex flow patterns. To account for this uncertainty, research has focused on the statistical characterization of transport dynamics through travel-time distributions (TTDs), which capture the journey of water and dissolved solutes through a given subsurface compartment 15, 16, 17, 18, 19, 20. Much work has been based on steady-state TTDs; more recently, however, studies have started to acknowledge the transient nature of TTDs 21, 22, 23, 24, 25, 26, 27. Typically, such studies have focused on empirical observations at the catchment scale or on a limited number of densely gauged small-scale catchments.
While transport dynamics have recently been investigated at larger scales 28, there are no studies that systematically examine the transient nature of travel times, identify the main driving forces, and connect them to the reactive behavior of (diffuse) pollutants at regional to continental scales. Such information would be highly relevant for management and decision making. To address this gap, we provide a Europe-wide assessment of hydrologic transport behavior as an integrated measure of the intrinsic vulnerability to subsurface contamination by diffuse pollutants (e.g., nitrate). We then use the case of widespread nitrate contamination across arable lands in Europe to show the unrecognized importance of the transient nature of hydrologic transport in previous vulnerability and risk assessments. Our analysis is based on state-of-the-art continental-scale hydrologic simulations driven by meteorological observations over the period 1950–2015, combined with recent theoretical developments for characterizing the transient nature of hydrologic transport dynamics 21, 22, 26, 27 at high spatial and temporal resolutions (0.25° and daily timescale; see "Methods"). We focus on the root zone because it is the interface between the land surface and the deeper subsurface. This zone is the most dynamic and active part of the subsurface and acts as both a hydrologic and a biogeochemical filter, determining the delivery and turnover of surface inputs and the partitioning of flow paths to near and deeper subsurface waters 29, 30, 31. The rooting depth varies across European landscapes depending on, among other geophysical attributes, vegetation types and groundwater table depth 32. Since our focus in this vulnerability assessment is limited to arable landscapes, we account for the dynamics of the first meter of soil, which largely coincides with the rooting zone of arable lands 33. We use the dimensionless Damköhler number 34, 35, 36 to link the hydrologic and biogeochemical timescales (see "Methods") and provide an objective measure for the large-scale vulnerability assessment of subsurface nitrate contamination. Our study therefore focuses on a Europe-wide vulnerability assessment, i.e., the potential for (excess) nitrate leaching from the root zone to deeper in the subsurface (i.e., the vadose zone below the rooting depth). We demonstrate the oversimplified (static) nature of previous vulnerability assessment approaches by highlighting the relevance of the transient nature of transport dynamics, and we discuss the ramifications for future assessments and subsequent policy decisions. Our continental-scale analysis demonstrates strong spatiotemporal heterogeneity of hydrologic transport dynamics throughout European landscapes, and we show that a (static) vulnerability assessment approach that does not account for such transient features greatly underestimates the extent of vulnerable areas prone to subsurface contamination by excess nitrate leaching. Results and discussion Space-time variability of hydrologic transport times Our continental-scale hydrologic simulations show large space-time heterogeneity in the inferred TTDs, which illustrates the complex, non-linear, and transient nature of transport dynamics in the root zone (Fig. 1; see also Supplementary Video).
The large spread among the simulated daily TTDs at three exemplary locations (Fig. 1a–c) illustrates the pronounced (space-time) heterogeneity of hydroclimatic factors (e.g., precipitation, soil-water storage, infiltration and evapotranspiration fluxes) and characterizes the different transport dynamics inferred across Europe. These locations represent the humid, sub-humid (transitional), and semi-arid climate regimes, with aridity indices (ϕ = ratio between mean potential evapotranspiration and mean precipitation) of 0.25 (UK), 1.15 (France), and 3.25 (Spain), respectively. A consistent shift towards longer travel times is noticed when moving from humid to semi-arid regions. The daily travel-time interquartile range (TT_IQR) increases jointly with the median travel time (TT50) throughout Europe (Fig. 1d–f; see also Supplementary Fig. 1). The humid location (UK; Fig. 1a, d) exhibits marked seasonality, with higher TT50 in summer, which is likely due to a regular seasonal pattern of hydrologic states/fluxes, specifically soil moisture and evapotranspiration. In contrast, the daily dynamics of the TT50 at the semi-arid location (Spain; Fig. 1c, f) are more erratic and episodic in nature. This location is characterized by infrequent rainfall that, combined with high evapotranspiration losses, leads to highly variable soil-water storage. The location in France marks a transition zone between the humid and semi-arid locations, with a seasonal pattern during wet years and an erratic pattern during dry years (Fig. 1b, e). The distinct behavior of the TT dynamics simulated at the different locations broadly agrees with past theoretical understanding, even though these previous efforts used synthetic datasets 29, 37. Fig. 1: Transient features of the hydrologic transport dynamics. Illustration of the daily travel-time distributions (a–c) and the corresponding temporal evolution of the daily median TT50 and interquartile ranges TT_IQR (d–f) for three distinct locations in the UK, France, and Spain, representing the humid, sub-humid (transitional), and semi-arid hydroclimatic settings across Europe, respectively. Synthesis of hydrologic transport times across Europe Figure 2a, b summarizes the spatial patterns of the temporal mean μ(TT50) and standard deviation σ(TT50) of the daily soil-water travel times across Europe. The ranges of both the mean and the standard deviation span factors of 6 to 7 across the continent, as 99% of the values lie between 100 and 700 days for μ(TT50) and between 52 and 320 days for σ(TT50). Fifty percent of the study domain has μ(TT50) and σ(TT50) values that exceed 365 and 120 days, respectively. Regions with the shortest soil travel times (μ(TT50) ≤ 180 days) are located in areas of high and frequent rainfall (e.g., the Alps, northern UK and northern Spain—the Pyrenees). The longest travel times (μ(TT50) ≥ 540 days) are found in dry areas with less frequent rainfall in southern Spain and the eastern European regions adjacent to the Black Sea. The spatial patterns (Fig. 2a, b) suggest a strikingly high spatial similarity between μ(TT50) and σ(TT50). This was confirmed through a linear regression analysis (Fig. 2d), resulting in R² = 0.72 and 0.97 for raw and binned data, respectively (p-value < 0.00001). Consequently, European regions that have, on average, longer soil travel times are also more variable or episodic in time, and vice versa.
This result also means that the coefficient of variation (CV) of the daily TT50, which is the regression slope between μ(TT50) and σ(TT50), is remarkably consistent across Europe (~0.4). An analogous consistency in CV values has also been reported in a previous study 37 of catchment-scale travel times across different hydroclimatic settings with synthetic datasets. The temporal variability in median travel times is strongly controlled by variability in climate, and we find that the value of TT50 CV = 0.4 is in good agreement with the temporal variability of daily potential evapotranspiration (CV = 0.34; see Supplementary Fig. 2). Fig. 2: Synthesis of the hydrologic transport time dynamics across Europe. The transient feature of the daily median travel times (TT50; blue lines in Fig. 1d, e, f) is summarized as their temporal mean μ(TT50) and standard deviation σ(TT50) (a, b) for the period 1985–2015. The strong spatial correspondence between μ(TT50) and σ(TT50) is evident from the point-wise correlation analysis (d). The prevailing hydroclimatic feature, the aridity index ϕ in (c)—the ratio between mean potential evapotranspiration (\(\overline{E_p}\)) and mean precipitation (\(\overline{P}\))—is identified as the dominant factor controlling the spatial heterogeneity of the transient transport characteristics μ(TT50) and σ(TT50) simulated across Europe (e, f). Each scatter plot shows, along with the point-wise data cloud, the corresponding bin estimates given as the mean and one standard deviation of the grouped data for every ϕ interval of 0.15. Binning is performed with the aim of seeking generalized relationships (i.e., after reducing the noise in the scatter due to outliers) and, specifically in (e, f), to depict the role of secondary (landscape-related) factors through the binned standard deviation estimates, which are relatively stable across the whole range of ϕ values. The rather well-organized spatial patterns of μ(TT50) and σ(TT50) follow the hydroclimatic gradient observed across Europe (Fig. 2c), here represented through the aridity index (ϕ), which primarily controls the partitioning of incoming rainfall and energy into outgoing water fluxes (i.e., evapotranspiration vs. runoff). Approximately 70–73% of the variance in the Europe-wide estimates of μ(TT50) and σ(TT50) can be explained solely by the spatial heterogeneity of ϕ (Fig. 2e, f). The spatial correlation structures of these travel-time characteristics and the aridity index are nearly identical (Supplementary Fig. 3). The close relation of μ(TT50) and σ(TT50) to ϕ emphasizes that the high soil moisture and frequent rainfall conditions in humid regions lead to high connectivity and fast displacement of water in the soil column 29. In contrast, the (semi-)arid regions, with infrequent rainfall and high soil-water deficits due to high evapotranspiration losses, generally exhibit long travel times. Other site-specific landscape attributes related to terrain, soil and vegetation characteristics show a weaker correspondence to the spatial heterogeneity of μ(TT50) and σ(TT50) than ϕ and therefore constitute secondary controls (see Supplementary Fig. 4).
Notably, the combined effect of the individual secondary factors appears to be stable across the range of ϕ values, as is visible both in the scatter of points in Fig. 2e, f and in the overlying standard deviation estimates of the respective binned data (for every ϕ interval of 0.15). For the (average) binned estimates, we find an almost perfect linear correlation of ϕ with μ(TT50) and σ(TT50) (R² = 0.96–0.98; p-value < 0.00001). These results also hold for the extremes of the daily TT distributions. For example, TT10 and TT90, which are indicative of young and old water fractions, respectively, also show a high spatial correlation with ϕ (see Supplementary Figs. 5 and 6). This result underpins the dominant role of the hydroclimatic factor (ϕ) in shaping the dominant features of the hydrologic transport timescales inferred across Europe. Vulnerability to nitrate leaching across the cultivated areas of Europe Thus far, our analysis has focused on the hydrologic transport dynamics that represent the intrinsic vulnerability 14 of the system to subsurface contamination. Here, we complement the transport timescales with the biogeochemical turnover timescales of nitrate in soil to determine the extent of the regions vulnerable to nitrate leaching across the cultivated areas of Europe. We focus on two competing processes of excess nitrate removal (after consideration of plant uptake) from the soil by contrasting the timescales of nitrate leaching (hydrologic transport) and denitrification (biogeochemical turnover). Denitrification rates are poorly constrained due to a lack of reliable observations, especially at large scales. To acknowledge this uncertainty, we consider different characteristic denitrification timescales 38, \(\langle \mathrm{RT}_{50} \rangle\), defined here as the time to 50% removal of the initial substrate, to allow comparability with the median transport times (TT50). We consider a range of \(\langle \mathrm{RT}_{50} \rangle\) values between 0.5 and 5 years in our analysis 39, 40, 41, 42, 43 (see "Methods" for more details). \(\langle \mathrm{RT}_{50} \rangle\) represents the effective timescale encapsulating the relevant environmental factors, such as soil moisture, temperature, and organic carbon content, that affect the site-specific reaction rates 42, 44. In the following, we analyze two cases and conduct a nitrate vulnerability assessment corresponding to the static (time-averaged) and transient behaviors of the hydrologic transport (TT) and denitrification (RT) timescales. We connect the transport and denitrification timescales through the dimensionless Damköhler number (here defined as \(D_{\mathrm{a}} = \mathrm{TT}_{50}/\mathrm{RT}_{50}\)), which enables us to assess the interplay between these two competing processes across the geographical domain 34, 35, 36, 38. When Da < 1, transport (leaching) dominates over reaction (denitrification), and vice versa. Our static vulnerability assessment, based on the range of \(\langle \mathrm{RT}_{50} \rangle\) (0.5–5 years) and the averaged transport times μ(TT50), results in Da values ranging between 0.05 and 4.0 across the majority of European cultivated areas (Fig. 3a). To interpret these Da numbers, we rely on a series of prior studies. First, a prior study 34 presents a variety of field observational datasets that demonstrate an empirical (non-linear) relationship between Da and nitrate removal. Subsequent studies 36, 45 showed how this empirical relationship can be described by a parametric model based on the exponential function.
Using the latter, the above value of Da = 0.05 would imply that more than 90% of the nitrate leaches from the soil, while a value of Da = 4.0 would correspond to less than 10% of the nitrate leaching from the soil.
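The exact parametric form used in refs. 36 and 45 is not reproduced here; the following R sketch uses one simple exponential parameterization, with an assumed shape parameter `a`, chosen only so that it is consistent with the two endpoint values quoted above.

```r
# Illustrative only: an exponential Da-to-leaching relationship consistent
# with the endpoints quoted in the text (Da = 0.05 -> >90% leached;
# Da = 4.0 -> <10% leached). The shape parameter `a` is an assumption.
leached_fraction <- function(Da, a = 0.7) exp(-a * Da)

leached_fraction(0.05)  # ~0.97: almost all excess nitrate leaches
leached_fraction(4.0)   # ~0.06: denitrification removes most nitrate
leached_fraction(1.0)   # ~0.50: near the Da = 1 vulnerability threshold
```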
Furthermore, our results suggest that the majority of the cultivated areas in Europe would be highly vulnerable to nitrate leaching (Da ≤ 1) under the higher \(\langle \mathrm{RT}_{50} \rangle\) values (≥2 years), while very few locations would be classified as vulnerable in the case of the lowest \(\langle \mathrm{RT}_{50} \rangle\) value of 0.5 year, owing to the dominance of the denitrification timescale. Fig. 3: Subsurface nitrate vulnerability assessment across Europe under static and transient considerations of hydrologic transport and denitrification timescales. a Box plots summarizing the spatial distribution of the Damköhler number (\(D_{\mathrm{a}} = \mathrm{TT}_{50}/\mathrm{RT}_{50}\)) for a range of effective denitrification timescales \(\langle \mathrm{RT}_{50} \rangle\) and the averaged transport times μ(TT50) estimated across the cultivated areas of Europe 72. Box plots are displayed with the horizontal bar at the median; the box indicates the first and third quartiles, and the whiskers indicate ±1.5 times the interquartile range. Grid cells with at least 5% cropland area are considered cultivated areas in the analysis. b Europe-wide Da estimates for the static vulnerability assessment case corresponding to the effective reaction timescale \(\langle \mathrm{RT}_{50} \rangle\) and the average transport timescale μ(TT50). Areas with Da ≤ 1 are vulnerable to subsurface nitrate leaching; ~42% of the total cultivated area falls within this category under the static vulnerability assessment case. c Summary of the transient daily Da(t) estimates for every cultivated grid cell, arranged according to their corresponding aridity index values (ϕ). Summary statistics are presented as quantiles of the daily Da values (Q0.1, 1, 5, ..., 95, 99, 99.9) for the actual estimates (in the background) and the LOESS (locally weighted smoothing)-derived smooth statistics (in a transparent foreground for better clarity). d Frequency of the daily Da(t) ≤ 1, estimated based on the time-varying TT50(t) and RT50(t). Considering the transient nature of Da, nearly 75% of the cultivated areas of Europe become vulnerable to nitrate leaching (Da(t) ≤ 1) for at least one-third of the year (i.e., frequency estimates ≥ 0.33, or 4/12). We further elaborate on the moderate case of \(\langle \mathrm{RT}_{50} \rangle\) = 1 year, which is a highly plausible value, as inferred from the denitrification timescales in previous large-scale 39, 42, 46 and catchment-scale studies 40, 43. Using this \(\langle \mathrm{RT}_{50} \rangle\) value and the average transport time μ(TT50), we find ~42% of the cultivated areas across Europe to be vulnerable to subsurface nitrate leaching, with Da ≤ 1 (Fig. 3b). Figure 3b shows that the cultivated areas in the humid and transitional climate zones (ϕ ≤ 1.5), specifically in the western part of Europe, i.e., France, Germany, Italy, the UK and Ireland, are dominated by hydrologic transport (Da ≤ 1) and are thus vulnerable to nitrate leaching. In contrast, the agricultural areas in the Iberian Peninsula and eastern European countries show a stronger dominance of denitrification over hydrologic transport processes. The vulnerable areas delineated by our approach based on the Europe-wide Da map are remarkably consistent with the (nitrate) leaching risk potential map published by the European Commission 47, which was established following a static, indices-based approach (see Supplementary Fig. 7 for more details). We now contrast the above static characterization of nitrate vulnerability with the results of the transient case based on the time-varying daily TT50(t) and RT50(t). Here, we account for the spatiotemporal variability of the environmental factors, f_E(t), that affect the daily dynamics of the denitrification timescale RT50(t). We include the effects of varying soil moisture, air temperature, and organic carbon content following established parameterization schemes 39, 40, 41, 43, 48 (see "Methods" for more details). This allowed us to construct the temporal variability of RT50(t) for each grid cell by explicitly accounting for the space-time variability of the environmental factor f_E (see "Methods"). The daily mean and variability of the environmental factor f_E follow the hydroclimatic gradient of the aridity index ϕ observed across Europe (see Supplementary Fig. 8). We observe a lower mean and variability of the daily f_E in (semi-)arid areas, resulting from the relatively dry soil moisture conditions that persist for long periods, than in humid areas, which have on average higher f_E because of wetter and more strongly seasonal soil moisture dynamics (Supplementary Fig. 8). Next, we analyze the variability of the daily Da(t), which depicts the interplay between the hydrologic transport timescale (TT50) and the reactive timescale (RT50) for excess nitrate removal (leaching vs. denitrification) through the soil. In Fig. 3c we summarize the daily Da(t) as quantile estimates for every cropland cell, arranged according to its respective aridity index (ϕ) value. For example, for a cropland cell located in a climate region of ϕ = 1 and having a 50th percentile (median; Q50) value of the daily Da of 1.0, this would indicate that the cell is prone to excess nitrate leaching (i.e., Da ≤ 1.0) for nearly half of the simulated period (on average, six months per year). The results depicted in Fig. 3c clearly show the increasing range of the daily Da variability (e.g., between Q95 and Q5) when moving from humid to semi-arid and arid regions. We observe a nearly twofold (100%) increase in the Da range (Q95–Q5) for cropland cells with ϕ between 1 and 3. Interestingly, while the averages (medians) of the daily Da values are usually high (above 1) in (semi-)arid regions, the Da values also exhibit high temporal variability and therefore frequently fall below the critical value of 1 (i.e., times favorable for (excess) nitrate leaching). A correlation analysis suggests a strong correspondence between the averaged Da statistics (median and Q95–Q5) and the aridity index (ϕ) across European croplands (R² ≥ 0.92). We now analyze how the temporal dynamics of the daily Da(t) affect our vulnerability assessment. A frequency analysis based on the time-varying Da ≤ 1 suggests that ~75% of the cropland cells across Europe are vulnerable to nitrate leaching for at least one-third of the year (Fig. 3d; see also Supplementary Fig. 9).
Our estimate of the potentially vulnerable area is nearly twice the estimate (42%) obtained above under the static consideration of transport and reaction timescales. Importantly, the cultivated areas located in the Iberian Peninsula and eastern European countries, which were not recognized as vulnerable regions under the static assumptions (Fig. 3b), are now regarded as regions that are temporarily vulnerable to subsurface nitrate contamination (Fig. 3c, d). We find that the majority of cropland cells in Europe (>90%) are prone to nitrate leaching for at least two months of the year (see Supplementary Fig. 9). Our results therefore highlight the limitation of the static vulnerability assessment approach 47, which leads to an underestimation of nitrate vulnerable regions and has serious implications for nitrate management across Europe. Concluding remarks In this study, we provide a Europe-wide assessment of transient hydrologic transport behavior in the upper one meter of the subsurface. This approach allows us to quantify the intrinsic vulnerability of the subsurface to contaminant leaching at unprecedented spatial and temporal resolutions at the continental scale. We demonstrate the dominant role of large-scale hydroclimatic factors, as expressed in the aridity index, in determining the spatial heterogeneity of transport characteristics (e.g., temporal mean and variability) and the environmental factors that affect the daily variability of denitrification timescales. We apply an approach based on the dimensionless Damköhler number to characterize the vulnerability of subsurface waters to (excess) nitrate leaching from soil by accounting for the complex and dynamic interplay between the hydrologic transport and biogeochemical (denitrification) reaction timescales. This approach provides a general framework to objectively assess vulnerability to other agrochemical pollutants in different subsurface compartments (e.g., root zone, deeper vadose zone, and eventually shallow and deep groundwater). Using this framework as a decision tool for subsurface nitrate contamination assessments, our study closes a pressing gap by making recent progress in the field of transport dynamics accessible to practitioners, regulators, and decision-makers who aim to safeguard and restore European waters. Our results and framework can be used to identify hot spots and hot moments of vulnerability to (excess) nitrate leaching through the soil and could thereby assist (nitrate) management strategies, such as the optimization and regulation of fertilizer applications. Our emphasis on transient aspects in the definition of vulnerable zones will become even more important given the projected increases in the frequency and intensity of extreme hydroclimatic events under changing climate conditions 49, 50, 51. Our results call for improved vulnerability assessment approaches in Europe and other regions of intensive agriculture. Current practices that do not consider transient dynamics could lead to a substantial underestimation of the extent of vulnerable areas and the associated risk. Our study addresses this limitation and thus provides crucial vulnerability criteria, which can be combined with information on exposure (i.e., available data on excess nitrate components) for a risk assessment. To this end, we provide a showcase example of a vulnerability assessment combining information on excess nitrate (see "Methods").
We therefore urge modelers and planners to further develop and evaluate tools that use transient dynamics to assess the vulnerability of the subsurface to diffuse pollution (e.g., excessive nutrient surpluses in the root zone). Further avenues of research include the improvement, validation, and quantification of the predictive uncertainty of this kind of vulnerability assessment. Improved vulnerability maps are fundamental for decisions on agricultural subsidies and nitrate management, which are key components of the EU's common agricultural policy (CAP). Methods Continental-scale hydrologic simulations We use the spatially explicit, process-based, mesoscale hydrologic model (mHM 52, 53) to perform the continental-scale hydrologic simulations over Europe. The model features the multiscale parameter regionalization (MPR) technique, which explicitly accounts for the spatial heterogeneity of fine-scale terrain, soil, vegetation, and other landscape properties. The mHM-MPR modeling framework provides a unique capability to simultaneously and seamlessly operate the model at multiple scales and locations 52, 53, 54. We briefly describe the processes within the root-zone soil compartment relevant for this study in the following text. Here, we account for the dynamics of the first meter of soil, which coincides with the rooting zone of arable lands. This compartment is modeled as three consecutive layers with the following depths: 0–5, 6–25, and 26–100 cm. In each layer, the incoming water in the form of rainfall plus snowmelt, after accounting for canopy interception, is partitioned into soil-water storage and exfiltration based on a non-linear (power) function of the degree of saturation of the corresponding layer, following the conceptualizations used in other large-scale models 55, 56 (a toy sketch of this layered water-balance accounting is given below). The exfiltrated water from the first layer is input to the second layer, and from the second to the third layer. The evapotranspiration losses from each layer are modeled as a fraction of potential evapotranspiration and depend on water-storage-deficit-induced stress and the fraction of vegetation roots in each layer (see Supplementary Fig. 10 for a conceptualization of these root-zone soil moisture accounting processes). Readers interested in a complete model description may refer to previous studies 52, 53, 57 and to the publicly available source code and detailed user manual. A comprehensive overview of the underlying datasets, processing steps, and model establishment, including an impact assessment of natural and human intervention activities (e.g., irrigation), is given in Supplementary Note 1 and Supplementary Table 1. Model simulations are produced at a 0.25° spatial resolution and daily timescale for the period 1950–2015, using the model parameterizations established in previous studies 53, 54, 58 (see also Supplementary Note 1 for more details). We conducted a thorough multivariate evaluation of model performance across the European domain (see Supplementary Note 2 for details). While the choice of spatial resolution used here is constrained by the availability of meteorological forcing datasets (E-OBS; v13.0) 59, the multiscale parameterization approach implemented in mHM allows for the explicit treatment of fine-scale, sub-grid variability of landscape features 52, 53. We emphasize that our study focuses on providing a general framework for the characterization of subsurface nitrate vulnerability, applied here to the pan-European landscape.
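To make the layered water-balance accounting concrete, here is a deliberately simplified one-layer, one-day update in R; the exponent `alpha` and the linear evapotranspiration stress term are illustrative assumptions, not mHM's calibrated parameterization.

```r
# Toy one-day update for a single soil layer (units: mm). The power-law
# partitioning follows the description in the text; `alpha` and the linear
# ET stress are assumptions, not mHM's actual calibrated forms.
step_layer <- function(S, J, pet, S_max, alpha = 2, root_frac = 1) {
  sat <- S / S_max
  I <- J * sat^alpha              # exfiltration: power function of saturation
  E <- root_frac * pet * sat      # ET limited by storage-deficit stress
  S_new <- min(max(S + J - I - E, 0), S_max)
  list(S = S_new, I = I, E = E)   # I feeds the next layer down
}

step_layer(S = 120, J = 8, pet = 3, S_max = 200)
```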
The vulnerability framework can also be applied or expanded at finer scales (e.g., the catchment scale), incorporating more detailed datasets 27 and relevant processes to provide crucial insights into local vulnerability and assist policy intervention strategies. Examples are the localized effects of irrigation (see Supplementary Note 1 for details) and artificial (tile) drainage, which can impact soil-water storage and, indirectly, the root-zone transport dynamics. The hydrologic components of mHM have also been coupled to a nitrate transport model, and previous studies have demonstrated successful applications of this coupled model 43, 60. The formulation of the reactive timescales related to the denitrification process in soil used in this study follows a similar approach to that of the coupled model (i.e., first-order denitrification in soil along with space-time variability of environmental factors; see the section "Nitrate vulnerability assessment" below for more details). Derivation of hydrologic transport times We follow recent theoretical developments to infer the time-variant nature of the travel-time distributions (TTDs) 21, 25, 26, 61, 62, 63. Specifically, we adopt the notion of Botter et al. 21, 61, who provided an elegant expression for deriving the time-variant TTDs of water parcels entering or leaving a control volume based on the temporal evolution of water storages and fluxes under several mixing (age function) schemes. We use the transient formulations of TTDs on a control volume taken as a single grid cell and soil layer, characterized by the daily dynamics of the soil-water storage (S), the incoming flux as effective precipitation J (snowmelt plus rainfall minus canopy interception losses), and the outgoing flux O = (I + E), where I and E represent the exfiltration and evapotranspiration fluxes from a given soil layer, respectively. Under a random sampling scheme of mixing, which assigns to all water particles of different ages in storage the same probability of being sampled by the outgoing fluxes, the analytical expression for the transient TTD at any time t for water parcels exiting as the exfiltration flux can be written as follows 21: $$p_{I}(t - t_{\mathrm{in}} \mid t_{\mathrm{in}}) = \frac{I(t)}{\theta(t_{\mathrm{in}})\, S(t)} \exp\!\left(-\int_{t_{\mathrm{in}}}^{t} \frac{I(t') + E(t')}{S(t')}\, dt'\right)$$ (1) where t − t_in (t > t_in) represents the time from the moment the water parcel enters the control volume (t_in) until now (t). The partition function θ(t_in) indicates the portion of the water parcel that enters the control volume at t_in and leaves as the exfiltration flux (specifically, I as opposed to E) and is expressed as follows: $$\theta(t_{\mathrm{in}}) = \int_{t_{\mathrm{in}}}^{\infty} \frac{I(t)}{S(t)} \exp\!\left(-\int_{t_{\mathrm{in}}}^{t} \frac{I(t') + E(t')}{S(t')}\, dt'\right) dt.$$ (2) This partition function θ is a dimensionless number between 0 and 1. The above expression is formulated for the TTDs conditioned on the entrance (or injection) time of water particles into the control volume, and therefore relates to the concept of life expectancy, which tracks the ages of water particles forward in time 64, 65.
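Equations (1) and (2) can be evaluated numerically from the simulated daily series; the sketch below (daily time step, dt = 1) discretizes the integrals with simple cumulative sums and then chains the three layer TTDs by convolution, as described in the following paragraph. Variable names are ours, not from the mHM code.

```r
# Numerical sketch of Eqs. (1)-(2) for one layer: S, I, E are daily vectors
# (storage, exfiltration, evapotranspiration) for one grid cell; t_in is the
# entry day. Daily time step, so dt = 1 and integrals become cumulative sums.
forward_ttd <- function(S, I, E, t_in) {
  idx <- t_in:length(S)
  H <- cumsum((I[idx] + E[idx]) / S[idx])  # cumulative hazard in Eq. (1)
  q <- (I[idx] / S[idx]) * exp(-H)         # unnormalized density of exit via I
  theta <- sum(q)                          # partition function, Eq. (2),
                                           # truncated at the end of the series
  p <- q / theta                           # conditional TTD, Eq. (1)
  list(pdf = p, theta = theta,
       TT50 = which(cumsum(p) >= 0.5)[1])  # median travel time (days)
}

# Root-zone TTD: sequential convolution of the three layer pdfs, e.g.
# p1, p2, p3 obtained from forward_ttd() applied to each layer:
convolve_pdf <- function(p, q) convolve(p, rev(q), type = "open")
# p_rootzone <- convolve_pdf(convolve_pdf(p1, p2), p3)
# p_rootzone <- p_rootzone / sum(p_rootzone)  # remove numerical residue
```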
The complementary approach, in which the ages of the water particles exiting the system are tracked backward in time, relates to the backward age concept 65, and forward and backward TTs can be related through Niemi's continuity equation 66. Recent studies 22, 23, 25, 63, 67, 68 demonstrated the usefulness of transient TTDs in capturing the overall behavior of hydrological and geochemical responses in experimental and intensively monitored catchments. We numerically solve the above expressions to derive the daily evolution of the Europe-wide TTDs based on the water fluxes and storages of the three soil layers simulated by mHM for the period 1985–2015. The procedure to derive the overall TTD representing the entire root zone (0–1 m) is carried out in two steps. First, we derive the time-varying TTDs for each soil layer separately, using the layer-specific storages and outgoing water fluxes at every modeling time step. The overall TTD of water parcels leaving the entire root zone at 1 m depth is then estimated by sequential convolution of the independently estimated probability density functions of the first, second, and third soil layers. Since this procedure is followed for each grid cell and modeling time step separately, it entails a very high computational effort at the continental scale and daily time steps. We then summarize the daily TTDs for each grid cell with statistical measures corresponding to the median (TT50) and interquartile range (TT_IQR), as well as the tails of the distribution, such as the 10th (TT10) and 90th (TT90) percentile estimates. One critical consideration in the above TTD formulation is the choice of the mixing scheme (or StorAge Selection, SAS, function 25, 61, 62, 69, 70) describing the preference with which water parcels of different ages are sampled by the outflows. Among several mixing approaches, we consider here a random sampling scheme for deriving the TTDs in each soil layer, meaning that all water parcels in storage have an equal preference for being sampled; consequently, all the following analyses of the TTD characterization are contingent on this selection. Our decision is motivated by the fact that there is no a priori information available on the mixing schemes (or SAS functions) at a continental scale and that the random sampling scheme is the one with the highest entropy. It is important to note that, despite the random sampling scheme used here for characterizing the TTDs in individual soil layers, the overall sampling scheme for the entire soil column is far from random, owing to differences in soil-water content and evapotranspiration fluxes among the soil compartments 26; such an approach provides a meaningful way to simulate non-random sampling dynamics, as shown in a recent study 70. Nitrate vulnerability assessment Our nitrate vulnerability assessment is based on the leaching of excess nitrate from the root zone after accounting for plant uptake (and other turnover processes). The mechanisms considered are downward transport and removal by denitrification, and we contrast the corresponding timescales of hydrologic transport (TT) and the denitrification reaction (RT). Owing to the lack of reliable observations of RT, especially at large scales, we take a scenario approach and consider a wide range of RT estimates varying between 0.5 and 5 years.
We represent the RT as a characteristic reaction timescale 38 corresponding to a given percentage of removal of the initial substrate—here taken as 50% (RT50) to allow comparability with the corresponding hydrologic transport timescale (TT50). Under the first-order kinetics adopted in this study, an RT50 of 1 year, for example, corresponds to a denitrification rate constant of −ln(0.5)/RT50 = 0.69 y⁻¹. Following previous large-scale studies 39, 42, 46, RT represents an effective timescale \(\langle \mathrm{RT}_{50} \rangle\) that encapsulates the relevant environmental factors, such as soil moisture, temperature, and organic carbon content, that affect the site-specific reaction behavior 48. We consider two cases for the nitrate vulnerability assessment, accounting for the static and transient behaviors of the transport and reaction timescales. In the static case, we use the average estimates of the transport times μ(TT50) and contrast them with the effective \(\langle \mathrm{RT}_{50} \rangle\) values. In the transient case, we account for the daily dynamics of TT50(t) and RT50(t) for each grid cell. The transient nature of the site-specific RT50(t) is constructed from the spatiotemporal variability of the environmental factors, f_E(t), that affect the daily variability of the denitrification process. Specifically, we account for the dynamic reduction factors caused by varying soil moisture, f_S(t), and temperature, f_T(t). Following established parameterization approaches 39, 40, 41, 43, 48, we define the following time-varying, dimensionless functions f_S(t) and f_T(t), varying between 0 and 1, to reflect the anoxic and optimal temperature conditions required for the denitrification process 48: $$f_{S}(t) = \begin{cases} 0 & S(t) < S_{\tau} \\ \left(\dfrac{S(t) - S_{\tau}}{S_{m} - S_{\tau}}\right)^{\omega} & S_{\tau} \le S(t) \le S_{m} \\ 1 & \text{otherwise}, \end{cases}$$ (3) and $$f_{T}(t) = \beta^{\frac{T(t) - T_{r}}{10}}$$ (4) where S(t) is the soil-water storage on a given day t relative to the saturation limit (S_m), and S_τ is the threshold below which the denitrification process is completely inhibited, taken here as one-third of the saturation limit 41, 43, 48. The parameter ω (= 2.5) defines the steepness of the curve 41, 43, 48. f_T(t) represents the effect of increasing ambient temperature T(t) on denitrification. T_r (= 25 °C) is the reference temperature at which f_T(t) = 1, and β (= 2) is the factor that embeds the dependence of the denitrification rate on ambient temperature 39, 40, 48. We also consider the effect of spatially varying soil organic carbon content by accounting for the spatial information on potential denitrification rate constants based on the agro-ecosystem Carbon And Nitrogen DYnamics (CANDY) model 71. Across the majority of the cultivated areas in Europe, the organic carbon content varies between 0.5 and 8%, with a median estimate of ~1% (see Supplementary Fig. 11), and the corresponding rates range between 0.02 and 0.16 kg/ha/day for the 1 dm soil layer. We use the relative (spatial) variations in this rate information to derive spatial multiplier factors (f_OC), referenced to a nominal value of 1.0 for a base carbon content of 0%.
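Equations (3) and (4) translate directly into R; the sketch below uses the parameter values stated above (S_τ = S_m/3, ω = 2.5, T_r = 25 °C, β = 2) and caps f_T at 1 to respect the stated [0, 1] range.

```r
# Direct transcription of Eqs. (3) and (4) with the stated parameters.
f_S <- function(S, S_max, omega = 2.5) {
  S_tau <- S_max / 3                        # inhibition threshold: one-third of saturation
  ifelse(S < S_tau, 0,
         pmin(((S - S_tau) / (S_max - S_tau))^omega, 1))
}

f_T <- function(Temp, T_r = 25, beta = 2) {
  pmin(beta^((Temp - T_r) / 10), 1)         # capped at 1, per the stated [0, 1] range
}

# Combined daily factor, where f_OC is the spatial organic-carbon multiplier:
# f_E <- f_S(S, S_max) * f_T(Temp) * f_OC
```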
Following an established approach 41, 43, 48, we estimate the combined effect of the multiple environmental factors for each grid cell and modeling time step as f_E(t) = f_S(t) f_T(t) f_OC. However, this f_E(t) value only describes the environmental condition on a single day t, whereas any solute entering on a given day t_in will reside in the root zone for a much longer period and will therefore be exposed to a range of environmental conditions. We express this combined effect of the varying environmental conditions over the entire transport time as: $$\langle f_{E} \rangle (t_{\mathrm{in}}) = \int_{t_{\mathrm{in}}}^{\infty} p_{I}(t - t_{\mathrm{in}} \mid t_{\mathrm{in}}) \left(\frac{1}{t - t_{\mathrm{in}}} \int_{t_{\mathrm{in}}}^{t} f_{E}(t')\, dt'\right) dt.$$ (5) The environmental conditions f_E(t) in the above formulation are therefore weighted such that their differing impact over the entire transport time is acknowledged. By virtue of the inner integral, water—and therefore solutes (excess nitrate)—leaving on any given day is only affected by the averaged environmental conditions up to that day (i.e., from t_in to t). The combined behavior is then represented as the aggregated behavior of all the different portions of water parcels or solutes leaving the root zone, an effect reflected in the outer integral. We use the normalized, time-varying weighting factor \(\langle f'_{E} \rangle (t)\) as the grid-specific environmental factor to establish the site-specific temporal dynamics of the denitrification timescale RT50(t). The normalization is based on the site-specific average of the \(\langle f'_{E} \rangle (t)\) values, which preserves the effective \(\langle \mathrm{RT}_{50} \rangle\) across the study domain. Our subsurface nitrate vulnerability analysis makes use of the dimensionless Damköhler number (\(D_{\mathrm{a}} = \mathrm{TT}_{50}/\mathrm{RT}_{50}\)) to depict the complex interplay between the hydrologic transport (leaching) and biogeochemical turnover (denitrification) timescales 34, 35, 36, 38. This dimensionless number ranges between 0 and ∞, with Da < 1 indicating the dominance of hydrologic transport over the reaction timescales, and vice versa. We use this objective measure to delineate the regions across Europe (with Da ≤ 1) that are vulnerable to nitrate leaching from soil. We synthesize this information for the cultivated area using the cropland map 72 for the year 2000, which was compiled using extensive national and sub-national agricultural census statistics as well as satellite-based land cover classification datasets 72. We consider as cultivated only those grid cells that have a cropland fraction of at least 5%. Based on this threshold, nearly 8100 cropland cells at 0.25° grid resolution (amounting to ~192 million ha of cultivated area) are considered in our vulnerability assessment analysis (a sketch of the resulting cell-level vulnerability calculation is given below).
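Given daily TT50(t) and RT50(t) series for a grid cell, the vulnerability classification used in Fig. 3c, d reduces to a few lines of R; the daily series below are synthetic placeholders, not model output.

```r
# Cell-level vulnerability sketch: daily Damkohler numbers and the
# exceedance frequency mapped in Fig. 3d. Synthetic daily series (in days)
# stand in for the simulated TT50(t) and RT50(t) of one cropland cell.
TT50_daily <- runif(365, 150, 600)
RT50_daily <- runif(365, 200, 500)

Da_daily <- TT50_daily / RT50_daily
freq_leaching <- mean(Da_daily <= 1)  # fraction of days with Da <= 1
vulnerable <- freq_leaching >= 1/3    # vulnerable for >= one-third of the year
```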
Vulnerability assessment contingent on N-surplus Our analysis mainly considers nitrate vulnerability across all cultivated areas without accounting for the spatial heterogeneity of N-surplus (or excess N) estimates, i.e., the net nitrogen balance after accounting for fertilizer and atmospheric inputs and plant uptake. Although these estimates are needed to properly characterize the risk of nitrate leaching, they are rarely available at sub-annual timescales. Taking the example of the N-surplus balances for croplands around the year 2000 73, we note that not all of the cultivated areas across the study domain have a high N-surplus, especially the east European regions bordering the Black Sea (see Supplementary Fig. 12 for the N-surplus map and the corresponding cropland fractions in Fig. 3b). There is, however, a general tendency towards increased N-surplus with a larger fraction of cropland area (see Supplementary Fig. 12). We find that around 33% of the total cultivated area has a high N-surplus (≥1 × 10⁵ kg-N per grid cell) and is vulnerable to nitrate leaching under the static vulnerability assessment case (i.e., cropland cells with Da ≤ 1 based on the averaged TT50 and the effective \(\langle \mathrm{RT}_{50} \rangle\) value). For the transient case based on the time-varying TT50 and RT50, we find that around 58% of the total cultivated area with a high N-surplus is temporarily vulnerable (Da(t) ≤ 1) to nitrate leaching for at least one-third of the year. The analysis conducted here provides a preliminary overview of the nitrate risk assessment; further improvements in this respect require more information on the temporal variability of N-surplus. Data availability Data used in this study have been obtained from the following open sources: terrain elevation GOTOPO30 and EU-DEM; the river database CCM2 v2.1; soil texture maps based on the HWSD; the land cover product GlobCOVER v2; the CORINE land cover products (v18.4); the hydrogeology map IHME1500 v11; the historical forcings E-OBS v13; and the 2000 cropland and N-excess maps. Further details on these and other supporting databases are provided in Supplementary Table 1. Finally, the underlying data for drawing the main conclusions of this study are openly available, and other auxiliary model simulations can be obtained from the corresponding author upon request. Code availability The underlying model source code, along with a test case to validate successful installation of the mesoscale Hydrologic Model (mHM) and a detailed user manual, is openly available. The source code for computing the travel-time distributions (TTDs) is adapted from Hesse et al. 27 and is openly available. Further support regarding the details of the processing algorithms can be obtained from the corresponding author.
Nutrient contamination of groundwater as a result of nitrogen-based fertilizers is a problem in many places in Europe. Calculations by a team of scientists led by the UFZ have shown that, over a period of at least four months per year, nitrate can leach into the groundwater and surface water on about three-quarters of Europe's agricultural land. The proportion of areas at risk from nitrate leaching is thus almost twice as large as previously assumed. In agriculture, nitrogen-based fertilizers are often not applied in a way that is appropriate to the location and use. If the level is too high, the plants do not fully take up the nitrogen. As a result, the excess nitrogen is leached into the groundwater and surface water as nitrate—a problem that occurs in several EU countries. For example, in 2018, the European Court of Justice (ECJ) condemned EU countries including Germany for breaching the EU Nitrates Directive. Last year, the EU Commission reminded Germany to implement the ECJ ruling. How much of the nitrogen applied through fertilization can enter the groundwater and surface water as nitrate, or is instead denitrified (i.e. converted to molecular nitrogen and nitrogen oxides and released into the air), depends, among other things, on complex processes in the soil. A team of UFZ researchers and U.S. partners led by hydrologist Dr. Rohini Kumar has now analyzed in more detail which processes determine the fate of excess nitrogen. The focus is on hydrological and biogeochemical processes in the root zone (i.e. the area that extends from the surface of the soil down to a depth of one meter). "The root zone is the most dynamic and active part of the subsoil, where soil moisture, evaporation and dry/wet phases prominently take effect," says Kumar. It acts as both a hydroclimatic and biogeochemical filter between the surface and the deeper subsurface layers. The vulnerability of agricultural land to nitrate leaching has so far been described using static information on land use, soils, and the topography of the landscape, combined with mean precipitation and groundwater levels—without taking their temporal variability into account. "However, precipitation and temperatures change daily. This affects evaporation and soil water, and ultimately the retention time and water transport to deeper layers. Mean values, as used to describe the static condition, are therefore less appropriate from today's perspective," explains Kumar. The researchers therefore used a dynamic approach to calculate how long dissolved nitrate can remain in the root zone before it leaches down to deeper levels. They combined the mHM (mesoscale hydrologic model) developed at the UFZ with calculations of the daily change in water retention and nitrate in the root zone, as well as denitrification. With the help of the mHM, scientists can simulate the spatio-temporal distribution of the hydrological and transport dynamics occurring in the root zone throughout Europe, down to the day, for the past 65 years. With the new approach, the UFZ researchers conclude that, for at least four months per year, almost 75% of Europe's agricultural land is vulnerable to nitrate leaching into groundwater and surface waters. Under the static approach, this proportion is only 42%.
"Because the spatial-temporal dynamics of water transport have not been taken into account in the vulnerability assessment of delimiting nitrate vulnerable zones, the spatial extent of nitrate vulnerable areas is grossly underestimated," concludes co-author and UFZ hydrogeologist Dr. Andreas Musolff. This concerns, among others, areas in the east and north-east of Germany, the Iberian Peninsula, and some Eastern European countries. According to the UFZ researchers, the new findings could better aid to risk management of nitrogen in agriculture. "Farmers could use the more precise information to more precisely adjust their fertilizer regimes, thereby ensuring that as little nitrate as possible is present in the soil during the particularly critical months," says Musolff. This would prevent additional nitrate from entering the groundwater and surface waters. "This study focussing on the soil zone is a starting point for a comprehensive risk assessment of nitrate loads in the groundwater and surface water. It will be followed by further research on transport and denitrification in the subsoil, groundwater and the surface-waters," says Kumar.
10.1038/s41467-020-19955-8
Medicine
Small uveal melanomas 'not always harmless', study finds
Rumana N. Hussain et al, Small High-Risk Uveal Melanomas Have a Lower Mortality Rate, Cancers (2021). DOI: 10.3390/cancers13092267
http://dx.doi.org/10.3390/cancers13092267
https://medicalxpress.com/news/2021-05-small-uveal-melanomas-harmless-ground.html
Abstract
Role of Vitamin D in Cancer The Roles of microRNA in Tumor Initiation and Development: Diagnostic and Therapeutic Potential The Sphingolipid Pathway in Cancer The Study of Cancer Susceptibility Genes The Study of Cancer Susceptibility Genes (Volume II) The Study of Molecular Pathogenesis and Therapeutic Strategies of Pancreatic Cancer The Survival of Colon and Rectal Cancer The Theragnostics Era: New Radiopharmaceuticals for Diagnostics and Therapy The Tumor Microenvironment of High Grade Serous Ovarian Cancer The Tumor Neuroenvironment The Tumor–Immune Interface for Next-Generation Immunotherapy The Use of Real World (RW) Data in Oncology The Warburg Effect in Cancers Theranostic Imaging and Dosimetry for Cancer Therapeutic Approaches in Chronic Lymphocytic Leukemia Therapeutic Monitoring of Anti-cancer Agents Therapeutic Monoclonal Antibodies and Antibody Products, Their Optimization and Drug Design in Cancers Therapeutic Strategies for Metastatic Melanomas Therapeutic Targeting of the Unfolded Protein Response in Cancer Therapeutic Targets in Chronic Lymphocytic Leukemia Therapeutics of Ovarian Cancers: State of the Art and Science Therapy for Human Endometrioid - Endometrial Carcinoma and Endometriosis Thermal Ablation in the Management for Colorectal Liver Metastases Third Edition of Gynecological Cancer Thoracic Malignancies Surgery Thoracic Neuroendocrine Tumors and the Role of Emerging Therapies Thromboembolism in Breast Cancer: Evidence in Context Thymic Tumors: New Developments and Future Directions in Molecularly Informed Therapies Thyroid Cancer Thyroid Cancer in the Elderly Thyroid Cancer: Diagnosis, Prognosis and Treatment Thyroid Cancer: New Advances from Diagnosis to Therapy Thyroid Cancer: Translational and Clinical Studies Thyroid Cancers Time for a Consolidated Approach for the Integration of Precision Surgery in the Treatment of Colon and Rectal Cancer. Focus on Laparoscopic and Robotic Colorectal Surgery Tissue Agnostic Drug Development in Cancer Tobacco-related Cancers Topical and Intralesional Immunotherapy for Skin Cancer Towards New Promising Discoveries for Lung Cancer Patients: A Selection of Papers from the First Joint Meeting on Lung Cancer of the FHU OncoAge (Nice, France) and the MD Anderson Cancer Center (Houston, TX, USA) TRAIL Signaling in Cancer Cells Transcription Factor Regulation and Activities in Cancer Translational Research in Gynecologic Cancer Treatment Advancement in Localized and Metastatic Renal Cell Carcinoma Treatment Intensification in Localized Prostate Cancer: What Route with What Car? 
Treatment of Cancer-Associated Thrombosis Treatment of Hepatocellular Carcinoma and Cholangiocarcinoma Treatment of Lung Cancer Treatment of Older Adults with Acute Myeloid Leukemia Treatment Strategies and Emerging Biomarkers in High Risk Early-Stage Melanoma Treatment Strategies and Survival Outcomes in Breast Cancer Treatment Strategies for Recurrent Cancers in Head and Neck Oncology Triple Negative Breast Cancer: From Biology to Treatment Tumor and Metabolism Tumor Angiogenesis: An Update Tumor Associated Fibroblasts on Tumor Immune Response Tumor Associated Macrophages Tumor Cell Genesis and Its Microenvironment: Chicken or the Egg Tumor Evolution: Progression, Metastasis and Therapeutic Response Tumor Heterogeneity Tumor Heterogeneity in Pancreatic Cancer Tumor Markers in the Diagnosis of Urological Malignancies Tumor Metabolome: Therapeutic Opportunities Targeting Cancer Metabolic Reprogramming Tumor Microenvironment and Treatment in Uveal Melanoma Tumor Models and Drug Targeting In Vitro (Volume II) Tumor Radioresistance Tumor Stroma Tumor Xenografts Tumor-Associated Myeloid Cells Tumor-Initiating Cells in Breast Cancer: From Bench to Bedside Tumor, Tumor-Associated Macrophages, and Therapy Tumorigenesis Mechanism of Colorectal Cancer Tumors of the Central Nervous System: An Update Tumour Angiogenesis Tumour Associated Dendritic Cells Two Years into the COVID-19 Pandemic: What It Means for Our Cancer Patients Tyrosine Kinase Inhibitors for Lung Cancer Tyrosine Kinase Signaling Pathways in Cancer Ubiquitin-Related Cancer Unmet Need for Evidence on the Possible Role of Neoadjuvant Chemotherapy in Gynaecologic Malignant Disease Unraveling an Aggressive Cancer: The Role of Epigenetics in Pancreas Cancer Unravelling Gastric Cancer Pathobiology: From Correa Cascade to Patchwork Vision Update in Ocular Oncology Update on Pathogenesis and Treatment of Kaposi’s Sarcoma Update on the Management of Head & Neck Paragangliomas: Papers from the First International Congress, Piacenza, Italy September 20-22, 2023 Updates in Acute Myeloid Leukemia Updates in Diagnosis and Management of Bladder Cancer Patients: From Prevention to Surgery and Beyond Updates in Thyroid Cancer Surgery Updates on Chronic Lymphocytic Leukemia Updates on Epigenetics of Brain Tumor Updates on the Genetics of Myeloid Malignancies Updates on the Molecular Profile of Gastrointestinal Stromal Tumors (Volume II) Updates on Urologic Oncology: From Diagnosis to Localized and Systemic Therapy Options Urologic Cancer: Endoscopic, Laparoscopic, and Robot-Assisted Surgery Managment Urology Cancers: Drug Resistance and Signaling Mechanism Urothelial Carcinoma of the Upper Urinary Tract: What Changed? Uveal Melanoma Vaginal Cancer: From Pathology to Treatment Venous Thromboembolism and Cancer Views and Perspectives of Cutaneous Squamous Cell Carcinoma Views and Perspectives of Robot-Assisted Liver Surgery Vitamin D: Role in Cancer Causation, Progression and Therapy What Is New in the Treatment of Intraocular (Uveal) Melanoma Whole Breast Radiotherapy versus Endocrine Therapy in Early Breast Cancer Wnt Signaling in Cancer Women’s Special Issue Series: Oncology World Lung Cancer Awareness Month All Special Issues Volume Issue Number Page Logical Operator Operator AND OR Search Text Search Type All fields Title Abstract Keywords Authors Affiliations Doi Full Text
A new article from Liverpool ocular researchers demonstrates that, contrary to the current paradigm, small uveal (intraocular) melanomas are not always harmless. Instead, a sizeable proportion of them carry molecular genetic alterations that categorize them as highly metastatic tumors. The article recommends that they should not be observed but rather treated immediately to improve patients' chances of survival. The paper shows that uveal melanoma patients with small tumors, when treated within a certain time frame in Liverpool, do indeed have improved outcomes. The study was undertaken by researchers at the Liverpool Ocular Oncology Center based at Liverpool University Hospitals NHS Foundation Trust and the Liverpool Ocular Oncology Research Group (LOORG) at the University of Liverpool, together with Professor Bertil Damato, formerly of Liverpool and now based at the Ocular Oncology Service at Moorfields Eye Hospital, London. First author Dr. Rumana Hussain, of the Liverpool Ocular Oncology Center, said: "Uveal melanoma is a potentially lethal disease, with a 50% mortality rate from metastatic disease. However, traditionally, small lesions have been monitored rather than treated, as it was considered that these are less likely to cause metastatic spread and that local treatment does not influence outcome. "Liverpool is one of the few ocular oncology centers in the world that offers prognostic biopsies to all of its melanoma patients, and we have therefore collected a large molecular genetic cohort of small tumors. This is the first study to show that over a quarter of these smaller uveal melanomas have lethal genetic mutations, and suggests that we may be able to influence patient survival and mortality outcomes with earlier treatment of these small melanomas. This will cause a massive shift in the approach to such patients, both in terms of the management of their primary tumor and in terms of the consideration of prognostic biopsies in small ocular cancers." The Liverpool Ocular Oncology Research Group's mission is to conduct high-quality basic, translational and clinical research into the pathogenesis and treatment of adult ocular tumors that will improve patient care and survival. Together with Dr. Helen Kalirai, Professor Sarah Coupland leads the basic science and translational research portfolio, in addition to being a diagnostic Consultant Pathologist at Liverpool University Hospitals NHS Foundation Trust. Sarah leads one of the four NHSE supra-regional Ophthalmic Pathology services and has led the molecular oncology prognostication service for around 10 years. Professor Heinrich Heimann leads the clinical research portfolio of the LOORG and heads the Liverpool Ocular Oncology Center. Professor Sarah Coupland said: "Since the early 1990s it has been clear that uveal melanomas can be divided into differing genetic prognostic groups. This has become even more definitive through studies such as The Cancer Genome Atlas Uveal Melanoma study, to which LOORG significantly contributed. These past analyses, however, were based mainly on large tumors, and very few genetic investigations have been undertaken on small uveal melanomas, which erroneously have all been labeled as 'safe'. Our study, using a unique collection of tiny intraocular biopsies of small uveal melanomas with follow-up clinical data, shows that they too can be broken down into 'good' and 'bad' tumors. Instead of watching the latter, they can be treated earlier, thereby significantly increasing the chance of cure for these patients."
10.3390/cancers13092267
Medicine
Caterpillars could hold the secret to new treatment for osteoarthritis
The polyadenylation inhibitor cordycepin reduces pain, inflammation and joint pathology in rodent models of osteoarthritis, Scientific Reports (2019). DOI: 10.1038/s41598-019-41140-1 Journal information: Scientific Reports
https://doi.org/10.1038/s41598-019-41140-1
https://medicalxpress.com/news/2019-03-caterpillars-secret-treatment-osteoarthritis.html
Abstract Clinically, osteoarthritis (OA) pain is significantly associated with synovial inflammation. Identification of the mechanisms driving inflammation could reveal new targets to relieve this prevalent pain state. Herein, a role for polyadenylation in OA synovial samples was investigated, and the potential of the polyadenylation inhibitor cordycepin (3′-deoxyadenosine) to inhibit inflammation as well as to reduce pain and structural OA progression was studied. Joint tissues from people with OA with high or low grade inflammation and non-arthritic post-mortem controls were analysed for the polyadenylation factor CPSF4 and inflammatory markers. Effects of cordycepin on pain behaviour and joint pathology were studied in models of OA (intra-articular injection of monosodium iodoacetate in rats and surgical destabilisation of the medial meniscus in mice). Human monocyte-derived macrophages and a mouse macrophage cell line were used to determine effects of cordycepin on nuclear localisation of the inflammatory transcription factor NFκB and polyadenylation factors (WDR33 and CPSF4). CPSF4 and NFκB expression was increased in synovia from OA patients with high grade inflammation. Cordycepin reduced pain behaviour, synovial inflammation and joint pathology in both OA models. Stimulation of macrophages induced nuclear localisation of NFκB and polyadenylation factors, effects inhibited by cordycepin. Knockdown of polyadenylation factors also prevented nuclear localisation of NFκB. The increased expression of polyadenylation factors in OA synovia indicates a new target for analgesic treatments. This is supported by the finding that polyadenylation factors are required for inflammation in macrophages and by the fact that the polyadenylation inhibitor cordycepin attenuates pain and pathology in models of OA. Introduction Osteoarthritis (OA) is a common chronic age-related joint disease, with a significant inflammatory component 1 , 2 , 3 , 4 , and is a leading cause of pain and disability 5 . The pathophysiology of pain in OA is complex. Treatment options are largely limited to lifestyle changes (diet and exercise) and reducing pain with non-steroidal anti-inflammatory drugs (NSAIDs) or opioids, which have limited efficacy and problematic side effects. As a result, joint replacement surgery is a common outcome. OA pathology includes synovitis, cartilage damage, osteophytes and subchondral bone changes. The most prevalent symptom of OA is pain, which is associated with inflammation 6 , 7 . Macrophages play a major role in driving synovitis, which in turn augments the progression of OA pathogenesis 3 . The nuclear factor kappa B (NF-κB) family of transcription factors mediates activation of inflammatory gene expression and is upregulated in chronic inflammatory states such as OA 8 . Upon inflammatory signalling, these transcription factors translocate into the nucleus and trigger the expression of a wide range of immunomodulatory, angiogenic and proliferative factors 9 . Differentiation of osteoclasts involved in bone remodelling is also NFκB-dependent 10 . Cordycepin (3′-deoxyadenosine) is an active compound from the caterpillar fungus Cordyceps militaris 11 . The biochemical pathway for cordycepin is well described: once inside the cell, it is converted to cordycepin triphosphate (cordyTP), which inhibits the last two steps in messenger RNA synthesis, cleavage and polyadenylation, both in nuclear extracts and in tissue culture 12 , 13 . 
Incorporation of cordyTP into the poly(A) tail traps a protein complex on the incomplete mRNA. This complex includes the polyadenylation factors cleavage and polyadenylation specificity factor subunit 4 (CPSF4) and WD repeat-containing protein 33 (WDR33), as well as other proteins such as nuclear export factors 14 , 15 , 16 . Although it is evident that cordyTP is a polyadenylation inhibitor, other targets of cordycepin, such as adenosine receptors, have been proposed 17 , 18 . Previously we showed that cordycepin specifically inhibits inflammatory gene expression in human airway smooth muscle cells. The effects of cordycepin in these cells were consistent with an inhibition of polyadenylation 19 , making this process a putative target for novel anti-inflammatory drugs. Cordycepin has effects on both cartilage and bone, reducing chondrocyte hypertrophy in vitro via down-regulation of runt-related transcription factor 2 (Runx2), matrix metalloproteinases (MMP)-3 and -13 as well as a disintegrin and metalloproteinase with thrombospondin motifs (ADAMTS)-4 and -5 20 , 21 , 22 , 23 . Both in vitro and in vivo studies support potential benefits of cordycepin treatment in preventing bone loss through inhibition of osteoclast differentiation, with osteoprotective effects during osteoporosis 24 , 25 , 26 , 27 . Intra-articular knee injection of cordycepin for a period of 4 to 8 weeks ameliorated cartilage damage in osteoarthritic mice 28 ; however, neither pain nor inflammation endpoints were reported 28 . Synovial inflammation is associated with cartilage damage and bone changes in OA, and is significantly associated with joint pain. Anti-inflammatory activity of cordycepin is evident in murine macrophages in vitro and attributed to the repression of NF-κB dependent gene expression 19 , 29 , 30 , 31 . However, it is unknown whether the effects of cordycepin on inflammation in macrophages can be attributed to effects on polyadenylation, or whether this holds in vivo. Identification of the mechanisms driving synovial inflammation has the potential to reveal new targets to relieve OA pain. Here we investigated whether there is evidence for changes in polyadenylation factors in clinical OA synovial samples, and then the potential of the polyadenylation inhibitor cordycepin to reduce pain, structural OA progression and inflammation. Our findings identify polyadenylation as a novel target for analgesic and disease-modifying drugs for OA. Materials and Methods Reagents and antibodies All reagents were purchased from Sigma-Aldrich unless otherwise stated. The following antibodies were obtained from Abcam: MMP13 (39012), osterix (22552), VEGF (46154) and nestin (18102). DRAQ5, NFκB p65 (4764) and nestin (47607) antibodies were purchased from Cell Signaling Technology Inc. WDR33 (374466) and ADAMTS5 (83186) antibodies were purchased from Santa Cruz Biotech. CPSF4 antibody was obtained from Protein Tech. PCNA (M0879) and CD68 (M0814) antibodies were obtained from Dako. Alkaline phosphatase and peroxidase kits as well as secondary antibodies were obtained from Vector Labs. MCSF was obtained from R&D Systems. RANKL was obtained from Peprotech. Rodent models of OA Studies were in accordance with the UK Home Office Animals (Scientific Procedures) Act 1986 and the International Association for the Study of Pain guidelines, and were approved by the ethical review board at the University of Nottingham. Data are presented in line with the ARRIVE guidelines. 
All animal studies were conducted in a manner that minimised animal distress, and euthanasia was performed via an appropriate Schedule 1 technique (as listed by the UK Home Office). Animals were anaesthetised with isoflurane (2.5–3%) in 100% oxygen (1 L per min) prior to surgeries and intra-articular injections. Tissues including synovia and joints were collected for molecular biology, histological and immunohistochemistry studies. Male Sprague Dawley rats (n = 10/group) weighing 180–200 g were given an intra-articular injection of monosodium iodoacetate (MIA; 1 mg/50 μl in saline) into their left knee at day 0 32 . Control rats received an intra-articular injection of 50 µl of saline. For the therapeutic MIA study, at day 14, cordycepin (4 mg/kg, 8 mg/kg or 16 mg/kg) or vehicle (1 ml distilled water) was mixed with 1 g of wet mash and administered every other day until day 28. For the pre-emptive MIA study, cordycepin (8 mg/kg) was given from day 0 (prior to intra-articular injection) for a period of 2 weeks, until day 14. Rats were food restricted for 2 hrs prior to being given cordycepin. Pain behaviour was measured twice weekly following model induction. Eight- to nine-week-old male C57BL/6 mice (at least n = 15/group) underwent surgery on their left knee joint at week 0 to displace the medial meniscus, as described previously 33 . A small longitudinal incision was made over the joint and, using blunt dissection, the underlying medial meniscotibial ligament (MMTL), which anchors the medial meniscus to the tibial plateau, was transected, destabilising the medial meniscus (DMM). The wound was sutured and the mice were observed until they regained consciousness. The control group underwent sham surgery, in which the ligament was visualised but not transected. From week 14 to 16, mice were orally gavaged every other day with 200 µl cordycepin (8 mg/kg) or vehicle (23% propylene glycol [PPG] in distilled water) 34 . Pain behaviour was measured once weekly following model induction and then twice weekly following cordycepin treatment, until week 16. Pain behaviour was quantified as a change in hindlimb weight distribution and hindpaw mechanical withdrawal thresholds, as previously described 32 .
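Weight-bearing asymmetry recurs throughout the results below, so a worked example may help. The following is a minimal Python sketch of one common way incapacitance-meter readings are converted into an asymmetry measure; the exact formulation used in this study is the one in its reference 32, which is not reproduced here, so the function name and its interpretation are illustrative assumptions rather than the authors' method.

```python
# Illustrative sketch only: a common formulation of hindlimb weight-bearing
# asymmetry from incapacitance-meter readings. The study follows its ref. 32,
# which may define the measure differently.

def weight_bearing_ipsilateral_pct(ipsilateral_g: float, contralateral_g: float) -> float:
    """Percentage of total hindlimb weight borne on the injured (ipsilateral) limb.

    ~50% indicates symmetric weight bearing; values below 50% indicate that
    the animal is off-loading the injured limb, read as pain behaviour.
    """
    total = ipsilateral_g + contralateral_g
    if total <= 0:
        raise ValueError("readings must sum to a positive weight")
    return 100.0 * ipsilateral_g / total

# Example: 80 g on the MIA-injected side and 120 g on the contralateral side
# gives 40% ipsilateral weight bearing, i.e. a clear asymmetry.
print(weight_bearing_ipsilateral_pct(80.0, 120.0))  # 40.0
```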
Human osteoarthritic and post-mortem joint tissues The joint tissue repository of the Arthritis Research UK Pain Centre, which contains samples from >1,700 subjects, was screened to select tissues obtained at the time of total knee replacement (TKR) for OA and tissues obtained post-mortem from age-matched subjects who had not sought medical attention for knee pain during the last year of life (non-OA control group). The tissue samples (n = 10 per group) were split into three distinct groups: an OA group with high grade inflammation (median age [IQR]: 58 [57–73]; 78% male), an OA group with low grade inflammation (median age [IQR]: 61 [58–70]; 67% male) and a non-OA control group (median age [IQR]: 60 [50–70]; 67% male). Synovial inflammation was graded on a scale of 0–3 (where 0 = normal and 3 = severe [high grade] inflammation) by assessing the degree of synovial lining hyperplasia, inflammatory cell infiltrate, and cellularity 35 . Patients undergoing TKR fulfilled the American College of Rheumatology classification criteria for OA at the time of surgery 36 . Subjects from whom samples were obtained post-mortem were recently deceased, had no history of rheumatoid arthritis or pseudogout, and had not previously sought help for knee pain during the last year of life, as determined by interviews with the relatives and review of case notes. Exclusion criteria for non-OA controls consisted of a history of OA, Heberden's nodes identified on clinical examination, macroscopic chondropathy lesions of grade 3 or 4 in the medial tibiofemoral compartment, or osteophytes on direct visualization of the dissected knee. Informed consent was obtained from the TKR patients and from the next of kin of the post-mortem subjects. All study protocols were performed in accordance with the relevant guidelines and regulations indicated by the UK National Research Ethics Service (Nottingham Research Ethics Committee 1 [05/Q2403/24] and Derby Research Ethics Committee 1 [11/H0405/2]). Tissue processing Rat synovia with patellae were dissected, embedded in OCT and snap frozen in isopentane. Tibiofemoral joints were fixed for 48 hrs in 4% paraformaldehyde (PFA), then decalcified in 10% ethylenediaminetetraacetic acid (EDTA) in 10 mM Tris buffer (pH 6.95) for 4 weeks on a shaker at room temperature (RT). Coronal sections of trimmed joint tissues were mounted in paraffin wax. Mouse synovia with patellae and tibiofemoral joints were either frozen on dry ice or the whole knee joints were fixed in 4% paraformaldehyde for 24 hrs before being decalcified in EDTA for 6 days on a shaker at RT. Sagittal sections of trimmed joint tissues were mounted in paraffin wax. For human tissue samples, midcoronal sections of the middle one-third of the medial tibial plateau were fixed in neutral-buffered formalin and then decalcified in 10% EDTA in 10 mM Tris buffer (pH 6.95; at 4 °C) prior to embedding in wax. Surgeons and technicians were instructed to collect synovium from the medial joint line. Synovial tissues were fixed in formalin and embedded in wax without decalcification. Joint histology All sections for histology were cut at 5 μm and visualised using a 20× objective lens unless otherwise indicated. All histomorphometry analysis was done on haematoxylin and eosin or Safranin-O/fast green-stained sections by at least two observers blinded to the treatment groups. In the rat MIA model, cartilage damage, matrix proteoglycan and osteophytes were assessed as previously described 37 . The integrity of the osteochondral junction (OCJ) was measured as the number of channels (and those that were nestin positive) crossing the OCJ into the cartilage of the whole section of medial tibial plateau 37 . Synovial inflammation was graded as previously described 38 on a scale from 0 (lining cell layer 1–2 cells thick) to 3 (lining cell layer >9 cells thick and/or severe increase in cellularity). In the mouse DMM model, joint pathology was assessed based on previously published scoring criteria 39 , 40 . Briefly, cartilage surface integrity was scored from 0 (normal) to 6 (vertical clefts/erosions to the calcified cartilage extending to >75% of the articular surface). Cartilage proteoglycan loss was scored from 0 (normal staining of non-calcified cartilage) to 5 (complete loss of Safranin-O/fast green staining in the non-calcified cartilage extending to ≥75% of the articular surface). Chondrocyte hypertrophy was scored from 0 (no chondrocyte hypertrophy) to 1 (enlarged chondrocyte lacunae with lack of Safranin-O/fast green stain). Osteophyte size was scored from 0 (no osteophyte) to 3 (large osteophyte, greater than 3× the thickness of the adjacent cartilage). Osteophyte maturity was scored from 0 (no osteophyte) to 3 (predominantly bone). Subchondral bone thickening was scored from 0 (normal trabecular bone with greater than 50% marrow space) to 3 (solid bone spanning greater than two thirds of the width of the epiphysis). Synovial inflammation was graded on a scale of 0 (no inflammation: lining cell layer 1–2 cells thick) to 3 (severe inflammation: lining greater than 6 cells thick). In the human synovial sections, inflammation was graded on a scale of 0–3 (where 0 = normal and 3 = severe inflammation) by assessing the degree of synovial lining hyperplasia, inflammatory cell infiltrate, and cellularity 35 .
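Because several scoring scales with different ranges are stacked in the passage above, the sketch below restates the DMM-model histology scales as a small data structure with a range check. This is purely an organisational aid: the field names are invented here and the definitions are abbreviated from the text, not taken from the cited scoring references 39, 40.

```python
# Minimal sketch: the DMM-model histology scales described above, encoded as
# (min, max) ranges. Names are illustrative, not from the paper.
DMM_SCALES = {
    "cartilage_surface_integrity": (0, 6),  # 0 = normal .. 6 = erosions over >75% of surface
    "proteoglycan_loss":           (0, 5),  # 0 = normal Safranin-O staining .. 5 = complete loss over >=75%
    "chondrocyte_hypertrophy":     (0, 1),
    "osteophyte_size":             (0, 3),
    "osteophyte_maturity":         (0, 3),
    "subchondral_bone_thickening": (0, 3),
    "synovial_inflammation":       (0, 3),  # 0 = lining 1-2 cells thick .. 3 = severe
}

def validate_scores(scores: dict) -> dict:
    """Raise if any histology score falls outside its defined scale."""
    for name, value in scores.items():
        lo, hi = DMM_SCALES[name]
        if not lo <= value <= hi:
            raise ValueError(f"{name}={value} outside scale {lo}-{hi}")
    return scores

validate_scores({"proteoglycan_loss": 4, "osteophyte_size": 2})  # passes silently
```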
Immunohistochemistry and immunofluorescence Synovial inflammation was measured as CD68 (clone ED1) positive macrophages, as previously described 41 . Proliferating cell nuclear antigen (PCNA) positive cells and PCNA-immunoreactive CD31-positive cells were used to identify proliferating cells and proliferating endothelial cells, respectively, as measures of the extent of synovial proliferation and angiogenesis 41 . To detect ADAMTS5, MMP13, nestin, VEGF, PCNA, CD68, NF-κB and CPSF4 immunoreactivity in paraffin embedded tissue sections, the sections were first deparaffinised and rehydrated in graded ethanol and water, followed by antigen unmasking (10 mM sodium citrate buffer, pH 6) at 80–85 °C for 20 mins. Sections were cooled for 10 mins at RT, followed by permeabilisation (0.1% Triton X-100) and blocking (5% serum) steps. Primary antibodies were incubated overnight at 4 °C and secondary antibodies for 45 mins at RT. Vectastain ABC-AP alkaline phosphatase with fast diaminobenzidine (DAB) was used to visualise ADAMTS-5, MMP13, nestin, VEGF, CPSF4 and PCNA staining. Preparations were mounted in DePeX. To detect NF-κB, WDR33, CPSF4 and CD68 immunofluorescence in tissue sections and cell cultures, Alexa Fluor 488 and 568 secondary antibodies were used. Cell cultures were fixed for 15 mins in 4% PFA before proceeding to the immunofluorescence protocol as described above. DRAQ5 was used as nuclear stain and sections were mounted in aqueous mounting media. Osteoclast number Tissue sections were dewaxed and recalcified before tartrate-resistant acid phosphatase (TRAP) staining. The number of TRAP-positive multinucleated osteoclasts was quantified within the subchondral bone area, comprising the area between the cartilage/bone junction and the growth plate, as described previously 42 . In-vitro model of human macrophage and osteoclast differentiation This study was approved by the Nottingham University Medical School Research Ethics Committee. Monocytes were isolated from peripheral blood of healthy human donors and either differentiated into macrophages or osteoclasts as previously described 43 . For osteoclast differentiation, monocytes were isolated from buffy coats by gradient centrifugation, seeded onto glass coverslips within 24-well culture plates, and cultured in growth media supplemented with human macrophage colony stimulating factor (MCSF) and human receptor activator of NF-κB ligand (RANKL), unless otherwise stated. Cells were incubated at 37 °C, 7% CO2 for 2 hrs, and the medium was replaced. Growth media containing cordycepin (20 µM) was then added to the cells. After 14 days, cells were washed and fixed with 4% PFA. Differentiated osteoclasts were identified by TRAP staining. 
For quantification of TRAP positive cells, five random fields of view were counted per coverslip, using four coverslips per condition. Cells that stained positive for TRAP and had three or more nuclei were counted. For macrophage differentiation, the monocytes were grown in RPMI 1640 supplemented with 5% foetal bovine serum (FBS) in the presence of MCSF for 5 days. Adherent cells were washed, replated onto coverslips in a 24-well plate and cultured for a further 24 h in 3% FBS before stimulation with LPS with and without cordycepin. Cells were fixed in 4% PFA before proceeding to the immunofluorescence protocol. RAW264.7 cells were maintained in Dulbecco's Modified Eagle Medium (DMEM) with 10% FBS in a humidified atmosphere of 5% CO2 and 95% air at 37 °C. Twenty-four hours before experimentation, the cells were washed with PBS and supplemented with 0.5% serum. The cells were then treated with cordycepin (20 µM) either 1 hr before or 10 mins after LPS stimulation (1 µg/ml), after which the cells were processed for protein/RNA extraction or the immunofluorescence protocol. RNA isolation for tissue culture cells was done using the Promega Reliaprep system. Western blot analysis RAW264.7 cells were lysed with radioimmunoprecipitation assay (RIPA) buffer (0.5% Igepal, 0.5% deoxycholate, 0.05% sodium dodecyl sulfate, 1 mM β-glycerophosphate, 1 mM Na3VO4, 1 mM phenylmethylsulfonyl fluoride) containing protease/phosphatase inhibitors to extract total cell protein content. Protein concentration was determined by Bradford assay. Approximately 30 µg of protein was subjected to SDS-PAGE and transferred to nitrocellulose membrane. To block non-specific binding of proteins, membranes were treated in TBST with 5% skimmed milk for 1 hr at RT, and were incubated overnight with primary antibodies against IκB and vinculin at 4 °C, followed by secondary antibody incubation for 1 hr at RT. Immunoreactivity was detected by chemiluminescence. Western blot images were not reassembled after cuts between the vertical lanes. The blots were cut horizontally to make it easier to probe for various antibodies simultaneously on the same blot. Each continuous image represents a single exposure. RNA isolation from tissues At the end of the pain behavioural studies conducted in the DMM model of OA, fresh-frozen synovial tissue and knee joints were collected and stored at −80 °C. Tissues were homogenized using the Bullet Blender. Total RNA was extracted from the tissues and RAW264.7 cells using TRIzol. Quantitative real-time polymerase chain reaction (qRT-PCR) 500 ng of RNA was reverse transcribed to cDNA and diluted 5-fold with sterile distilled water before being subjected to qPCR using the GoTaq qPCR Master Mix containing the relevant forward and reverse primer sets (Supplementary Table 1). Primers for CD68, IL1β (spliced and unspliced), nestin, VEGF, PCNA, MYC, osterix, RUNX1, RUNX2 and TNF (spliced and unspliced) were designed with Primer Express 3.0 software. All qRT-PCR experiments were performed in triplicate; data were normalised to relative expression using the Qiagen Rotor-Gene Q software. All values were normalised to Ribosomal Protein L28 (RPL28) using the 2^−ΔΔCt method.
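The 2^−ΔΔCt normalisation above is easy to state as a worked calculation. Below is a minimal sketch with invented Ct values; RPL28 as the reference gene follows the text, but the sample and control labels are placeholders, not the study's groups.

```python
# Worked sketch of the 2^-ΔΔCt relative-expression method described above.
# Ct values are invented for illustration; RPL28 is the reference gene per the text.

def fold_change_ddct(ct_target_sample: float, ct_ref_sample: float,
                     ct_target_control: float, ct_ref_control: float) -> float:
    """Fold change of a target gene in a sample versus a control condition,
    normalised to a reference gene (here RPL28), via 2^-ΔΔCt."""
    delta_ct_sample = ct_target_sample - ct_ref_sample     # ΔCt in the sample
    delta_ct_control = ct_target_control - ct_ref_control  # ΔCt in the control
    delta_delta_ct = delta_ct_sample - delta_ct_control    # ΔΔCt
    return 2.0 ** (-delta_delta_ct)

# e.g. IL1β Ct 24.0 with RPL28 Ct 18.0 in stimulated cells, versus
# Ct 28.0 and 18.0 in unstimulated cells: ΔΔCt = -4, i.e. 16-fold induction.
print(fold_change_ddct(24.0, 18.0, 28.0, 18.0))  # 16.0
```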
siRNA Transfection Protocol RAW264.7 cells were transfected for 24 hrs with Lipofectamine diluted in Opti-MEM containing 5 nM siRNA for either WDR33 (Dharmacon SMARTpool ON-TARGETplus L-051645-01-0005) or CPSF4 (Dharmacon SMARTpool ON-TARGETplus L-052851-01-0005). Media was changed the next day and cells were transfected again and incubated for a period of 24 hrs before being processed for either immunofluorescence or RNA extraction. Microarray analysis High-throughput analysis was conducted on RNA extracted from RAW264.7 cells treated with either 20 μM cordycepin or vehicle control for 1 hr before being stimulated with LPS (1 μg/ml) or vehicle control for a further 1 hr, to reveal the genome-wide changes brought about by cordycepin. Biological replicates (n = 4; 16 RNA samples in total) were then analysed on a mouse GE 8 × 60K microarray (Agilent, cat. no. G4852A). This was followed by cluster analysis on the lists of RNAs whose levels were changed by cordycepin. Gene ontology analysis was done using the Database for Annotation, Visualization and Integrated Discovery (DAVID) v6.8 software on the list of genes most strongly downregulated in the LPS with cordycepin treatment group compared to the LPS alone group. Image analysis and quantification Synovial macrophage fractional area was the percentage of synovial section area positive for CD68, and synovial angiogenesis was measured as the endothelial cell proliferation index, defined as the percentage of endothelial nuclei positive for PCNA, derived using four fields per section and one section per case, as described previously 2 , 44 . Cartilage cellularity was quantified by counting the chondrocytes in three microscopic fields (355 μm × 265 μm) per section, at the central, medial and lateral side of the medial tibial plateau (MTP), taken under 40× magnification. The total number of chondrocytes and the number of positively stained chondrocytes were counted in each section. Results were expressed as the percentage of positive cells. At least 3 images per section and 3 sections per block were analysed 45 . Nestin and VEGF expression in the subchondral bone was analysed using ImageJ software as the area (µm²) covered by nestin/VEGF immunoreactivity. PCNA expression in human synovial tissues was analysed on deconvoluted DAB and haematoxylin images in ImageJ as the percentage of PCNA-positive cells. NF-κB immunofluorescence was analysed in ImageJ as nuclear versus cytoplasmic intensity or as intensity per µm². All histology and immunohistochemistry image analyses were performed using a Zeiss Axioskop 50 microscope and a KS300 image analysis system. Immunofluorescence images were captured using a Leica confocal microscope. Statistical analysis Data were analysed with GraphPad Prism version 6 and are presented as either the mean ± SEM or mean ± SD. For all comparisons, p < 0.05 was taken to indicate statistical significance. Parametric data were analysed using analysis of variance (ANOVA) with post hoc Dunnett's test. Univariate comparisons were made against controls using the Student t test. Non-parametric data were analysed using the Kruskal–Wallis test followed by the Mann–Whitney test with Bonferroni correction.
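For readers who want to reproduce the comparison structure outside GraphPad Prism, the sketch below shows a rough SciPy analogue of the tests named above (one-way ANOVA with Dunnett's post hoc test for parametric data; Kruskal-Wallis followed by Bonferroni-corrected Mann-Whitney tests otherwise). The group data are random placeholders, and scipy.stats.dunnett requires SciPy 1.11 or later; none of this reproduces the authors' actual analysis.

```python
# Rough analogue (not the authors' workflow) of the statistics described above.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
control = rng.normal(50, 5, 10)                  # placeholder: saline + vehicle group
groups = {"MIA+vehicle": rng.normal(65, 5, 10),  # placeholder treatment groups
          "MIA+cordycepin": rng.normal(55, 5, 10)}

# Parametric route: one-way ANOVA, then Dunnett's test against the control.
f_stat, p_anova = stats.f_oneway(control, *groups.values())
dunnett_res = stats.dunnett(*groups.values(), control=control)  # SciPy >= 1.11
print(f"ANOVA p = {p_anova:.4f}; Dunnett p-values = {dunnett_res.pvalue}")

# Non-parametric route: Kruskal-Wallis, then Mann-Whitney U per group
# versus control at a Bonferroni-adjusted threshold.
h_stat, p_kw = stats.kruskal(control, *groups.values())
alpha_corrected = 0.05 / len(groups)
for name, g in groups.items():
    u_stat, p = stats.mannwhitneyu(g, control)
    verdict = "significant" if p < alpha_corrected else "n.s."
    print(f"{name}: Mann-Whitney p = {p:.4f} ({verdict} at corrected alpha)")
```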
Results The polyadenylation marker CPSF4 is elevated in inflamed synovial tissues from people with OA Increased synovial cellularity and angiogenesis were observed in synovia from people with OA who had high grade synovial inflammation compared with age- and sex-matched post-mortem non-arthritic control tissues (Fig. 1A). Synovia from people with OA with low grade inflammation were similar to post-mortem control tissues. Increased NFκB and CPSF4 expression observed in synovia from people with OA who had high grade inflammation was localised to CD68 positive macrophages (Fig. 1B,C). These data demonstrate that there are significant changes in the polyadenylation machinery in human OA synovitis. Figure 1 Expression of CD68 positive macrophages, cell proliferation, angiogenesis, NFκB and the polyadenylation marker CPSF4 in synovia from OA patients with varying degrees of inflammation. Synovia from OA patients with high grade inflammation (n = 10), low grade inflammation (n = 10) and post-mortem (PM) control tissues (n = 10) were used to detect the expression of the polyadenylation marker CPSF4, the angiogenesis marker nestin, the extent of cell proliferation (proliferating cell nuclear antigen [PCNA]), CD68 positive macrophages and NFκB. Synovia from OA patients with high grade inflammation expressed higher levels of CPSF4 (A,B) and NFκB (C) compared to PM non-arthritic control tissues, and the expression was localised to CD68 positive macrophages. CPSF4 expression was seen in the nucleus and cytoplasm of CD68 positive macrophages (B), as indicated by yellow arrows. A greater density of nestin positive blood vessels (A) and increased cell proliferation (A) were also seen in the synovia from OA patients with high grade inflammation compared to PM control tissues. Synovia from OA patients with low grade inflammation were similar to PM control tissues. DRAQ5 was used to detect cell nuclei. Scale bar = 20 µm. Data are mean ± SEM of tissues from n = 10 patients per group. *p < 0.05 versus post-mortem controls. ++ p < 0.01 versus low inflamed OA group. Cordycepin reduces pain and synovial inflammation in two rodent OA models To assess the potential of cordycepin as a treatment for OA we used two different rodent OA models. Oral doses of 8 mg/kg and 16 mg/kg cordycepin were similarly effective at reducing pain behaviour in rats with OA induced by intra-articular injection of MIA, and 8 mg/kg was used for all further animal studies (Fig. 2). Figure 2 Cordycepin reduces established pain behaviour in a dose-dependent manner in the MIA model of OA. Male Sprague Dawley rats (n = 8/group) were given an intra-articular injection of MIA (1 mg/50 μl) at day 0. At day 14 (dotted line), cordycepin or vehicle mixed with 1 g of wet mash was administered every other day until day 28. Rats were food restricted for 2 hrs prior to being given cordycepin. At the time of being given cordycepin, rats were moved to individual cages (housed 1/cage) and given wet mash containing cordycepin. Rats were habituated to pain behaviour testing (A: incapacitance [weight-bearing] and B: von Frey [paw-withdrawal threshold]) prior to model induction. Pain behaviour was measured twice weekly following model induction until day 28. Pain behaviour increased in the arthritic animals (blue line) following MIA injection (day 0) compared to controls (black line), and the increase was maintained until day 28. A dose-dependent reduction in pain behaviour is evident following cordycepin administration at day 14, with higher doses of cordycepin being more effective at reducing pain behaviour. *versus saline + vehicle treated groups. + versus MIA group. Preventative oral cordycepin treatment given 2 hrs prior to MIA injection and thereafter every other day for 2 weeks (day 0 to 14) reduced MIA-induced pain behaviour measured as hind limb weight bearing asymmetry and ipsilateral paw withdrawal threshold (Fig. 3A,B). MIA-induced synovitis (Fig. 3C,D,G), synovial cell proliferation (Fig. 3E,H) and synovial angiogenesis (Fig. 3F,H) were also reduced following cordycepin treatment. 
The initial reduction in pain behaviour following preventative treatment with cordycepin in the MIA model was seen at day 3, and this reduction was maintained through day 14 (Fig. 3A,B). Figure 3 Pre-emptive cordycepin treatment reduces pain and synovial inflammation in the MIA model of OA. OA was induced on day 0 by injecting MIA (1 mg/50 µl) in the left knee joints of male Sprague Dawley rats (A,B). Cordycepin (Cordy; 8 mg/kg, orally, every other day) or vehicle (Veh) was administered for a period of 2 weeks, from day 0 until day 14. Saline (50 µl) injected rats were used as controls. Cordycepin treatment reduced MIA-induced changes in pain behaviour measured as weightbearing asymmetry (A) and mechanical allodynia (B). Cordycepin treatment reduced the MIA-induced increase in synovial macrophages (C,G), cellularity/lining thickness (D,G), cell proliferation (E,H) and angiogenesis (F,H). Immunostaining of CD68 positive macrophages and haematoxylin and eosin stained sections showed cellular infiltration (G). Immunostained sections show proliferating endothelial cells (ECs; proliferating cell nuclear antigen [PCNA] positive CD31 cells; black arrows), non-proliferating ECs (blue arrows) and PCNA positive cells (red arrows) (H). Data are presented graphically as mean ± SEM from n = 10 rats/group. *p < 0.05, **p < 0.01, ***p < 0.001, ****p < 0.0001 versus vehicle-treated saline-injected controls. + p < 0.05, ++ p < 0.01, +++ p < 0.001, ++++ p < 0.0001 versus vehicle-treated MIA-injected day 14 OA rats. When cordycepin was given therapeutically to rats with MIA-induced OA for a period of 2 weeks (day 14 to 28; every other day), it first reduced pain behaviour, measured as hind limb weight bearing asymmetry and ipsilateral paw withdrawal threshold (Fig. 4A,B), at day 21 (7 days after the start of cordycepin treatment). This decrease in pain behaviour was maintained through day 28. The reduction in MIA-induced pain behaviour was accompanied by a reduction in MIA-induced increases in synovial macrophages (Fig. 4C–E), angiogenesis (Fig. 4C,G), cellularity (Fig. 4C,F) and NFκB expression by synovial macrophages (Fig. 4H,I). Although CPSF4 expression was somewhat increased in the synovia from the MIA model of OA, neither this nor the inhibitory effects of cordycepin on CPSF4 expression reached statistical significance (Supplementary Fig. 1). Figure 4 Therapeutically administered cordycepin reduces MIA-induced pain and synovial inflammation. OA was induced on day 0 by injecting MIA (1 mg/50 µl) in the left knee joints. Cordycepin (Cordy; 8 mg/kg, orally, every other day) or vehicle (Veh) was administered for a period of 2 weeks, starting at day 14, once MIA-induced changes were established, until day 28. Saline (50 µl) injected rats were used as controls. Cordycepin treatment reduced MIA-induced changes in pain behaviour measured as weightbearing asymmetry (A) and mechanical allodynia (B). Synovial sections were immunostained for CD68 positive macrophages, PCNA positive CD31 blood vessels and proliferating endothelial cells (ECs; proliferating cell nuclear antigen [PCNA] positive CD31 cells; black arrows), non-proliferating ECs (blue arrows) and PCNA positive cells (red arrows) (C). Haematoxylin and eosin stained sections showed cellular infiltration (C). 
MIA-induced synovial inflammation measured as macrophage fractional area (D) and synovial lining thickness score (E), as well as synovial cell proliferation (F), were also reduced in the cordycepin treated groups. Cordycepin treatment did not significantly reduce MIA-induced synovial angiogenesis (G). Immunofluorescence staining for synovial NFκB (p65) intensity (H,I) showed that synovial macrophages expressed NFκB and that the expression of NFκB increased with MIA-induced disease progression (H). Cordycepin treatment reduced the synovial NFκB (p65) intensity (I). Scale bars are 100 µm. Data are mean ± SEM of n = 10 rats per group. *p < 0.05, **p < 0.01, ***p < 0.001, ****p < 0.0001 versus vehicle-treated saline-injected controls. + p < 0.05, ++ p < 0.01 versus vehicle-treated MIA-injected OA rats at day 28. ## p < 0.01 versus MIA-injected OA rats at day 14. Induction of the DMM model in mice produced a significant increase in weight bearing asymmetry by week 12, and this was maintained through week 16 (Fig. 5A). DMM surgery did not significantly alter paw withdrawal threshold (Supplementary Fig. 2). Cordycepin treatment (every other day from week 14 to 16) reduced established pain behaviour measured as hind limb weight bearing asymmetry in the DMM model (Fig. 5A). An initial reduction was seen 2 hrs following oral cordycepin and was maintained through week 16 (Fig. 5A). The DMM model was also associated with synovial inflammation (Fig. 5B,C and Table 1), and with increases in synovial mRNA expression levels of inflammatory (Fig. 5B,E), angiogenesis (Fig. 5C,F) and cell proliferation (Fig. 5D,G) markers. A 2-week oral treatment with cordycepin reduced synovial mRNA expression of inflammatory (CD68 and IL1β) and angiogenic (nestin and VEGF) markers in the DMM model, indicating effects of cordycepin on synovial inflammation at the cellular level, although significance was not achieved for cell proliferation (PCNA and MYC) markers (Fig. 5D,G), nor for the histological synovial inflammation score (Table 1). Figure 5 Cordycepin treatment reduces established knee joint pain and synovial inflammation in the DMM model of OA. OA was induced on day 0 by surgically displacing the medial meniscus (DMM). Sham-operated mice, in which the ligament was visualised but not transected, were used as controls. Cordycepin or vehicle was given orally for a period of 2 weeks, starting at week 14 when knee joint pain measured as weightbearing asymmetry (A) was first evident in the DMM operated mice. Sham-operated mice did not exhibit joint pain. Cordycepin (Cordy) treatment reduced the established pain behaviour observed following DMM surgery (A). Fresh frozen synovial tissues were dissected at week 16 and synovial tissue mRNA expression analysed for inflammatory (B: CD68 and E: IL1β) and angiogenic (C: nestin and F: VEGF) markers, with no alteration in cell proliferation (D: PCNA and G: MYC) markers. Data are mean ± SEM of n = 15 mice per group. *p < 0.05, **p < 0.01, ***p < 0.001, ****p < 0.0001 versus sham controls. # p < 0.05, ## p < 0.01, ### p < 0.001 versus DMM + Cordy group. Table 1 Cordycepin treatment reduces cartilage and bone damage in the mouse destabilisation of the medial meniscus (DMM) model of OA. Values are the median (IQR). + p < 0.05, ++ p < 0.01 versus arthritic (DMM) mice. 
Effects of cordycepin on cartilage damage and subchondral bone changes in two rodent OA models The MIA model of OA was associated with cartilage damage measured as proteoglycan loss, increased chondropathy score and expression of proteolytic enzymes (MMP13 and ADAMTS5). In addition, there was formation of osteophytes and subchondral bone remodelling, measured as the number of channels crossing the OCJ, expression of TRAP-positive osteoclasts, and nestin and VEGF expression in the subchondral bone (Figs 6–9 and Supplementary Fig. 3). Figure 6 Cordycepin treatment reduces monosodium iodoacetate (MIA)-induced subchondral bone changes. OA was induced by injecting MIA (1 mg/50 µl) in the left knee joints on day 0. At day 14, cordycepin (Cordy; 8 mg/kg, orally, every other day) or vehicle (Veh) was administered for a period of 2 weeks, until day 28. Saline (50 µl) injected rats were used as controls. Cordycepin treatment reduced the MIA-induced increase in the number of channels crossing the osteochondral junction (OCJ) (A,B), tartrate-resistant acid phosphatase (TRAP) positive osteoclasts (A,C) as well as nestin (A,D) and VEGF (A,E) expression in the subchondral bone. Vehicle-treated saline-injected control rats showed fewer channels crossing the OCJ and fewer TRAP positive osteoclasts, as well as less nestin and VEGF expression in the subchondral bone, compared with MIA rats. Coronal sections of rat joints were stained with Safranin-O/fast green, showing histological changes in the cartilage and subchondral bone (OCJ), TRAP positive osteoclasts, and nestin and VEGF immunoreactivity. Scale bar = 100 µm. Data are presented graphically as mean ± SEM from n = 10 rats/group. *p < 0.05, **p < 0.01, ***p < 0.001, ****p < 0.0001 versus vehicle-treated saline-injected controls. + p < 0.05, ++ p < 0.01, +++ p < 0.001, ++++ p < 0.0001 versus vehicle-treated MIA-injected OA rats at day 28. # p < 0.05, ## p < 0.01, ### p < 0.001, #### p < 0.0001 versus MIA-injected OA rats at day 14. Figure 7 Pre-emptive cordycepin treatment reduces MIA-induced cartilage damage. OA was induced by injecting MIA (1 mg/50 µl) in the left knee joints on day 0. Cordycepin (Cordy; 8 mg/kg, orally, every other day) or vehicle (Veh) was administered for a period of 2 weeks, from day 0 until day 14. Saline (50 µl) injected rats were used as non-arthritic controls. Vehicle-treated saline-injected control rats showed smooth cartilage and joint margins with normal chondrocyte distribution and proteoglycan staining (A). Increased cartilage loss (B; arrows) and osteophyte growth (B; circle) at joint margins, accompanied by chondrocyte hypocellularity and severe loss of proteoglycan staining, were observed in the MIA rats (B). Cordycepin-treated MIA rats had reduced cartilage damage (C,D) and improved proteoglycan staining (C,E). Cordycepin treatment did not significantly reduce osteophyte score (C,F). Data are presented graphically as mean ± SEM from n = 10 rats/group. **p < 0.01 versus vehicle-treated saline-injected controls. + p < 0.05 versus vehicle-treated MIA-injected day 14 OA rats. Figure 8 Pre-emptive cordycepin treatment reduces MIA-induced cartilage damage by inhibiting the expression of cartilage proteolytic enzymes. OA was induced by injecting MIA (1 mg/50 µl) in the left knee joints on day 0. Cordycepin (Cordy; 8 mg/kg, orally, every other day) or vehicle (Veh) was administered for a period of 2 weeks, from day 0 until day 14. 
Saline (50 µl) injected rats were used as non-arthritic controls. Vehicle-treated saline-injected control rats showed normal chondrocyte distribution and expression of proteolytic enzymes (ADAMTS5 and MMP13). Chondrocyte hypocellularity was observed in the MIA rats, with increased expression of proteolytic enzymes by chondrocytes (A–C). Cordycepin-treated MIA rats showed a significant reduction in chondrocyte expression of ADAMTS5 and MMP13. Moreover, following cordycepin treatment, chondrocyte expression of ADAMTS5 and MMP13 appeared to be mostly nuclear and there was a reduction in chondrocyte hypertrophy. Data are presented graphically as mean ± SEM from n = 10 rats/group. *p < 0.05, ***p < 0.001, ****p < 0.0001 versus vehicle-treated saline-injected controls. ++ p < 0.01, ++++ p < 0.0001 versus vehicle-treated MIA-injected day 14 OA rats. Figure 9 Pre-emptive cordycepin treatment reduces monosodium iodoacetate (MIA)-induced subchondral bone changes. OA was induced by injecting MIA (1 mg/50 µl) in the left knee joints on day 0. Cordycepin (Cordy; 8 mg/kg, orally, every other day) or vehicle (Veh) was administered for a period of 2 weeks, from day 0 until day 14. Saline (50 µl) injected rats were used as controls. Cordycepin treatment reduced the MIA-induced increase in the number of channels crossing the osteochondral junction (OCJ) (A–C,J) and in those that were nestin positive (K), as well as the MIA-induced increase in nestin expression in the subchondral bone (D–F,L) and in tartrate-resistant acid phosphatase (TRAP) positive osteoclasts (G–I,M). Vehicle-treated saline-injected control rats showed fewer channels crossing the OCJ and fewer TRAP positive osteoclasts, as well as less nestin expression in the subchondral bone, compared with MIA rats. Safranin-O/fast green stained coronal sections of rat joints (A–C) show histological changes in the cartilage and subchondral bone. Examples of coronal rat joint sections show nestin immunoreactivity (D–F) and TRAP positive osteoclasts (G–I). Scale bar = 100 µm. Data are presented graphically as mean ± SEM from n = 10 rats/group. *p < 0.05, **p < 0.01, ***p < 0.001, ****p < 0.0001 versus vehicle-treated saline-injected controls. + p < 0.05, ++ p < 0.01, +++ p < 0.001, ++++ p < 0.0001 versus vehicle-treated MIA-injected OA rats at day 14. Preventative cordycepin treatment reduced all the measures of MIA-induced cartilage damage (Figs 7 and 8) and subchondral bone remodelling (Fig. 9), but did not significantly alter MIA-induced osteophyte formation (Fig. 7). Therapeutic cordycepin reduced MIA-induced subchondral bone remodelling (Fig. 6), but had no significant effect on MIA-induced cartilage damage and osteophyte formation (Supplementary Fig. 3). Similar to the MIA model (Figs 2–4, Figs 6–9 and Supplementary Fig. 3), the DMM model was associated with cartilage damage, proteoglycan loss, chondrocyte hypertrophy, increased ADAMTS5 and MMP13 expression in chondrocytes, osteophytosis (size and maturity) and synovial inflammation (Table 1 and Supplementary Fig. 4). Cordycepin treatment reduced all of the above measures of joint pathology in the DMM model (Table 1 and Supplementary Fig. 4). Increased expression of CD68 mRNA was evident in the joints of DMM-operated mice and was reduced in the DMM-operated mice treated with cordycepin (Supplementary Fig. 5C). The moderate increase in joint angiogenesis markers (nestin and VEGF) observed in the DMM model (Supplementary Fig. 5A,B)
(Supplementary Fig. 5A, B) was not significantly affected by cordycepin treatment. The mRNA expression of the osteoblast differentiation markers osterix, RUNX1 and RUNX2 was increased in the joints of DMM mice and reduced with cordycepin treatment (Supplementary Fig. 5). Unlike in the MIA model of OA, TRAP-positive osteoclasts were not observed in the DMM mice (Supplementary Fig. 4).

Cordycepin reduces inflammatory transcription in macrophages through inhibiting nuclear localisation of NFκB

To elucidate possible direct actions of cordycepin in macrophages, we stimulated the mouse macrophage cell line RAW264.7 and human blood monocyte-derived macrophages with LPS. Microarray analysis of RAW264.7 cells showed that many LPS-induced inflammatory mRNAs were repressed by 1 hour pre-treatment with cordycepin (Fig. 10A). There was a significant enrichment of inflammatory genes among the repressed mRNAs (Fig. 10B). Time course analysis of IL1β and TNF mRNA demonstrated effects even when cordycepin was given 10 minutes after LPS (Fig. 10C), indicating a rapid and direct effect on inflammatory gene expression. The unspliced precursors of the inflammatory RNAs were also reduced, suggesting a transcriptional block (Fig. 10C). However, IκB degradation, which is required for NFκB to enter the nucleus, was unchanged by cordycepin treatment (Fig. 10D). Despite this normal signal transduction, LPS-induced nuclear localisation of NFκB was reduced by cordycepin treatment in the RAW264.7 macrophage cell line (Fig. 10E) and in the human monocyte-derived macrophages (Fig. 10F). In addition, cordycepin reduced the differentiation of primary human osteoclasts, which is also dependent on NFκB (Supplementary Fig. 6). Cordycepin thus appears to affect inflammatory gene transcription through blocking nuclear NFκB localisation.

Figure 10 Effects of cordycepin treatment on LPS-stimulated RAW264.7 macrophages and human monocyte-derived macrophages. RAW264.7 cells were treated with cordycepin (Cordy; 20 μM) either 60 mins prior to or 10 mins after LPS (1 μg/ml) stimulation. DMSO was used as vehicle control. Microarray (A) and gene ontology (B) analysis of LPS-stimulated cells (60 mins pre-treatment) in the presence and absence of cordycepin. In the microarray data (A), the colours indicate statistically significant changes of 2-fold or more. Cordycepin treatment (●) 60 mins before and 10 mins after LPS stimulation (C) suppresses LPS-induced activation of inflammatory genes compared to LPS-stimulated, DMSO-treated cells (●). GAPDH was used as the housekeeping gene. Cordycepin (●) prevented the LPS (●) induced increase in unspliced mRNA of the TNF and IL1β inflammatory genes (C). Cordycepin treatment did not prevent the degradation of IκB, as shown in the western blots (D). Uncropped images of the blots can be seen in Supplementary Fig. 7; independent replicates are in Supplementary Fig. 8. At 1 hr, cordycepin reduced nuclear accumulation of NFκB (E), reducing the nuclear:cytoplasmic NFκB expression ratio (E). Monocytes isolated from peripheral blood of healthy human donors and grown in the presence of human macrophage colony stimulating factor (MCSF) for 5 days were differentiated into macrophages before stimulation with LPS (100 ng/ml) with and without 20 µM cordycepin for 1 hr (F). DMSO was used as vehicle control. Cordycepin treatment for 1 hr reduced LPS-induced nuclear accumulation of NFκB in human macrophages, reducing the nuclear:cytoplasmic NFκB expression ratio (F).
DRAQ5 was used to detect cell nuclei. RAW264.7 data are mean ± SD of n = 3 biological replicates. Human macrophage data are mean ± SEM of at least n = 3 biological replicates. Scale bar = 10 μm (E) and 20 μm (F). ****p < 0.0001 versus vehicle-treated group. ++++p < 0.0001 versus LPS group.

Cordycepin reduces inflammation through polyadenylation inhibition in macrophages

Cordycepin is a polyadenylation inhibitor that arrests a complex of factors, including the polyadenylation factors WDR33 and CPSF4, on the mRNA precursor in polyadenylation reactions in nuclear extracts 16; however, no clear link between polyadenylation factors and inflammation has so far been established. We investigated the effect of cordycepin on the localisation of polyadenylation factors. WDR33 and CPSF4 were predominantly cytoplasmic in untreated macrophages (Fig. 11). LPS treatment induced localisation of WDR33 and CPSF4 to the nuclei (Fig. 11A), a novel finding indicating that they are sensitive to inflammatory signalling. Nuclear localisation of polyadenylation factors was reduced by cordycepin (Fig. 11A–C). Cordycepin therefore depletes polyadenylation factors from the nucleus, where they are usually localised during inflammatory gene induction.

Figure 11 A role for the polyadenylation factors WDR33 and CPSF4 in NFκB-mediated transcription. RAW264.7 macrophages were stimulated with LPS (1 μg/ml) for 10 mins and then treated with 20 μM cordycepin for another 50 mins (A–C). DMSO was used as vehicle control. Cordycepin treatment reduced the LPS-induced nuclear localisation of the polyadenylation factors WDR33 (A, B) and CPSF4 (A, C). siRNA-mediated knockdown of the polyadenylation factors WDR33 and CPSF4 prevented the LPS-induced increase in expression of TNF (D, E) and IL1β (F, G) and the nuclear accumulation of NFκB (H, I). Mean ± SD of n = 3 biological replicates. *p < 0.05, **p < 0.01, ***p < 0.001, ****p < 0.0001 versus LPS-stimulated cells. Scale bar = 10 μm.

To characterise a potential role of polyadenylation factors in NFκB localisation, we knocked down WDR33 and CPSF4 using siRNA transfection. This knockdown reduced the LPS-mediated induction of TNF and IL1β mRNAs (Fig. 11D–G) and NFκB nuclear localisation (Fig. 11H, I), similar to the effect of cordycepin on inflammatory gene expression. Our data therefore indicate that inflammatory gene expression in macrophages is dependent on the presence of high levels of polyadenylation factors in the nucleus.

Discussion

OA pain is closely associated with synovial inflammation and joint damage 6,46,47. Inflammation is present at the earliest stages of OA, even before cartilage damage is evident 48,49. In a significant subset of patients with OA, chronic low-grade inflammation appears to be a major driver of ongoing joint damage 4. A recent study has demonstrated that OA knees are more sensitive to inflammatory flares, leading to a transient increase in pain behaviour and long-term exacerbation of inflammation-induced joint damage 50. Preventing inflammation may therefore be particularly important in preventing symptoms and long-term joint damage in OA. We demonstrate an increased expression of the polyadenylation factor CPSF4 associated with synovial inflammation in human OA, and that the polyadenylation inhibitor cordycepin sequestered polyadenylation factors in the cytoplasm of human macrophages, reducing nuclear levels of polyadenylation factors.
Cordycepin treatment reduced pain behaviour and structural damage in the two rodent models of OA. Our data support a role of polyadenylation in OA progression, inflammatory gene expression and pain. NFκB is a central regulator of inflammation, is involved in OA pathophysiology, and is activated in OA chondrocytes during aging and inflammation 51. We confirmed that NFκB is overexpressed, and inflammatory markers are increased, in inflamed human OA synovium, as well as in the two rodent models of OA. We used the surgically induced DMM and the chemically induced MIA models of OA, which resemble many features of human OA: pain, synovial inflammation, cartilage damage, osteophytes and subchondral bone changes. Interestingly, these models display varying degrees of inflammation and structural change, reflecting the heterogeneity of human OA. Oral cordycepin had robust inhibitory effects on pain behaviour whether given therapeutically or preventatively in the rat MIA and the mouse DMM OA models, and reduced inflammatory markers in the synovium, indicating analgesic and anti-inflammatory properties in vivo. Therapeutic cordycepin reduced cartilage damage in the DMM model, but not in the MIA model, despite a significant reduction of cartilage damage by the preventative treatment protocol. The MIA model is a rapidly progressing, severe OA model: at the time when cordycepin was given (day 14 after MIA injection), cartilage damage was already advanced, and cordycepin treatment did not reverse the existing damage. Our data therefore suggest that cordycepin can prevent, but not repair, cartilage damage, and that a window of opportunity might exist in early OA during which disease modification is possible. This window was targeted in the more slowly progressing DMM model, in which therapeutic cordycepin (week 14 following DMM model induction) not only reduced OA-associated bone changes, but also reduced cartilage damage. Effects of cordycepin on subchondral bone changes were observed with both therapeutic and preventative treatment regimens. These effects of cordycepin on structural OA (cartilage damage and subchondral bone) may be mediated by suppression of synovial inflammation or by direct effects on cartilage or bone. In cultured human chondrocytes and in intervertebral discs, cordycepin had cartilage-protective effects 20,21,22,23. In addition, tissue culture and animal studies support benefits of cordycepin in preventing bone loss through inhibition of osteoclast differentiation 24,25,26, and an osteoprotective effect in osteoporosis 27. Our data are in keeping with the beneficial effects of intra-articular injection of encapsulated cordycepin on cartilage damage and ADAMTS5 and MMP13 immunoreactivity in the mouse anterior cruciate ligament transection (ACLT) model of OA 28. However, we acknowledge that other potential mechanisms leading to cartilage protection following cordycepin treatment may exist. These may include regulation of autophagy markers and of aggrecan neoepitopes generated by aggrecanases and metalloproteases 28. Our novel finding that orally administered cordycepin reduces synovial inflammation, structural damage and pain in the DMM and MIA models of OA demonstrates that encapsulation is not required to obtain the therapeutic effects of cordycepin. The mechanisms by which cordycepin attenuates pain behaviour in the models of OA may arise from an inhibition of inflammatory signalling and/or from direct effects on the primary afferent nociceptors.
Cordycepin-mediated prevention of the activation of mRNA translation in axons 52,53,54,55, which is proposed to occur via cytoplasmic polyadenylation 56,57, may contribute to the effects of cordycepin on OA pain responses. Local injection of cordycepin into the hindpaw reduced carrageenan-induced and prostaglandin E2-induced hyperalgesia 52,53; however, these studies are complicated by the locally high doses of cordycepin used, which can also inhibit general protein synthesis and may not be specific to polyadenylation of axonal mRNAs 58. We report an increase in the synovial expression of the polyadenylation factor CPSF4 associated with inflammation in human OA, and that polyadenylation factors undergo nuclear translocation in response to inflammatory signalling in human macrophages, suggesting a role of polyadenylation in inflammatory responses. Our data showing a specific role for polyadenylation factors in the inflammatory process are novel, but are also supported by a previous finding that polyadenylation factor CPSF4 overexpression is required for NFκB-mediated transcription in lung cancer cells, suggesting this dependence is not limited to one cell type 59. Importantly, we show that the polyadenylation inhibitor cordycepin sequesters polyadenylation factors in the cytoplasm of macrophages, lowering nuclear levels of polyadenylation factors. In combination with our earlier data showing that cordycepin acts intracellularly as cordyTP 19,58, our data indicate that cordycepin acts on polyadenylation, sequestering polyadenylation factors in an RNA-bound complex in the cytoplasm 14,15,16, and not as an adenosine receptor agonist 17,18. Although the exact role of polyadenylation factors in the inflammatory response and the molecular detail of the mechanisms of action of cordycepin on polyadenylation remain to be elucidated, our data demonstrate that cordycepin is orally effective in models of OA pain and indicate that it functions as a polyadenylation inhibitor. We demonstrate that the effect of cordycepin on NFκB-mediated transcription is shared between human and rodent macrophages, suggesting that cordycepin will also have the capacity to reduce inflammation in humans. Given that the reported toxicity of cordycepin is low 60,61,62, the prospects for clinical application appear excellent. Inflammation is a core component of OA, with subgroups of human knee OA characterized by varying severities of synovial inflammation 4. For example, generalized nodal OA displays greater synovitis than other OA subtypes 63. Our data indicate that cordycepin holds promise as the lead compound for a novel class of orally available anti-inflammatory and analgesic drugs, initially for OA patients with high synovial inflammation, and potentially also for other conditions associated with pain and inflammation.

Data Availability

All data generated or analysed during this study are included in this published article (and its Supplementary Information files). The microarray dataset has been deposited in the GEO database under accession number GSE126157.
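For readers who want to work with the deposited dataset, the series can be retrieved programmatically. The following is a minimal R sketch using the Bioconductor package GEOquery; the package choice and object names are our own illustration, as the paper does not prescribe any retrieval tooling. Only the accession number comes from the Data Availability statement above.

# Minimal sketch: retrieve the deposited microarray series from GEO in R.
# GEOquery is an assumption for illustration, not a tool named by the authors.
# install.packages("BiocManager"); BiocManager::install("GEOquery")
library(GEOquery)

gse  <- getGEO("GSE126157", GSEMatrix = TRUE)  # downloads the series matrix files
eset <- gse[[1]]                               # ExpressionSet for the platform

expr_mat <- exprs(eset)  # probe-by-sample expression matrix
pheno    <- pData(eset)  # sample annotations (treatment group, time point, etc.)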
A substance from a fungus that infects caterpillars could offer new treatment hope for sufferers of osteoarthritis, according to new research. Cordycepin is an active compound isolated from the caterpillar fungus Cordyceps militaris and has proved effective in treating osteoarthritis by blocking inflammation in a new way: by reducing a process called polyadenylation. The research was undertaken by scientists from the University of Nottingham and supported by funding from Versus Arthritis. The findings have been published today in Scientific Reports.

Dr. Cornelia De Moor from the University of Nottingham's School of Pharmacy led the study and said: "The natural compound cordycepin is derived from a caterpillar fungus which is famous in the Far East for its medicinal properties. In this paper we show that orally administered cordycepin reduces pain and halts disease progression in animal models of osteoarthritis. Intriguingly, it does this by a different mechanism than any other known anti-inflammatory painkiller, through affecting the last step of making a messenger RNA, polyadenylation. This means that medicines derived from cordycepin may help patients for whom other treatments have failed. We hope that cordycepin will prove to be the founder of a new class of painkiller, the polyadenylation inhibitors. There is a long way to go before a cordycepin-derived medicine reaches patients, but our work is very promising and we are very excited about the prospects."

Reducing pain and damage

Osteoarthritis (OA) is a common chronic age-related joint disease, with approximately a third of people over the age of 45 seeking treatment for the disease. In osteoarthritis, the cartilage becomes flaky and rough, and small pieces break off to form loose bodies in the fluid that lubricates the joint, called synovial fluid. This causes irritation and inflammation of the synovial membrane. The loss of cartilage leaves bones unprotected and vulnerable to damage. In this new study it was found that there is an increased expression of the polyadenylation factor CPSF4 associated with synovial inflammation in osteoarthritis. CPSF4 and another polyadenylation factor are required for the activation of the key inflammatory cells, the macrophages. Administering cordycepin represses the activity of the polyadenylation factors and suppresses inflammation in macrophages. Cordycepin treatment reduced pain behaviour and structural damage in rats and mice with osteoarthritis, supporting a role of polyadenylation in osteoarthritis progression, inflammatory gene expression and pain.

Possible new treatment options

Treatment options for this painful and debilitating disease are largely limited to lifestyle changes and reducing pain with non-steroidal anti-inflammatory drugs (NSAIDs) or opioids, which have limited efficacy and come with problematic side effects. As a result, joint replacement surgery is a common outcome. The results from this new research provide the possibility of a more effective treatment for osteoarthritis sufferers that is less toxic and so will have fewer side effects for patients.

Dr. Stephen Simpson from Versus Arthritis said: "Persistent pain is life changing for people with arthritis. This is not good enough and so we are delighted to support this research that has led to these fascinating findings. Previous work by this group has shown this compound has anti-inflammatory effects, and the latest studies support understanding of how this works on the cells responsible for inflammation.
Although in its early stages, the study has great potential for helping people suffering the pain of musculoskeletal conditions and demonstrates the high value and impact of novel discovery-led research on understanding and treating diseases."
doi.org/10.1038/s41598-019-41140-1
Medicine
Sugar not so nice for your child's brain development
Emily E. Noble et al, Gut microbial taxa elevated by dietary sugar disrupt memory function, Translational Psychiatry (2021). DOI: 10.1038/s41398-021-01309-7 Journal information: Translational Psychiatry
http://dx.doi.org/10.1038/s41398-021-01309-7
https://medicalxpress.com/news/2021-03-sugar-nice-child-brain.html
Abstract

Emerging evidence highlights a critical relationship between gut microbiota and neurocognitive development. Excessive consumption of sugar and other unhealthy dietary factors during early life developmental periods yields changes in the gut microbiome as well as neurocognitive impairments. However, it is unclear whether these two outcomes are functionally connected. Here we explore whether excessive early life consumption of added sugars negatively impacts memory function via the gut microbiome. Rats were given free access to a sugar-sweetened beverage (SSB) during the adolescent stage of development. Memory function and anxiety-like behavior were assessed during adulthood, and gut bacterial and brain transcriptome analyses were conducted. Taxa-specific microbial enrichment experiments examined the functional relationship between sugar-induced microbiome changes and neurocognitive and brain transcriptome outcomes. Chronic early life sugar consumption impaired adult hippocampal-dependent memory function without affecting body weight or anxiety-like behavior. SSB consumption during adolescence also altered the gut microbiome, including elevated abundance of two species in the genus Parabacteroides (P. distasonis and P. johnsonii) that were negatively correlated with hippocampal function. Enrichment of these specific bacterial taxa, transferred to adolescent rats, impaired hippocampal-dependent memory during adulthood. Hippocampus transcriptome analyses revealed that early life sugar consumption altered gene expression in intracellular kinase and synaptic neurotransmitter signaling pathways, whereas Parabacteroides microbial enrichment altered gene expression in pathways associated with metabolic function, neurodegenerative disease, and dopaminergic signaling. Collectively, these results identify a role for microbiota "dysbiosis" in mediating the detrimental effects of early life unhealthy dietary factors on hippocampal-dependent memory function.

Introduction

The gut microbiome has recently been implicated in modulating neurocognitive development and consequent functioning 1,2,3,4. Early life developmental periods represent critical windows for the impact of indigenous gut microbes on the brain, as evidenced by the reversal of behavioral and neurochemical abnormalities in germ-free rodents when inoculated with conventional microbiota during early life, but not during adulthood 5,6,7. Dietary factors are a critical determinant of gut microbiota diversity and can alter gut bacterial communities, as evident from the microbial plasticity observed in response to pre- and probiotic treatment, as well as the "dysbiosis" resulting from consuming unhealthy, yet palatable foods that are associated with obesity and metabolic disorders (e.g., Western diet; foods high in saturated fatty acids and added sugar) 8. In addition to altering the gut microbiota, consumption of Western dietary factors yields long-lasting memory impairments, and these effects are more pronounced when consumed during early life developmental periods vs. during adulthood 9,10,11. Whether diet-induced changes in specific bacterial populations are functionally related to altered early life neurocognitive outcomes, however, is poorly understood.
The hippocampus, which is well known for its role in spatial and episodic memory and more recently for regulating learned and social aspects of food intake control 12,13,14,15,16,17, is particularly vulnerable to the deleterious effects of Western dietary factors 9,18,19. During the juvenile and adolescent stages of development, a time when the brain is rapidly developing, consumption of diets high in saturated fat and sugar 20,21,22 or sugar alone 23,24,25,26 impairs hippocampal function while in some cases preserving memory processes that do not rely on the hippocampus. While several putative underlying mechanisms have been investigated, the precise biological pathways linking dietary factors to neurocognitive dysfunction remain largely undetermined 11. Here we aimed to determine whether sugar-induced alterations in gut microbiota during early life are causally related to hippocampal-dependent memory impairments observed during adulthood.

Methods and materials

Experimental subjects

Juvenile male Sprague Dawley rats (Envigo; arrival postnatal day (PN) 26–28; 50–70 g) were housed individually in standard conditions with a 12:12 light/dark cycle. All rats had ad libitum access to water and Lab Diet 5001 (PMI Nutrition International, Brentwood, MO; 29.8% kcal from protein, 13.4% kcal from fat, 56.7% kcal from carbohydrate), with modifications where noted. Treatment group sizes for Aim 1 experiments were derived from power analyses conducted in Statistica software (V7) based on our published data, pilot data, and relevant publications in the literature. All experiments were performed in accordance with the approval of the Animal Care and Use Committee at the University of Southern California.

Experiment 1

Twenty-one juvenile male rats (PN 26–28) were divided into two groups of equal body weight and given ad libitum access to (1) an 11% weight-by-volume (w/v) solution with a monosaccharide ratio of 65% fructose and 35% glucose in reverse osmosis-filtered water (SUG; n = 11) or (2) an extra bottle of reverse osmosis-filtered water (CTL; n = 10). This solution was chosen to model commonly consumed sugar-sweetened beverages (SSBs) in humans in terms of both caloric content and monosaccharide ratio 27. In addition, all rats were given ad libitum access to water and standard rat chow. Food intake, solution intake, and body weights were monitored thrice weekly except where prohibited by behavioral testing. At PN 60, rats underwent Novel Object in Context (NOIC) testing to measure hippocampal-dependent episodic contextual memory. At PN 67, rats underwent anxiety-like behavior testing in the Zero Maze, followed by body composition testing at PN 70 and an intraperitoneal glucose tolerance test (IP GTT) at PN 84. All behavioral procedures were run at the same time each day (4–6 h into the light cycle). Investigators were blinded to group assignment when scoring the behavioral tasks. Fecal and cecal samples were collected prior to sacrifice at PN 104. In a separate cohort of juvenile male rats (n = 6/group), animals were treated as above, but on PN 60 rats were tested in the Novel Object Recognition (NOR) and Open Field (OF) tasks, with two days in between tasks. Animals were sacrificed and tissue punches were collected from the dorsal hippocampus on PN 65. Tissue punches were flash-frozen in a beaker filled with isopentane surrounded by dry ice and then stored at −80 °C until further analyses.
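To illustrate the kind of group-size calculation described under "Experimental subjects", a two-sample power analysis can be reproduced in base R. This is a sketch only: the authors used Statistica, and the effect size below (a standardized difference of 1.4) is an assumed placeholder, not a value reported in the paper.

# Hedged sketch of a two-group power calculation in base R. The assumed
# effect size (delta/sd = 1.4) is illustrative; with it, the required n
# lands near the ~10 rats/group used in Experiment 1.
power.t.test(delta = 1.4,            # assumed group difference, in SD units
             sd = 1,
             sig.level = 0.05,       # two-sided alpha
             power = 0.80,
             type = "two.sample",
             alternative = "two.sided")
# Returns n per group (roughly 9-10 under these assumptions); round up to
# whole animals.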
Experiment 2

Twenty-three juvenile male rats (PN 26–28) were divided into two groups of equal body weight and received a gavage twice daily (12 h apart) for 7 days (only one treatment was given on day 7) of either (1) saline (SAL; n = 8) or (2) a cocktail of antibiotics consisting of Vancomycin (50 mg/kg), Neomycin (100 mg/kg), and Metronidazole (100 mg/kg), along with supplementation with 1 mg/mL ampicillin in their drinking water (ABX; n = 15), a protocol modified from ref. 28. Animals were housed in fresh, sterile cages on day 3 of the antibiotic or saline treatment, and were again switched to fresh sterile cages on day 7 after the final gavage. All animals were maintained on sterile, autoclaved water and chow for the remainder of the experiment. Rats in the ABX group were given water instead of ampicillin solution on day 7. Animals in the ABX group were further subdivided to receive either a gavage of a 1:1 ratio of Parabacteroides distasonis and Parabacteroides johnsonii (PARA; n = 8) or saline (SAL; n = 7) thirty-six hours after the last ABX treatment. To minimize potential contamination, rats were handled minimally for 14 days. Cage changes occurred once weekly, at which time animals and food were weighed. Experimenters wore fresh, sterile PPE, and weigh boxes were cleaned with sterilizing solution between each cage change. On PN 50 rats were tested in NOIC, on PN 60 rats were tested in NOR, on PN 62 rats were tested in the Zero Maze, followed by OF on PN 64. Investigators were blinded to group assignment when scoring the behavioral tasks (NOIC, NOR, Zero Maze, OF). On PN 73 rats were given an IP GTT, and on PN 76 body composition was tested. Rats were sacrificed at PN 83, and dorsal hippocampus tissue punches and cecal samples were collected. Tissue punches were flash-frozen in a beaker filled with isopentane surrounded by dry ice, and cecal samples were placed in microcentrifuge tubes embedded in dry ice. Samples were subsequently stored at −80 °C until further analyses.

IP glucose tolerance test (IP GTT)

Animals were food-restricted 24 h prior to the IP GTT. Immediately prior to the test, baseline blood glucose readings were obtained from the tail tip and recorded by a blood glucose meter (One-touch Ultra2, LifeScan Inc., Milpitas, CA). Each animal was then intraperitoneally (IP) injected with dextrose solution (0.923 g/ml by body weight) and tail tip blood glucose readings were obtained at 30, 60, 90, and 120 min after IP injection, as previously described 23.

Zero Maze

The Zero Maze is an elevated circular track (63.5 cm fall height, 116.8 cm outside diameter), divided into four equal-length sections. Two sections were open with 3 cm high curbs, whereas the two other, closed sections contained 17.5 cm high walls. Animals are placed in the maze facing an open section of the track in a room with ambient lighting for 5 min while the experimenter watches the animal from a monitor outside of the room. The experimenter records the total time spent in the open sections (defined as the head and front two paws in the open arms) and the number of crosses into the open sections from the closed sections.

The novel object in context task

NOIC measures episodic contextual memory based on the capacity of an animal to identify which of two familiar objects it has never seen before in a specific context. Procedures were adapted from prior reports 29,30.
Briefly, rats are habituated to two distinct contexts on subsequent days (with the habituation order counterbalanced across groups) in 5-min sessions: Context 1 is a semi-transparent box (15 in. W × 24 in. L × 12 in. H) with orange stripes and Context 2 is a grey opaque box (17 in. W × 17 in. L × 16 in. H) (context identity assignments counterbalanced across groups); each context is in a separate, dimly lit room, the lighting obtained using two desk lamps pointed toward the floor. Day 1 of NOIC begins with each animal being placed in Context 1 containing two distinct, similarly sized objects placed in opposite corners: a 500 ml jar filled with blue water (Object A) and a square glass container (Object B) (object assignments and placement counterbalanced across groups). On day 2 of NOIC, animals are placed in Context 2 with duplicates of one of the objects. On NOIC day 3, rats are placed in Context 2 with Object A and Object B. One of these objects is not novel to the rat, but its placement in Context 2 is novel. All sessions are 5 min long and are video recorded. Each time the rat is placed in one of the contexts, it is placed with its head facing away from both objects. The time spent investigating each object is recorded from the video recordings by an experimenter who is blinded to the treatment groups. Exploration is defined as sniffing or touching the object with the nose or forepaws. The task is scored by dividing the time spent exploring the object that is novel to the context by the time spent exploring Objects A and B combined, yielding the novelty or "discrimination index". Rats with an intact hippocampus will preferentially investigate the object that is novel to Context 2, given that this object is familiar yet is now presented in a novel context, whereas hippocampal inactivation impairs this preferential investigation 29.

Novel object recognition

The apparatus used for NOR is a grey opaque box (17 in. W × 17 in. L × 16 in. H) placed in a dimly lit room, the lighting obtained using two desk lamps pointed toward the floor. Procedures were adapted from ref. 31. Rats are habituated to the empty arena and conditions for 10 min on the day prior to testing. The novel object and the side on which the novel object is placed are counterbalanced across groups. The test begins with a 5-min familiarization phase, where rats are placed in the center of the arena, facing away from the objects, with two identical copies of the same object to explore. The objects were either two identical cans or two identical bottles, counterbalanced by treatment group. The objects were chosen based on preliminary studies which determined that they are equally preferred by Sprague Dawley rats. Animals are then removed from the arena and placed in the home cage for 5 min. The arena and objects are cleaned with 10% ethanol solution, and one of the objects in the arena is replaced with a different one (either the can or the bottle, whichever the animal has not previously seen, i.e., the "novel object"). Animals are again placed in the center of the arena and allowed to explore for 3 min. Time spent exploring the objects is recorded via video recording and analyzed using Any-maze activity tracking software (Stoelting Co., Wood Dale, IL).

Open Field

OF measures general activity level and also anxiety-like behavior in the rat. A large gray bin, 60 cm (L) × 56 cm (W), is placed under diffuse, even lighting (30 lux).
A center zone is identified and marked in the bin (19 cm L × 17.5 cm W). A video camera is placed directly overhead and animals are tracked using AnyMaze software (Stoelting Co., Wood Dale, IL). Animals are placed in the center of the box facing the back wall and allowed to explore the arena for 10 min while the experimenter watches from a monitor in an adjacent room. The apparatus is cleaned with 10% ethanol after each rat is tested.

Body composition

Body composition (body fat, lean mass) was measured using time-domain nuclear magnetic resonance (Bruker NMR minispec LF 90II, Bruker Daltonics, Inc.).

Bacterial transfer

P. distasonis (ATCC 8503) was cultured under anaerobic conditions at 37 °C in Reinforced Clostridial Medium (RCM, BD Biosciences). P. johnsonii (DSM 18315) was grown under anaerobic conditions in PYG medium (modified, DSM medium 104). Cultures were authenticated by full-length 16S rRNA gene sequencing. For bacterial enrichment, 10⁹ colony-forming units of both P. distasonis and P. johnsonii were suspended in 500 µL pre-reduced PBS and orally gavaged into antibiotic-treated rats. When co-administered, a 1:1 ratio of P. distasonis to P. johnsonii was used.

Gut microbiota DNA extraction and 16S rRNA gene sequencing in sugar-fed and control rats

All samples were extracted and sequenced according to the guidelines and procedures established by the Earth Microbiome Project 32. DNA was extracted from fecal and cecal samples using the MO BIO PowerSoil DNA extraction kit. Polymerase chain reaction (PCR) targeting the V4 region of the 16S rRNA bacterial gene was performed with the 515F/806R primers, utilizing the protocol described in Caporaso et al. 33. Amplicons were barcoded and pooled in equal concentrations for sequencing. The amplicon pool was purified with the MO BIO UltraClean PCR Clean-up kit and sequenced on the 2 × 150 bp MiSeq platform at the Institute for Genomic Medicine at UCSD. All sequences were deposited in Qiita Study 11255 as raw FASTQ files. Sequences were demultiplexed using the QIIME 1 "split libraries" workflow, using the forward reads only. Demultiplexed sequences were then trimmed evenly to 100 bp and 150 bp to enable comparison with other studies for meta-analyses. Trimmed sequences were matched to known OTUs at 97% identity.

Gut microbiota DNA extraction and 16S rRNA gene sequencing for Parabacteroides-enriched and control rats

Total bacterial genomic DNA was extracted from rat fecal samples (0.25 g) using the Qiagen DNeasy PowerSoil Kit. The library was prepared following methods from Caporaso et al. 33. The V4 region (515F–806R) of the 16S rDNA gene was PCR amplified using individually barcoded universal primers and 30 ng of the extracted genomic DNA. The PCR conditions were as follows: 94 °C for 3 min to denature the DNA, followed by 35 cycles of 94 °C for 45 s, 50 °C for 60 s, and 72 °C for 90 s, with a final extension of 10 min at 72 °C. The PCR reaction was set up in triplicate, and the PCR products were purified using the QIAquick PCR purification kit (QIAGEN). The purified PCR product was pooled in equal molar concentrations quantified by NanoDrop and sequenced by Laragen, Inc. on the Illumina MiSeq platform with the 2 × 250 bp reagent kit for paired-end sequencing. Amplicon sequence variants (ASVs) were chosen after denoising with the Deblur pipeline. Taxonomy assignment and rarefaction were performed using QIIME2-2019.10.
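As a bridge from the sequencing pipelines above to the analyses reported below (observed-OTU alpha diversity, genus-level relative abundance, and the Parabacteroides–memory regressions), a minimal R sketch on a taxa-by-sample count table might look as follows. The file and object names are hypothetical; the actual workflow used QIIME/QIIME2 as described.

# Sketch only: downstream summaries on a rarefied taxa-by-sample count
# matrix exported from QIIME/QIIME2. "otu_table.csv" and "noic_score"
# are hypothetical placeholders, not files or variables from the study.
library(vegan)

otu <- as.matrix(read.csv("otu_table.csv", row.names = 1))

# Observed OTUs per sample (alpha diversity, cf. Fig. 3B)
observed_otus <- specnumber(t(otu))  # counts nonzero taxa per sample

# Relative abundance (%) of the genus Parabacteroides (cf. Figs 2D and 3E),
# assuming row names carry genus-level labels
rel <- sweep(otu, 2, colSums(otu), "/") * 100
para_pct <- colSums(rel[grepl("Parabacteroides", rownames(rel)), , drop = FALSE])

# Log-normalized counts regressed against NOIC discrimination scores
# (cf. Fig. 2E), given a per-animal score vector:
# summary(lm(noic_score ~ log10(para_counts + 1)))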
Hippocampal RNA extraction and sequencing

Hippocampi from rats treated with or without sugar or Parabacteroides were subjected to RNA-seq analysis. Total RNA was extracted according to the manufacturer's instructions using the RNeasy Lipid Tissue Mini Kit (Qiagen, Hilden, Germany). Total RNA was checked for degradation on a Bioanalyzer 2100 (Agilent, Santa Clara, CA, USA). Quality was very high for all samples, and libraries were prepared from 1 µg of total RNA using a NuGen Universal Plus mRNA-seq Library Prep Kit (Tecan Genomics Inc., Redwood City, CA). Final library products were quantified using the Qubit 2.0 Fluorometer (Thermo Fisher Scientific Inc., Waltham, MA, USA), and the fragment size distribution was determined with the Bioanalyzer 2100. The libraries were then pooled equimolarly, and the final pool was quantified via qPCR using the Kapa Biosystems Library Quantification Kit, according to the manufacturer's instructions. The pool was sequenced on an Illumina NextSeq 550 platform (Illumina, San Diego, CA, USA) in Single-Read 75 cycles format, obtaining about 25 million reads per sample. Library preparation and sequencing were performed at the USC Genome Core.

RNA-seq quality control

Data quality checks were performed using the FastQC tool, and low-quality reads were trimmed with Trim Galore. RNA-seq reads passing quality control were mapped to the Rattus norvegicus transcriptome (Rnor6) and quantified with Salmon 34, which directly maps RNA-seq reads to the transcriptome and quantifies transcript counts. tximport 35 was used to convert transcript counts into gene counts. Potential sample outliers were detected by principal component analysis (PCA); one control and one treatment sample from the Parabacteroides experiment were deemed outliers (Fig. S1) and removed.

Identification of differentially expressed genes (DEGs)

DESeq2 36 was used to conduct differential gene expression analysis between sugar treatment and the corresponding controls, or between Parabacteroides treatment and the corresponding controls. Low-abundance genes were filtered out, and only those having a mean raw count > 1 in more than 50% of the samples were included. Differentially expressed genes were detected by DESeq2 with default settings. Significant DEGs were defined at a Benjamini–Hochberg (BH) adjusted false-discovery rate (FDR) < 0.05. For heatmap visualization, genes were normalized with the variance stabilizing transformation implemented in DESeq2, followed by calculating a z-score for each gene.

Pathway analyses of DEGs

For the pathway analyses, DEGs at an unadjusted P value < 0.01 were used. Pathway enrichment analyses were conducted using Enrichr 37 by intersecting each signature with pathways or gene sets from KEGG 38, gene ontology biological process, cellular component, and molecular function 39, and WikiPathways 40. Pathways at FDR < 0.05 were considered significant. Unless otherwise specified, R 3.5.2 was used for the analyses described in this RNA sequencing section.

Additional statistical methods

Data are presented as means ± SEM. For analytic comparisons of body weight, total food intake, and chow intake, groups were compared using repeated-measures ANOVA in Prism software (GraphPad Inc., version 8.0). Taxonomic comparisons from 16S rRNA sequencing analysis were analyzed by analysis of composition of microbiomes (ANCOM). When significant differences were detected, Sidak's post-hoc test for multiple comparisons was used.
The area under the curve for the IP GTT was also calculated using Prism. All other statistical analyses were performed using Student's two-tailed unpaired t tests in Excel (Microsoft Inc., version 15.26). Normality was confirmed prior to the use of parametric testing. For all analyses, statistical significance was set at P < 0.05. A predetermined exclusion criterion was based on the Grubbs outlier test (Prism, GraphPad Inc.) using alpha = 0.05.

Results

Early life sugar consumption impairs hippocampal-dependent memory function

Results from the NOIC task, which measures hippocampal-dependent episodic contextual memory function 30, reveal that while there were no differences in total exploration time of the combined objects on days 1 or 3 of the task (Fig. 1A, B), animals fed sugar solutions in early life beginning at PN 28 had a reduced capacity to discriminate an object that was novel to a specific context when tested during adulthood (PN 60), indicating impaired hippocampal function (Fig. 1C). Conversely, animals fed sugar solutions in early life performed similarly to the control group when tested in the novel object recognition (NOR) task (Fig. 1D), which tests object recognition memory independent of context. Notably, when performed using the current methods with a short duration between the familiarization phase and the test phase, NOR is not hippocampal-dependent but instead depends primarily on the perirhinal cortex 30,41,42,43. These data suggest that early life dietary sugar consumption impairs performance in hippocampal-dependent, contextual-based recognition memory without affecting performance in perirhinal cortex-dependent recognition memory independent of context 23.

Fig. 1: Early life sugar consumption negatively impacts hippocampal-dependent memory function. A, B Early life sugar consumption had no effect on total exploration time on day 1 (familiarization) or day 3 (test day) of the Novel Object in Context (NOIC) task. C The discrimination index was significantly reduced by early life sugar consumption, indicating impaired hippocampal function (P < 0.05, n = 10, 11; two-tailed, type 2 Student's t test). D There were no differences in exploration index in the Novel Object Recognition (NOR) task (n = 6; two-tailed, type 2 Student's t test). E, F There were no differences in time spent in the open arm or the number of entries into the open arm in the Zero Maze task for anxiety-like behavior (n = 10; two-tailed, type 2 Student's t test). G, H There were no differences in distance traveled or time spent in the center arena in the Open Field task (n = 8; two-tailed, type 2 Student's t test). I There were no differences in body fat % during adulthood between rats fed early life sugar and controls (n = 10, 11; two-tailed, type 2 Student's t test). J, K Body weights and total energy intake did not differ between the groups (n = 10, 11; two-way repeated-measures ANOVA), despite (L) increased kcal consumption from sugar-sweetened beverages in the sugar group. CTL = control, SUG = sugar, PN = post-natal day; data shown as mean ± SEM.

Elevated anxiety-like behavior and altered general activity levels may influence novelty exploration independent of memory effects and may therefore confound the interpretation of behavioral results.
Thus, we next tested whether early life sugar consumption affects anxiety-like behavior using two different tasks designed to measure anxiety-like behavior in the rat: the elevated zero maze and the OF task, the latter of which also assesses levels of general activity 44. Early life sugar consumption had no effect on time spent in the open area or on the number of open-area entries in the zero maze (Fig. 1E, F). Similarly, early life sugar had no effect on distance traveled or time spent in the center zone in the OF task (Fig. 1G, H). Together these data suggest that habitual early life sugar consumption did not increase anxiety-like behavior or general activity levels in the rats.

Early life sugar consumption impairs glucose tolerance without affecting total caloric intake, body weight, or adiposity

Given that excessive sugar consumption is associated with weight gain and metabolic deficits 45, we tested whether access to a sugar solution during the adolescent phase of development would affect food intake, body weight gain, adiposity, and glucose tolerance in the rat. Early life sugar consumption had no effect on body composition during adulthood (Fig. 1I, Fig. S2A, B). Early life sugar consumption also had no effect on body weight or total kcal intake (Fig. 1J, K), in agreement with previous findings 23,26,46. Animals steadily increased their intake of the 11% sugar solution throughout the study (Fig. 1L) but compensated for the calories consumed in the sugar solution by reducing their intake of dietary chow (Fig. S2C). However, animals fed sugar solutions during adolescence showed impaired peripheral glucose metabolism in the IP GTT (Fig. S2D).

Gut microbiota is impacted by early life sugar consumption

Principal component analysis of 16S rRNA gene sequencing data from fecal samples revealed a separation between the fecal microbiota of rats fed early life sugar and controls (Fig. 2A). Results from LEfSe analysis identified differentially abundant bacterial taxa in fecal samples that were elevated by sugar consumption. These include the family Clostridiaceae and the genus 02d06 within Clostridiaceae, the family Mogibacteriaceae, the family Enterobacteriaceae, the order Enterobacteriales, the class Gammaproteobacteria, and the genus Parabacteroides within the family Porphyromonadaceae (Fig. 2B, C). In addition to an elevated relative abundance (%) of the genus Parabacteroides in animals fed early life sugar (Fig. 2D), log-transformed counts of Parabacteroides negatively correlated with performance scores in the NOIC memory task (Fig. 2E). Of the additional bacterial populations significantly affected by sugar treatment, regression analyses did not identify any other genera as significantly correlated with NOIC memory performance. Within Parabacteroides, levels of two operational taxonomic units (OTUs) that were elevated by sugar negatively correlated with performance in the NOIC task, identified as taxonomically related to P. johnsonii and P. distasonis (Fig. 2F, G). The significant negative correlation between NOIC performance and Parabacteroides was also present within each of the diet groups alone, but when separated out by diet group only P. distasonis showed a significant negative correlation for each diet group (P < 0.05), whereas P. johnsonii showed a nonsignificant trend in both the control and sugar groups (P = 0.06 and P = 0.08, respectively; Fig. S3A–C).
The abundance of other bacterial populations that were affected by sugar consumption was not significantly related to memory task performance.

Fig. 2: Effect of adolescent sugar consumption on the gut microbiome in rats. A Principal component analysis showing separation between fecal microbiota of rats fed early life sugar or controls (n = 11, 10; dark triangles = sugar, open circles = control). B Results from LEfSe analysis showing Linear Discriminant Analysis (LDA) scores for microbiome analysis of fecal samples of rats fed early life sugar or controls. C A cladogram representing the results from the LEfSe analysis, with class as the outermost taxonomic level and species at the innermost level. Taxa in red are elevated in the sugar group. D Relative % abundance of fecal Parabacteroides was significantly elevated in rats fed early life sugar (P < 0.05; n = 11, 10; two-tailed, type 2 Student's t test). E Linear regression of log-normalized fecal Parabacteroides counts against the shift from baseline performance scores in the novel object in context (NOIC) task across all groups tested (n = 21). F, G Linear regression of the most abundant fecal Parabacteroides species against the shift from baseline performance scores in NOIC across all groups tested (n = 21). *P < 0.05; data are shown as mean ± SEM.

There was a similar separation between groups in bacteria analyzed from cecal samples (Fig. S4A). LEfSe results from cecal samples show elevated Bacilli, Actinobacteria, Erysipelotrichia, and Gammaproteobacteria in rats fed early life sugar, and elevated Clostridia in the controls (Fig. S4B, C). Abundances at the different taxonomic levels in fecal and cecal samples are shown in Figs. S5 and S6. Regression analyses did not identify these altered cecal bacterial populations as significantly correlated with NOIC memory performance.

Early life Parabacteroides enrichment impairs memory function

To determine whether the neurocognitive outcomes of early life sugar consumption could be attributable to elevated levels of Parabacteroides in the gut, we experimentally enriched the gut microbiota of naïve juvenile rats with two Parabacteroides species that exhibited high 16S rRNA sequencing alignment with OTUs that were increased by sugar consumption and were negatively correlated with behavioral outcomes in rats fed early life sugar. P. johnsonii and P. distasonis were cultured individually under anaerobic conditions and transferred to a group of antibiotic-treated young rats in a 1:1 ratio via oral gavage, using the experimental design described in Methods, outlined in Fig. 3A, and adapted from ref. 28. To confirm Parabacteroides enrichment, 16S rRNA sequencing was performed on rat fecal samples from the SAL–SAL, ABX-SAL, and ABX-PARA groups. Alpha diversity was analyzed using observed OTUs (Fig. 3B): both ABX-SAL and ABX-PARA fecal samples had significantly reduced alpha diversity compared with SAL–SAL fecal samples, indicating that antibiotic treatment reduces microbiome alpha diversity. Further, either treatment with antibiotics alone or antibiotics followed by Parabacteroides significantly altered microbiota composition relative to the SAL–SAL group (Fig. 3C). Taxonomic comparisons from the 16S rRNA sequencing analysis were analyzed by analysis of composition of microbiomes (ANCOM). Differential abundance at the species level (Fig. 3D) was tested across samples in a hypothesis-free manner.
Significant taxa at the species level were identified using FDR-corrected P values to calculate W in ANCOM. Comparing all groups resulted in the highest W value, 144, for the Parabacteroides genus, which was enriched in ABX-PARA fecal samples after bacterial gavage, with an average relative abundance of 55.65% (Fig. 3E). This confirms successful Parabacteroides enrichment in ABX-PARA rats post-gavage when compared with either ABX-SAL (average relative abundance 5.47%) or SAL–SAL rats (average relative abundance 0.26%).

Fig. 3: Intestinal Parabacteroides is enriched by antibiotic treatment and oral gavage of P. distasonis and P. johnsonii. A Schematic showing the timeline for the experimental design of the Parabacteroides transfer experiment. B Alpha diversity based on 16S rRNA gene profiling of fecal matter (n = 7–8), represented by observed operational taxonomic units (OTUs) for a given number of sample sequences. C Principal coordinates analysis of weighted UniFrac distance based on 16S rRNA gene profiling of feces for SAL–SAL, ABX-SAL, and ABX-PARA enriched rats (n = 7–8). D Average taxonomic distributions of bacteria from 16S rRNA gene sequencing data of feces for SAL–SAL, ABX-SAL, and ABX-PARA enriched animals (n = 7–8). E Relative abundances of Parabacteroides in fecal microbiota for SAL–SAL, ABX-SAL, and ABX-PARA enriched animals (n = 7–8) (ANCOM). PN, post-natal day; IP GTT, intraperitoneal glucose tolerance test. Data are presented as mean ± S.E.M. *P < 0.05, **P < 0.01, ***P < 0.001; n.s., not statistically significant. SAL–SAL, rats treated with saline; ABX-SAL, rats treated with antibiotics followed by sterile saline gavage; ABX-PARA, rats treated with antibiotics followed by a 1:1 gavage of Parabacteroides distasonis and Parabacteroides johnsonii.

All rats treated with antibiotics showed a reduction in food intake and body weight during the initial stages of antibiotic treatment; however, there were no differences in body weight between the two groups of antibiotic-treated animals by PN 50, the time of behavioral testing (Fig. S7A–C). Similar to a recent report 47, Parabacteroides enrichment in the present study impacted body weight at later time points. Animals that received P. johnsonii and P. distasonis treatment showed reduced body weight 40 days after the transfer, with significantly lower lean mass (Fig. S7D–F). There were no differences in percent body fat between groups, nor were there significant group differences in glucose metabolism in the IP GTT (Fig. S7G). Importantly, body weights in the ABX-PARA group did not significantly differ from the ABX-SAL control group at the time of behavioral testing. Results from the hippocampal-dependent NOIC memory task showed that, while there were no differences in total exploration time of the combined objects on days 1 or 3 of the task, indicating similar exploratory behavior, animals enriched with Parabacteroides showed a significantly reduced discrimination index in the NOIC task compared with either control group (Fig. 4A–C), indicating impaired hippocampal-dependent memory function. When tested in the perirhinal cortex-dependent NOR task 30, animals enriched with Parabacteroides showed impaired object recognition memory compared with the antibiotic-treated control group, as indicated by a reduced novel object exploration index (Fig. 4D).
These findings show that, unlike in sugar-fed animals, Parabacteroides enrichment impaired perirhinal cortex-dependent memory processes in addition to hippocampal-dependent memory.

Fig. 4: Early life enrichment with Parabacteroides negatively impacts neurocognitive function. A, B Early life enrichment with a 1:1 ratio of P. johnsonii and P. distasonis had no effect on total exploration time in the Novel Object in Context (NOIC) task. C The discrimination index was significantly reduced by enrichment with P. johnsonii and P. distasonis, indicating impaired hippocampal function (n = 7, 8; F(2, 19) = 4.92; P < 0.05, one-way ANOVA with Tukey's multiple comparison test). D There was a significant reduction in the exploration index in the Novel Object Recognition (NOR) task, indicating impaired perirhinal cortex function (n = 7, 8; F(2, 19) = 3.61; P < 0.05, one-way ANOVA with Tukey's multiple comparison test). E, F There were no differences in time spent in, or number of entries into, the open arm by animals with P. johnsonii and P. distasonis enrichment in the Zero Maze task for anxiety-like behavior (n = 7, 8; one-way ANOVA). G, H There were no differences in distance traveled or time spent in the center arena in the Open Field task (n = 7, 8; one-way ANOVA). SAL–SAL, saline–saline control; ABX-SAL, antibiotics–saline control; ABX-PARA, antibiotics–P. johnsonii and P. distasonis enriched; PN, post-natal day; data shown as mean ± SEM; *P < 0.05.

Results from the zero maze showed no differences in time spent in the open arms nor in the number of open-arm entries for the Parabacteroides-enriched rats relative to controls (Fig. 4E, F), indicating that the enrichment did not affect anxiety-like behavior. Similarly, there were no differences in distance traveled or time spent in the center arena in the OF test, which measures both anxiety-like behavior and general activity in rodents (Fig. 4G, H). Together these data suggest that Parabacteroides treatment negatively impacted both hippocampal-dependent and perirhinal cortex-dependent memory function without significantly affecting general activity or anxiety-like behavior.

Early life sugar consumption and Parabacteroides enrichment alter hippocampal gene expression profiles

To further investigate how sugar and Parabacteroides affect cognitive behaviors, we conducted transcriptome analysis of hippocampus samples. Figure S1A, C shows the results of principal component analysis revealing moderate separation based on RNA sequencing data from the dorsal hippocampus of rats fed sugar in early life compared with controls. Gene pathway enrichment analyses of the RNA sequencing data revealed multiple pathways significantly affected by early life sugar consumption, including four pathways involved in neurotransmitter synaptic signaling: the dopaminergic, glutamatergic, cholinergic, and serotonergic signaling pathways. In addition, several gene pathways that also varied by sugar were those involved in kinase-mediated intracellular signaling: the cGMP-PKG, RAS, cAMP, and MAPK signaling pathways (Fig. 5A, Table S1).

Fig. 5: Effect of early life sugar or targeted Parabacteroides enrichment on hippocampal gene expression. A Pathway analyses for differentially expressed genes (DEGs) at a P value < 0.01 in hippocampal tissue punches from rats fed early life sugar compared with controls. Upregulation by sugar is shown in red and downregulation by sugar in blue.
B A heatmap depicting DEGs that survived the Benjamini–Hochberg-corrected FDR of P < 0.05 in rats fed early life sugar compared with controls. Warmer colors (red) signify an increase and cooler colors (blue) a reduction in gene expression by treatment (CTL, control; SUG, early life sugar; n = 7/group). C A heatmap depicting DEGs that survived the Benjamini–Hochberg-corrected FDR of P < 0.05 in rats with early life Parabacteroides enrichment compared with the combined control groups. Warmer colors (red) signify an increase and cooler colors (blue) a reduction in gene expression by treatment (n = 7, 14). D Pathway analyses for differentially expressed genes (DEGs) at a P value < 0.01 in rats enriched with Parabacteroides compared with the combined controls. Upregulation by Parabacteroides transfer is shown in red and downregulation in blue. The dotted line indicates ±0.25 log2 fold change.

Analyses of individual genes across the entire transcriptome using a stringent FDR criterion further identified 21 genes that were differentially expressed in rats fed early life sugar compared with controls, with 11 genes elevated and 10 genes decreased in rats fed sugar relative to controls (Fig. 5B). Among the genes impacted, several that regulate cell survival, migration, differentiation, and DNA repair were elevated by early life sugar access, including Faap100, which encodes an FA core complex member of the DNA damage response pathway 48, and Eepd1, which transcribes an endonuclease involved in repairing DNA replication forks stalled by DNA damage 49. Other genes associated with endoplasmic reticulum stress and synaptogenesis were also significantly increased by sugar consumption, including Klf9, Dgkh, Neurod2, Ppl, and Kirrel1 50,51,52,53. Several genes were reduced by dietary sugar, including Tns2, which encodes tensin 2, important for cell migration 54, RelA, which encodes an NF-κB complex protein that regulates activity-dependent neuronal function and synaptic plasticity 55, and Grm8, the gene for the metabotropic glutamate receptor 8 (mGluR8). Notably, reduced expression of the mGluR8 receptor may contribute to the impaired neurocognitive functioning in animals fed sugar, as mGluR8 knockout mice show impaired hippocampal-dependent learning and memory 56.

Figure S1A, B, D shows the results of the principal component analysis of dorsal hippocampus RNA sequencing data, indicating a moderate separation between rats enriched with Parabacteroides and controls. Gene pathway analyses revealed that early life Parabacteroides treatment, similar to effects associated with sugar consumption, significantly altered the genetic signature of dopaminergic synaptic signaling pathways, though differentially expressed genes were commonly affected in opposite directions between the two experimental conditions (Fig. S8). Parabacteroides treatment also impacted gene pathways associated with metabolic signaling. Specifically, pathways regulating fatty acid oxidation, rRNA metabolic processes, the mitochondrial inner membrane, and valine, leucine, and isoleucine degradation were significantly affected by Parabacteroides enrichment. Other affected pathways were those involved in neurodegenerative disorders, including Alzheimer's disease and Parkinson's disease, though most of the genes affected in these pathways were mitochondrial genes (Fig. 5D, Table S2).
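The DEG and pathway results above follow the workflow given in Methods (DESeq2 with default settings, a mean-count filter, BH FDR < 0.05, variance-stabilized z-scores for heatmaps, and Enrichr for pathways). A condensed R sketch of that workflow is shown below; counts_mat and groups are placeholder inputs, the filter line is one reading of the Methods wording, and the Enrichr library names are illustrative rather than those used by the authors.

# Condensed sketch of the DEG workflow from Methods; inputs are placeholders.
library(DESeq2)

keep <- rowMeans(counts_mat > 1) > 0.5  # one reading of the filter:
                                        # count > 1 in >50% of samples
dds  <- DESeqDataSetFromMatrix(countData = counts_mat[keep, ],
                               colData = data.frame(group = factor(groups)),
                               design = ~ group)
dds  <- DESeq(dds)                      # default settings, as in Methods
res  <- results(dds)                    # BH-adjusted p values in res$padj
sig  <- subset(res, padj < 0.05)        # significant DEGs (cf. Fig. 5B, C)

# Variance-stabilized z-scores for heatmap display
z_mat <- t(scale(t(assay(vst(dds)))))

# Pathway enrichment on DEGs at unadjusted P < 0.01 (cf. Fig. 5A, D):
# library(enrichR)
# enriched <- enrichr(rownames(subset(res, pvalue < 0.01)),
#                     c("KEGG_2019_Mouse", "GO_Biological_Process_2018"))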
At the level of individual genes, dorsal hippocampal RNA sequencing data revealed that 15 genes were differentially expressed in rats enriched with Parabacteroides compared with controls, with 13 genes elevated and two genes decreased in the Parabacteroides group compared with controls (Fig. 5C ). Consistent with results from gene pathway analyses, several individual genes involved in metabolic processes were elevated by Parabacteroides enrichment, such as Hmgcs2 , which is a mitochondrial regulator of ketogenesis and provides energy to the brain under metabolically taxing conditions or when glucose availability is low 57 , and Cox6b1 , a mitochondrial regulator of energy metabolism that improves hippocampal cellular viability following ischemia/reperfusion injury 58 . Parabacteroides enrichment was also associated with increased expression of Slc27A1 and Mfrp , which are each critical for the transport of fatty acids into the brain across capillary endothelial cells 59 , 60 . Discussion Dietary factors are a key source of gut microbiome diversity 28 , 46 , 61 , 62 , 63 and emerging evidence indicates that diet-induced alterations in the gut microbiota may be linked with altered neurocognitive development 28 , 63 , 64 , 65 . Our results identify species within the genus Parabacteroides that are elevated by habitual early life consumption of dietary sugar and are negatively associated with hippocampal-dependent memory performance. Further, targeted microbiota enrichment of Parabacteroides perturbed both hippocampal- and perirhinal cortex-dependent memory performance. These findings are consistent with previous literature showing that early life consumption of Western dietary factors impairs neurocognitive outcomes 10 , 11 , and further suggest that altered gut bacteria due to excessive early life sugar consumption may functionally link dietary patterns with cognitive impairment. Our previous data show that rats are not susceptible to habitual sugar consumption-induced learning and memory impairments when 11% sugar solutions are consumed ad libitum during adulthood, in contrast to effects observed in the present and a previous study in which the sugar was consumed during early life development 23 . It is possible that habitual sugar consumption differentially affects the gut microbiome when consumed during adolescence vs. adulthood. However, a recent report showed that adult consumption of a high fructose diet (35% kcal from fructose) promotes gut microbial “dysbiosis”, neuroinflammation, and cell death in the hippocampus, yet without impacting cognitive function 66 , suggesting that neurocognitive function may be more susceptible to gut microbiota influences during early life than during adulthood. Indeed, several reports have identified early life critical periods for microbiota influences on behavioral and neurochemical endpoints in germ-free mice 5 , 75 . However, the age-specific profile of sugar-associated microbiome dysbiosis and neurocognitive impairments remains to be determined. Given that the adolescent rats consuming SSBs compensated for these calories by consuming less chow, it is possible that reduced nutrient (e.g., dietary protein) consumption may have contributed to the deficits in hippocampal function. However, we think this is unlikely, as adolescent SSB access did not produce any substantial nutrient deficiency that would restrict growth, as evidenced by the similarities in body weight between the experimental and control groups.
Furthermore, prior studies that directly examined the effects of adolescent caloric (and thereby nutrient) restriction on learning and memory in rats found no differences in hippocampal-dependent memory function when rats were restricted by ~40% from PN 25 to PN 67 67 . Importantly, the parameters of that study closely match those of the present study, as our adolescent SSB access was given over a similar developmental period prior to behavioral testing and produced a ~40% reduction in total chow kcal consumption. Thus, it is likely that excessive sugar consumption, and not nutrient deficiency, led to memory deficits, although future work is needed to more carefully examine these variables independently. While our study reveals a strong negative correlation between levels of fecal Parabacteroides and performance in the hippocampal-dependent contextual episodic memory NOIC task, as well as impaired NOIC performance in rats given access to a sugar solution during adolescence, sugar intake did not produce impairments in the perirhinal cortex-dependent NOR memory task. This is consistent with our previous report in which rats given access to an 11% sugar solution during adolescence were impaired in hippocampal-dependent spatial memory (Barnes maze procedure), yet were not impaired in a nonspatial task of comparable difficulty that was not hippocampal-dependent 23 . The present results, revealing that early life sugar consumption negatively impacts hippocampal-dependent contextual-based object recognition memory (NOIC) without influencing NOR memory performance, are also consistent with previous reports using a cafeteria diet high in both fat content and sugar 68 , 69 . On the other hand, enrichment of P. johnsonii and P. distasonis in the present study impaired memory performance in both tasks, suggesting a broader impact on neurocognitive functioning with this targeted bacterial enrichment approach. Gene pathway analyses from dorsal hippocampus RNA sequencing identified multiple neurobiological pathways that may functionally connect gut dysbiosis with memory impairment. Early life sugar consumption was associated with alterations in several neurotransmitter synaptic signaling pathways (e.g., glutamatergic and cholinergic) and intracellular signaling targets (e.g., cAMP and MAPK). A different profile was observed in Parabacteroides -enriched animals, where gene pathways involved with metabolic function (e.g., fatty acid oxidation and branched-chain amino acid degradation) and neurodegenerative disease (e.g., Alzheimer’s disease) were altered relative to controls. Given that sugar has effects on bacterial populations in addition to Parabacteroides , and that sugar consumption and Parabacteroides treatment differentially influenced peripheral glucose metabolism and body weight, these transcriptome differences in the hippocampus are not surprising. However, gene clusters involved with dopaminergic synaptic signaling were significantly influenced by both early life sugar consumption and Parabacteroides treatment, thus identifying a common pathway through which both diet-induced and gut bacterial infusion-based elevations in Parabacteroides may influence neurocognitive development. Though differentially expressed genes were commonly affected in opposite directions in Parabacteroides -enriched animals compared with early life sugar-treated animals, it is possible that perturbations to the dopamine system play a role in the observed cognitive dysfunction.
For example, while dopamine signaling in the hippocampus has not traditionally been investigated for mediating memory processes, several recent reports have identified a role for dopamine inputs from the locus coeruleus in regulating hippocampal-dependent memory and neuronal activity 70 , 71 . Interestingly, endogenous dopamine signaling in the hippocampus has recently been linked with regulating food intake and food-associated contextual learning 72 , suggesting that dietary effects on gut microbiota may also impact feeding behavior and energy balance-relevant cognitive processes. It is important to note that comparisons between the gene expression analyses in the Parabacteroides enrichment and sugar consumption experiments should be made cautiously, given that there were slight differences in the timing of the hippocampus tissue harvest between the two experiments (PN 65 for sugar consumption vs. PN 83 for Parabacteroides enrichment). Further, future work is needed to determine whether the differences in gene expression observed in each experiment translate to differential expression at the protein level. It is also worth emphasizing that the levels of Parabacteroides achieved in our enrichment study were substantially higher than in the dietary sugar study, and thus it is not surprising that Parabacteroides enrichment would have a different impact on host physiology, hippocampal gene expression, and neurocognition compared to the Parabacteroides elevations associated with SSB consumption. Regardless of these caveats in comparing the two models, our data extend the field by highlighting a specific bacterial population that (1) is capable of negatively impacting neurocognitive development when experimentally enriched, and (2) is elevated by early life consumption of dietary sugar, with levels correlating negatively with hippocampal-dependent memory performance. Many of the genes that were differentially upregulated in the hippocampus by Parabacteroides enrichment were involved in fat metabolism and transport. Thus, it is possible that Parabacteroides conferred an adaptation in the brain, shifting fuel preference away from carbohydrate toward lipid-derived ketones. Consistent with this framework, Parabacteroides were previously shown to be upregulated by a ketogenic diet in which carbohydrate consumption is drastically depleted and fat is used as a primary fuel source. Furthermore, enrichment of Parabacteroides merdae together with Akkermansia muciniphila was protective against seizures in mice 28 . It is possible that P. distasonis reduces glucose uptake from the gut, enhances glucose clearing from the blood, and/or alters nutrient utilization in general, an idea further supported by the recent finding that P. distasonis is associated with reduced diet- and genetic-induced obesity and hyperglycemia in mice 47 . The present findings open several opportunities for further mechanistic investigation. For example, how do diet-induced alterations in gut bacteria impact the brain? Several possible mechanisms have been investigated and proposed, such as impaired gut barrier function and endotoxemia 63 , 73 , perhaps related to altered short-chain fatty acid production 66 , 74 . Moreover, it is well known that the liver is negatively impacted by excessive fructose consumption 75 , and emerging evidence highlights a gut microbiome–liver axis with crosstalk via bile acids and cytokines 76 .
It is possible that dietary sugar-induced microbiota changes alter the hepatic–gut axis, thus contributing to altered cognitive function. Indeed, an altered bile acid profile due to gut microbiota-produced secondary bile acid metabolites is associated with cognitive dysfunction in Alzheimer’s disease in humans 77 . Taken together, our collective results provide insight into the neurobiological mechanisms that link unhealthy early life dietary patterns with gut microbiota alterations and neurocognitive impairments. Currently, probiotics, live microorganisms intended to confer health benefits, are not regulated with the same rigor as pharmaceuticals but instead are sold as dietary supplements. Our findings suggest that gut enrichment with certain species of Parabacteroides is potentially harmful to the development of neurocognitive mnemonic function. These results highlight the importance of conducting rigorous basic science analyses on the relationship between diet, microorganisms, brain, and behavior prior to widespread recommendations of bacterial microbiome interventions for humans. Data availability All data are available upon request. The 16S rRNA microbiome sequencing data are available through Qiita (ID 13651 and 11255) and the RNA sequencing data are available through the NCBI Gene Expression Omnibus, GSE150091.
Sugar practically screams from the shelves of your grocery store, especially from those products marketed to kids. Children are the highest consumers of added sugar, even as high-sugar diets have been linked to adverse health effects such as obesity, heart disease, and even impaired memory function. However, less is known about how high sugar consumption during childhood affects the development of the brain, specifically a region known to be critically important for learning and memory called the hippocampus. New research led by a University of Georgia faculty member, in collaboration with a University of Southern California research group, has shown in a rodent model that daily consumption of sugar-sweetened beverages during adolescence impairs performance on a learning and memory task during adulthood. The group further showed that changes in the bacteria in the gut may be the key to the sugar-induced memory impairment. Supporting this possibility, they found that similar memory deficits were observed even when the bacteria, called Parabacteroides, were experimentally enriched in the guts of animals that had never consumed sugar. "Early life sugar increased Parabacteroides levels, and the higher the levels of Parabacteroides, the worse the animals did in the task," said Emily Noble, assistant professor in the UGA College of Family and Consumer Sciences who served as first author on the paper. "We found that the bacteria alone was sufficient to impair memory in the same way as sugar, but it also impaired other types of memory functions as well." Guidelines recommend limiting sugar The Dietary Guidelines for Americans, a joint publication of the U.S. Departments of Agriculture and of Health and Human Services, recommends limiting added sugars to less than 10 percent of calories per day. Data from the Centers for Disease Control and Prevention show that Americans between the ages of 9 and 18 exceed that recommendation, with the bulk of those calories coming from sugar-sweetened beverages. Considering the role the hippocampus plays in a variety of cognitive functions, and the fact that the area is still developing into late adolescence, researchers sought to understand more about its vulnerability to a high-sugar diet via the gut microbiota. Juvenile rats were given their normal chow and an 11% sugar solution, which is comparable to commercially available sugar-sweetened beverages. Researchers then had the rats perform a hippocampus-dependent memory task designed to measure episodic contextual memory, or remembering the context where they had seen a familiar object before. "We found that rats that consumed sugar in early life had an impaired capacity to discriminate that an object was novel to a specific context, a task the rats that were not given sugar were able to do," Noble said. A second memory task measured basic recognition memory, a hippocampal-independent memory function that involves the animals' ability to recognize something they had seen previously. In this task, sugar had no effect on the animals' recognition memory. "Early life sugar consumption seems to selectively impair their hippocampal learning and memory," Noble said. Additional analyses determined that high sugar consumption led to elevated levels of Parabacteroides in the gut microbiome, the community of more than 100 trillion microorganisms in the gastrointestinal tract that play a role in human health and disease.
To better identify the mechanism by which the bacteria impacted memory and learning, researchers experimentally increased levels of Parabacteroides in the microbiome of rats that had never consumed sugar. Those animals showed impairments in both hippocampal-dependent and hippocampal-independent memory tasks. "(The bacteria) induced some cognitive deficits on its own," Noble said. Noble said future research is needed to better identify specific pathways by which this gut-brain signaling operates. "The question now is how do these populations of bacteria in the gut alter the development of the brain?" Noble said. "Identifying how the bacteria in the gut are impacting brain development will tell us about what sort of internal environment the brain needs in order to grow in a healthy way."
10.1038/s41398-021-01309-7
Physics
Researchers take a step toward quantum mechanical analysis of plant metabolism
Jochen Braumüller et al. Analog quantum simulation of the Rabi model in the ultra-strong coupling regime, Nature Communications (2017). DOI: 10.1038/s41467-017-00894-w Journal information: Nature Communications
http://dx.doi.org/10.1038/s41467-017-00894-w
https://phys.org/news/2017-10-quantum-mechanical-analysis-metabolism.html
Abstract The quantum Rabi model describes the fundamental mechanism of light-matter interaction. It consists of a two-level atom or qubit coupled to a quantized harmonic mode via a transversal interaction. In the weak coupling regime, it reduces to the well-known Jaynes–Cummings model by applying a rotating wave approximation. The rotating wave approximation breaks down in the ultra-strong coupling regime, where the effective coupling strength g is comparable to the energy ω of the bosonic mode, and remarkable features in the system dynamics are revealed. Here we demonstrate an analog quantum simulation of an effective quantum Rabi model in the ultra-strong coupling regime, achieving a relative coupling ratio of g / ω ~ 0.6. The quantum hardware of the simulator is a superconducting circuit embedded in a cQED setup. We observe fast and periodic quantum state collapses and revivals of the initial qubit state, the most distinct signature of the synthesized model. Introduction Finding solutions to many quantum problems is a very challenging task 1 . The reason is the exponentially large number of degrees of freedom in a quantum system, requiring computational power and memory that easily exceed the capabilities of present classical computers. A yet-to-be-demonstrated universal digital quantum computer of sufficient size would be capable of efficiently solving most quantum problems 1 , 2 . A more feasible approach to achieve a computational speedup in the near future is quantum simulation 1 , 2 , 3 . In the framework of analog quantum simulation, a tailored and well-controllable artificial quantum system is mapped onto a quantum problem of interest in order to mimic its dynamics. Since the same equations of motion hold for both systems, the solution of the underlying quantum problem is inferred by observing the time evolution of the artificially built model system, while making use of its intrinsic quantumness. This scheme may be applied to the simulation of complex quantum problems, in the spirit originally proposed by Feynman 1 . Quantum simulation has been performed on various experimental platforms. Examples of analog quantum simulation are the study of fermionic transport 4 and magnetism 5 with cold atoms and the simulation of a quantum magnet and the Dirac equation with trapped ions 6 , 7 . The exploration of non-equilibrium physics was proposed with an on-chip quantum simulator based on superconducting circuits 8 , 9 . Digital simulation schemes with superconducting devices were demonstrated for fermionic models 10 and spin systems 11 . The quantum Rabi model in quantum optics describes the interaction between a two-level atom and a single quantized harmonic oscillator mode 12 , 13 . In the weak coupling regime, which may still be strong in the sense of quantum electrodynamics (QED), a rotating wave approximation (RWA) can be applied and the Rabi model reduces to the Jaynes–Cummings model 14 , which captures most relevant scenarios in cavity and circuit QED. In the ultra-strong coupling (USC) and deep strong coupling regimes, where the coupling strength is comparable to the mode energies 15 , the counter-rotating terms in the interaction Hamiltonian can no longer be neglected and the RWA breaks down. As a consequence, the total excitation number in the quantum Rabi model is not conserved.
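To make this explicit (standard quantum-optics algebra, not spelled out in the text above): the transversal coupling term can be split as $$g{\hat \sigma _x}\left( {{{\hat b}^\dag } + \hat b} \right) = g\left( {{{\hat \sigma }_ + }\hat b + {{\hat \sigma }_ - }{{\hat b}^\dag }} \right) + g\left( {{{\hat \sigma }_ + }{{\hat b}^\dag } + {{\hat \sigma }_ - }\hat b} \right).$$ In a frame rotating at the (near-resonant) qubit and mode frequencies, the first bracket is slowly varying, while the second, counter-rotating bracket oscillates at roughly 2 ω and averages out when \(g \ll \omega\) . Keeping only the first bracket yields the excitation-number-conserving Jaynes–Cummings interaction; in the USC regime both brackets must be retained.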
Apart from one recent approach yielding an exact solution 16 , no closed-form analytical solution of the quantum Rabi model exists, owing to the lack of a second conserved quantity, which renders the model non-integrable. The quantum Rabi model, in particular in the USC regime and beyond, exhibits non-classical features, and rising interest in it is driven by rapid advances in experimental capabilities 15 , 17 , 18 , 19 . The specific spectral features of the USC regime and the consequent breakdown of the RWA were previously observed with a superconducting circuit by implementing an increased physical coupling strength 20 , 21 . A similar approach involving a flux qubit coupled to a single-mode resonator allowed access to the deep strong coupling regime in a closed system 22 . The USC regime had previously been reached by dynamically modulating the flux bias of a superconducting qubit, reaching a coupling strength of about 0.1 of the effective resonator frequency 23 . In our approach, we engineer an effective quantum Rabi Hamiltonian with an analog quantum simulation scheme based on the application of microwave Rabi drive tones. By decreasing the subsystem energies, the USC condition is satisfied in the effective rotating frame, allowing the distinct model dynamics to be observed. The scheme may be a route to efficiently generate non-classical cavity states 24 , 25 , 26 and may be extended to explore relevant physical models such as the Dirac equation in (1 + 1) dimensions. Its characteristic dynamics is expected to display a Zitterbewegung in the spatial quadrature of the bosonic mode 27 . This dynamics has been observed with trapped ions 7 , likewise based on a Hamiltonian that is closely related to the USC Rabi model. It has been shown recently that a quantum phase transition, typically requiring a continuum of modes, can appear already in the quantum Rabi model under appropriate conditions 28 . The experimental challenge lies in meeting the coupling requirements of the model, which may be accomplished with the simulation scheme presented here. This can be a starting point to experimentally investigate critical phenomena in a small and well-controlled quantum system 29 . The dynamics of the quantum Rabi model under USC conditions was also studied very recently with a digital simulation approach 30 . In our experiment we simulate the quantum Rabi model in the USC regime, achieving a relative coupling strength of up to 0.6. Depending on our experimental parameters, we observe periodically recurring quantum state collapses and revivals in the qubit dynamics, a distinct signature of USC. The collapse-revival dynamics appears most clearly in the absence of the qubit energy term in the model, in agreement with the expectation from master equation simulations. In addition, we use our device to simulate the full quantum Rabi model and are able to observe the onset of an additional substructure in the qubit time evolution. With this proof-of-principle experiment we validate the experimental feasibility of the analog quantum simulation scheme and demonstrate the potential of superconducting circuits for the field of quantum simulation. Results Simulation scheme The quantum Rabi Hamiltonian reads $$\frac{{\hat H}}{\hbar } = \frac{\epsilon }{2}{\hat \sigma _z} + \omega {\hat b^\dag }\hat b + g{\hat \sigma _x}\left( {{{\hat b}^\dag } + \hat b} \right),$$ (1) with \(\epsilon\) the qubit energy splitting, ω the bosonic mode frequency and g the transversal coupling strength.
\({\hat \sigma _i}\) are Pauli matrices with \({\hat \sigma _z}\left| \rm{g} \right\rangle = -\left| \rm{g} \right\rangle\) and \({\hat \sigma _z}\left| \rm{e} \right\rangle = \left| \rm{e} \right\rangle\) , where \(\left| \rm{g} \right\rangle\) , \(\left| \rm{e} \right\rangle\) denote eigenstates of the computational qubit basis. \({\hat b^\dag }\) \(( {\hat b} )\) are creation (annihilation) operators in the Fock space of the bosonic mode. Both elements of the model are physically implemented in the experiment, with a small geometric coupling \(g \ll \epsilon ,\omega\) , such that the RWA applies and Eq. ( 1 ) takes the form of the Jaynes–Cummings Hamiltonian. In order to access the USC regime, we follow the scheme proposed in ref. 27 . It is based on the application of two transversal microwave Rabi drive tones coupling to the qubit. The USC condition is created in a synthesized effective Hamiltonian in the frame rotating with the dominant drive frequency. In this engineered Hamiltonian, the effective mode energies are set by the Rabi drive parameters. The Jaynes–Cummings Hamiltonian in the laboratory frame with both drives applied takes the form $$\begin{array}{*{20}{l}} {\frac{{{{\hat H}_{\rm{d}}}}}{\hbar }} \hfill & = \hfill & {\frac{\epsilon }{2}{{\hat \sigma }_z} + \omega {{\hat b}^\dag }\hat b + g\left( {{{\hat \sigma }_ - }{{\hat b}^\dag } + {{\hat \sigma }_ + }\hat b} \right)} \hfill \\ {} \hfill & {} \hfill & { + {{\hat \sigma }_x}{\eta _1}\,{\rm{cos}}\left( {{\omega _1}t + {\varphi _1}} \right) + {{\hat \sigma }_x}{\eta _2}{\rm{cos}}\left( {{\omega _2}t + {\varphi _2}} \right),\,} \hfill \\ \end{array}$$ (2) with η i the amplitudes and ω i the frequencies of drive i . φ i denotes the relative phase of drive i in the coordinate system of the qubit Bloch sphere in the laboratory frame. Within the RWA where \({\eta _i}{\rm{/}}{\omega _i} \ll 1\) , the φ i enter as relative phases of the transversal coupling operators \({e^{ - {i\varphi _i}}}{\hat \sigma _ + } + {\rm{h}}{\rm{.c}}{\rm{.}}\) , where \({\hat \sigma _ \pm } = 1{\rm{/}}2\left( {{{\hat \sigma }_x} \pm i{{\hat \sigma }_y}} \right)\) denote Pauli’s ladder operators. In the following, we set φ i = 0 to recover the familiar \({\hat \sigma _x}\) coupling without loss of generality. Going to the frame rotating with ω 1 and neglecting terms rotating with \({e^{ \pm 2i{\omega _1}t}}\) renders the first driving term time-independent, yielding $$\begin{array}{*{20}{l}}{\frac{{{{\hat H}_1}}}{\hbar }} \hfill & = \hfill & {\left( {\epsilon - {\omega _1}} \right)\frac{{{{\hat \sigma }_z}}}{2} + \left( {\omega - {\omega _1}} \right){{\hat b}^\dag }\hat b + g\left( {{{\hat \sigma }_ - }{{\hat b}^\dag } + {{\hat \sigma }_ + }\hat b} \right)} \hfill \\ {} \hfill & {} \hfill & { + \frac{{{\eta _1}}}{2}{{\hat \sigma }_x} + \frac{{{\eta _2}}}{2}\left( {{{\hat \sigma }_ + }{e^{i\left( {{\omega _1} - {\omega _2}} \right)t}} + {{\hat \sigma }_ - }{e^{ -i \left( {{\omega _1} - {\omega _2}} \right)t}}} \right).} \hfill \\ \end{array}$$ (3) The η 1 -term is now the significant term and we move into its interaction picture. 
Satisfying the requirement ω 1 − ω 2 = η 1 and applying an RWA yields the effective Hamiltonian in the ω 1 frame $$\frac{{{{\hat H}_{{\rm{eff}}}}}}{\hbar } = \frac{{{\eta _2}}}{2}\frac{{{{\hat \sigma }_z}}}{2} + {\omega _{{\rm{eff}}}}{\hat b^\dag }\hat b + \frac{g}{2}{\hat \sigma _x}\left( {{{\hat b}^\dag } + \hat b} \right).$$ (4) We define the effective bosonic mode energy ω eff ≡ ω − ω 1 , which is the parameter governing the system dynamics. Noting \({\eta _1} \gg {\eta _2}\) , which is a necessary condition for the above approximation to hold, the effective qubit frequency η 2 and the effective bosonic mode frequency ω eff can be chosen as experimental parameters in the simulation. The complete coupling term of the quantum Rabi Hamiltonian is recovered, valid in the USC regime and beyond, while the geometric coupling strength is only modified by a factor of two, resulting in g eff = g /2. It is thereby feasible to tune the system into a regime where the coupling strength is similar to or exceeds the subsystem energies. This is achieved by leaving the geometric coupling strength essentially unchanged in the synthesized Hamiltonian, while slowing down the system dynamics by effectively decreasing the mode frequencies to ≲ 8 MHz. Thermal excitations of these effective transitions can be neglected since they couple to the ~ 1 GHz thermal bath excitation frequency of the cryostat via their laboratory frame equivalent frequency of ω 1 /2 π ~ 6 GHz. We want to point out that the coupling regime is defined by g eff / ω eff , rather than involving the Rabi frequency η 1 , which does not enter the synthesized Hamiltonian. While the simulation scheme requires \(\left| {{\epsilon} - {\omega _1}} \right| \ll {\eta _1}\) , the qubit frequency does not enter the effective Hamiltonian. The time evolution of the qubit measured in the laboratory frame is subject to fast oscillations corresponding to the Rabi frequency η 1 . Accordingly, the qubit dynamics in the engineered quantum Rabi Hamiltonian Eq. ( 4 ), valid in the ω 1 frame, can be inferred from the envelope of the evolution in the laboratory frame. The derivation of Eq. ( 4 ) can be found in ref. 27 and is detailed in Supplementary Note 1 . A similar drive scheme based on a Rabi tone was previously used in an experiment to synthesize an effective Hamiltonian with a rotated qubit basis 31 . When the qubit and the bosonic mode are degenerate in the laboratory frame, a distinct collapse-revival signature appears in the dynamics of the quantum Rabi model under USC conditions. Quantum simulation device The physical implementation of the quantum simulator is based on a superconducting circuit embedded in a typical circuit QED setup 32 , 33 , see Fig. 1 . The atomic spin of the quantum Rabi model is mapped to a concentric transmon qubit 34 , 35 . It is operated at a ratio of Josephson energy to charging energy E J / E C = 50 and an anharmonicity α / h = ω 12 /2 π − ω 01 /2 π = −0.36 GHz ~ – E C / h = –0.31 GHz, close to resonance with the bosonic mode at 5.948 GHz. ω ij denote the transition frequencies between transmon levels i , j . The energy relaxation rate of the qubit at the operation point is measured to be 1/ T 1 = 0.2 × 10 6 s −1 . An on-chip flux bias line allows for fast tuning of the qubit transition frequency, as the concentric transmon is formed by a gradiometric dc SQUID.
The bosonic mode of the model is represented by a harmonic λ /2 resonator with an inverse lifetime \(\kappa \sim 3.9 \times {10^6}\,{{\rm{s}}^{ - {\rm{1}}}}\) that is limited by internal loss. Following the common convention, we use κ as the inverse photon lifetime of a linear cavity, which may be extracted as the full width at half maximum of a resonance signature in frequency space. Via Fourier transformation one can see that this means the cavity relaxes to its ground state at a rate of κ /2. In a separate experiment we find the internal quality factor of similar microstrip resonators to be limited to about 1.2 × 10 4 in the single photon regime, corresponding to a loss rate of 3.1 × 10 6 s −1 . Microwave simulations indicate that the quality factor is limited by radiation. The sample fabrication process is detailed in Supplementary Note 2 . Fig. 1 Quantum simulation device. a Optical micrograph with the atomic spin represented by a concentric transmon qubit, highlighted in red , and the λ /2 microstrip resonator ( blue ) constituting the bosonic oscillator mode. The readout resonator couples to the qubit capacitively and is read out with an open transmission line (TL) via the reflection signal of an applied microwave tone or pulse. The second resonator visible on chip is not used in the current experiment and is detuned in frequency from the relevant bosonic mode by ~0.5 GHz. The scale bar corresponds to 1 mm. b Effective circuit diagram of the device Full size image Sample characterization The quantum state collapse followed by a quantum revival is the most striking signature of the ultra-strong and near-deep-strong coupling regime of the quantum Rabi model and emerges when qubit and bosonic mode are degenerate in the laboratory frame. We calibrate this resonance condition by minimizing the periodic swap rate of a single excitation between qubit and bosonic mode for the simple Jaynes–Cummings model in the absence of additional Rabi drives. Figure 2 shows the measured vacuum Rabi oscillations in the resonant case (a) and dependent on the qubit transition frequency (b). For initial state preparation of the qubit and readout, we detune the qubit by 95 MHz to a higher frequency. This corresponds to switching off the resonant interaction with the bosonic mode. Supplementary Note 3 describes experimental details on flux pulse generation. Vacuum Rabi oscillations can be observed during the interaction time Δ t and yield a coupling strength g /2 π = 4.3 MHz, in good agreement with the spectroscopically obtained result, see Supplementary Note 8 . Fig. 2 Vacuum Rabi oscillations between qubit and bosonic mode. a The qubit is initially dc-biased on resonance with the bosonic mode, while it is detuned for state preparation and readout. The solid black line in the inset depicts the fast flux pulses applied to the flux bias line and indicates the qubit frequency on the given axis. Qubit and bosonic mode are on resonance during an interaction time Δ t . A frequency fit ( red ) of the vacuum Rabi oscillations yields 2 g /2 π = 8.5 MHz. With the decay rate Γ = (2.08 ± 0.03) × 10 6 s −1 of the envelope and the qubit decay rate 1/ T 1 = (0.2 ± 0.12) × 10 6 s −1 we extract the bosonic mode decay rate κ = (3.9 ± 0.13) × 10 6 s −1 . Error bars denote a statistical s.d. as detailed in the Methods. b When departing from the resonance condition ( blue line ) by varying the dc bias current I , we observe the expected decrease in excitation swap efficiency and an increase in the vacuum Rabi frequency.
The qubit population is given in colors and we applied a numerical interpolation of data points Full size image Quantum state collapse and revival As the collapse-revival signature of the quantum Rabi model under USC conditions manifests most clearly for a vanishing qubit term, we initially set η 2 = 0, yielding the effective Hamiltonian in the qubit frame $$\frac{{\hat H}}{\hbar } = {\omega _{{\rm{eff}}}}{\hat b^\dag }\hat b + \frac{g}{2}{\hat \sigma _x}\left( {{{\hat b}^\dag } + \hat b} \right).$$ (5) Figure 3a shows the applied measurement sequence, which is based on the one in Fig. 2 but extended by a drive tone of amplitude η 1 . The bosonic mode is initially in the vacuum state and the qubit is prepared in one of its basis states \(\left| {\rm{g}} \right\rangle\) , \(\left| {\rm{e}} \right\rangle\) , which are thermally impure. Qubit and bosonic mode are on resonance during the simulation time Δ t . The drive is applied at a frequency ω 1 detuned from the common resonance point by ω eff , setting the effective bosonic mode frequency in the rotating frame. Measured data for ω eff /2 π = 8 MHz are displayed in Fig. 3b , corresponding to \({g_{{\rm{eff}}}}{\rm{/}}{\omega _{{\rm{eff}}}}\sim 0.3\) . Data points show the experimentally simulated time evolution of the qubit prepared in \(\left| {\rm{e}} \right\rangle\) . A fast quantum state collapse followed by periodically returning quantum revivals can be observed. The ground state of the qubit subspace in the driven system, as well as in the synthesized Hamiltonian, Eq. ( 5 ), lies in the equatorial plane of the qubit Bloch sphere and is occupied after a time \(\Delta t \gg {T_1},1{\rm{/}}\kappa\) . It is diagonal in the \(\left| \pm \right\rangle\) basis, with \(\left| \pm \right\rangle = 1{\rm{/}}\sqrt 2 \left( {\left| {\rm{e}} \right\rangle \pm \left| {\rm{g}} \right\rangle } \right)\) . The revival dynamics can be understood with an intuitive picture in the laboratory frame. In the \(\left| \pm \right\rangle\) subspaces, the Hamiltonian takes the form of a displaced harmonic oscillator, $${\omega _{{\rm{eff}}}}\left( {{{\hat b}^\dag } \pm \frac{g}{{2{\omega _{{\rm{eff}}}}}}} \right)\left( {\hat b \pm \frac{g}{{2{\omega _{{\rm{eff}}}}}}} \right) + {\rm{const}}.,$$ (6) whose ground state is a displaced vacuum, i.e., a coherent state that is not diagonal in the Fock basis. The initial state prepared in the experiment is therefore not an eigenstate in the effective basis with the drive applied, so that many terms corresponding to the relevant Fock states n of the bosonic mode participate in the dynamics with phase factors exp{ i nω eff t }, \(n \in {{\Bbb N}^ + }\) . While the contributing terms get out of phase during the state collapse, they rephase after an idling period of 2 π / ω eff to form the quantum revival. The underlying physics of this phenomenon is fundamentally different from the origin of the state revivals that were proposed for the Jaynes–Cummings model 36 . There, the preparation of the bosonic mode in a large coherent state with α ≳ 10 is required, and non-periodic revivals are expected at times ∝ 1/ g eff rather than ∝ 1/ ω eff 37 , as demonstrated in Supplementary Fig. 7 . The blue line in Fig. 3b corresponds to a classical master equation simulation of the qubit dynamics in the rotating frame that goes beyond the two-level approximation: it includes the second excited level of the transmon 38 and decay terms in the underlying Liouvillian according to measured values. Refer to Supplementary Note 5 for further details.
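To make the collapse-revival mechanism just described concrete, the following is a minimal numerical sketch of Eq. ( 5 ) (our illustration, assuming Python with QuTiP; the parameters are the quoted g /2 π = 4.3 MHz and ω eff /2 π = 8 MHz, while dissipation and the transmon's second excited level, both included in the authors' master equation simulations, are omitted here):

import numpy as np
from qutip import basis, destroy, mesolve, qeye, sigmax, sigmaz, tensor

N = 30                                   # Fock-space truncation of the bosonic mode
w_eff = 2 * np.pi * 8.0                  # effective mode frequency, rad/us (8 MHz)
g = 2 * np.pi * 4.3                      # geometric coupling strength, rad/us (4.3 MHz)

b = tensor(qeye(2), destroy(N))          # bosonic mode annihilation operator
sx = tensor(sigmax(), qeye(N))           # qubit Pauli operators
sz = tensor(sigmaz(), qeye(N))

H = w_eff * b.dag() * b + (g / 2) * sx * (b.dag() + b)   # Eq. (5), i.e., eta_2 = 0

# Qubit in |e> (sigma_z = +1 in the paper's convention), mode in vacuum.
psi0 = tensor(basis(2, 0), basis(N, 0))
times = np.linspace(0.0, 0.3, 601)       # microseconds
result = mesolve(H, psi0, times, [], [sz])
P_e = (1 + result.expect[0]) / 2         # excited-state population

# P_e partially collapses and revives near integer multiples of
# 2*pi/w_eff = 125 ns, the idling period discussed in the text.

Since \(\left| {\rm{e}} \right\rangle\) is an equal superposition of the two \(\left| \pm \right\rangle\) branches, each branch evolves under a displaced oscillator, and the Fock components dephase and rephase exactly as described above.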
Figure 3c, d shows a classical simulation and the quantum simulation for ω eff /2 π = 5 MHz with the qubit prepared in one of its eigenstates \(\left| {\rm{g}} \right\rangle\) , \(\left| {\rm{e}} \right\rangle\) . The population of the bosonic mode reaches a maximum during the idling period and, in the absence of dissipation, returns to its initial value at 2 π / ω eff , see Fig. 3e . The fast oscillations in Fig. 3c, d correspond to the Rabi frequency \({\eta _1}{\rm{/}}2\pi \sim 50\,{\rm{MHz}}\) . This value is chosen such that the requirement \({\eta _1}{\rm{/}}{\omega _{{\rm{eff}}}} \gg 1\) is fulfilled while staying well below the transmon anharmonicity, avoiding higher level populations. Deviations in the laboratory frame simulation traces are due to an uncertainty in the Rabi frequency that is extracted from a Fourier transformation of measured data. The broadening in frequency space is mainly caused by the beating in experimental data, which is an experimental artifact. The relevant dynamics of the USC quantum Rabi Hamiltonian corresponds to the envelope of measured data. Since the laboratory frame dissipation is enhanced for a larger photon population of the bosonic mode, the accessible coupling regime is bounded in particular by the limited coherence of the bosonic mode. This manifests as a dependence of the coherence envelope of the quantum revivals on the ratio g / ω eff , see Supplementary Fig. 7 , a consequence of the excitation number no longer being a conserved quantity in the quantum Rabi model. We find better agreement with experimental data when using a slightly larger value for the geometric coupling strength in the master equation simulation than the one extracted from vacuum Rabi oscillations. See Supplementary Notes 3 and 6 for a discussion and a summary of the relevant parameters. Fig. 3 Quantum state collapse and revival with only the dominant Rabi drive applied. a Schematic pulse sequence and overview of the relative frequencies used in the experiment. b Quantum simulation of the periodic recurrence of quantum state revivals for ω eff /2 π = 8 MHz. The blue line corresponds to a master equation simulation of the qubit evolution in the rotating frame. c , d Master equation and quantum simulation of the qubit time evolution for initial qubit states \(\left| {\rm{g}} \right\rangle\) , \(\left| {\rm{e}} \right\rangle\) and ω eff /2 π = 5 MHz, corresponding to \({g_{{\rm{eff}}}}{\rm{/}}{\omega _{{\rm{eff}}}}\sim 0.5\) . The red line shows the qubit population evolution of the driven system in the laboratory frame, Eq. ( 2 ), while the blue lines follow the qubit evolution in the synthesized Hamiltonian Eq. ( 4 ), likewise extracted from a classical master equation simulation. The deviation between the envelope of the laboratory frame data and the rotating frame data in c reflects the approximations of the simulation scheme. Experimental data show the difference between two measurements for the qubit prepared in \(\left| {\rm{g}} \right\rangle\) , \(\left| {\rm{e}} \right\rangle\) , respectively, in order to isolate the qubit signal. e Measured population evolution of the bosonic mode, extracted from the sum of the two successive measurements and fitted to classically simulated data. f – i Qubit time evolution for varying relative phase φ 1 of the applied drive. The initial qubit state is prepared on the equator of the Bloch sphere \(\left| {\rm{g}} \right\rangle\) ± \(\left| {\rm{e}} \right\rangle\) .
Dispersive shifts induced by the bosonic mode are subtracted based on its classically simulated population evolution. Error bars throughout the figure denote a statistical s.d. as detailed in the Methods Full size image The validity of the analog simulation scheme proposed in ref. 27 and used in this letter is confirmed by master equation simulations given in Supplementary Fig. 4 . For ideal conditions, we demonstrate that the dynamics of the qubit and the bosonic mode in the quantum Rabi model is well reproduced by the constructed effective Hamiltonian and that the population of the bosonic mode is independent of the Rabi drive amplitude η 1 , despite it forming a large energy reservoir that is provided to the circuit. In the experiment we face a parasitic coupling of the Rabi tones to the bosonic mode, which is degenerate with the qubit and spatially close to it in the circuit. This leads to an excess population of the bosonic mode, without, however, disturbing the functional form of its population evolution. This is evident as the evolution of the simple harmonic Hamiltonian \({\hat H_{\rm{h}}}{\rm{/}}\hbar = {\omega _{{\rm{eff}}}}{\hat b^\dag }\hat b + \frac{1}{2}{\eta _{\rm{r}}}( {{{\hat b}^\dag } + \hat b} )\) agrees with the expectation for the quantum Rabi model up to a scaling factor, where the last term corresponds to the parasitic drive of strength η r transformed to the rotating frame. By performing the displacement transformation \(\hat D = {\rm{exp}}\{ { - {\eta _{\rm{r}}}{\rm{/}}( {2{\omega _{{\rm{eff}}}}} )( {{{\hat b}^\dag } - \hat b} )} \}\) , this contribution translates into a qubit tunneling term \(\propto {\hat \sigma _x}\) , giving rise to a sub-rotation of the effective frame. The resulting dynamics complies with the envelope defined by the ideal Hamiltonian with the tunneling term absent and therefore maps to the ideal quantum Rabi model, leaving its dynamics qualitatively unaffected. The transformations described are detailed in Supplementary Note 1 , with master equation simulations supporting these statements in Supplementary Fig. 6 . In Fig. 3d we made use of the symmetry of simulations with initial qubit states \(\left| {\rm{g}} \right\rangle\) , \(\left| {\rm{e}} \right\rangle\) by subtracting two successive measurements with the qubit prepared in its eigenstates \(\left| {\rm{g}} \right\rangle\) , \(\left| {\rm{e}} \right\rangle\) , respectively, in order to cancel out the additional dispersive shift induced by the bosonic mode. As described in the Methods, we obtain the population evolution of the bosonic mode, depicted in Fig. 3e , by summing two successive measurements with the qubit prepared in \(\left| {\rm{g}} \right\rangle\) , \(\left| {\rm{e}} \right\rangle\) , respectively. We can infer its effective population by a fit to master equation simulations in the absence of a parasitic drive of the bosonic mode. Since the maximum population is around unity while the qubit is in the equatorial state, the non-conservation of the total excitation number is apparent. While the phase of the qubit Bloch vector is not well defined for the initial states \(\left| {\rm{g}} \right\rangle\) , \(\left| {\rm{e}} \right\rangle\) , the qubit state carries phase information when prepared on the equatorial plane of the Bloch sphere via a π /2 pulse. Figure 3f–i shows the qubit time evolution with varying relative phase φ 1 between initial state and applied drive, plotted in the original qubit basis, as calibrated in a Rabi oscillation experiment.
Experimentally, the orientation of the coordinate system is set by the first microwave pulse, and we apply the Rabi drive with a varying relative phase φ 1 , corresponding to the angle between the qubit Bloch vector and the rotation axis of the drive in the equatorial plane. When both are perpendicular, φ 1 = ± π /2, similar oscillations including the state revival can be observed, assuming a steady state in the equatorial plane. For the case where φ 1 = 0, π , qubit oscillations in the laboratory frame are suppressed while the baseline is shifted up or down due to the detuning of the Rabi drive. The substructure emerges from the swap interaction term between qubit and bosonic mode, which may be regarded as a perturbation since \({\eta _1} \gg g\) . Classical master equation simulations confirm that the basis shift, dependent on the prepared initial qubit state, is enhanced by the presence of the second excited transmon level and by a spectral broadening of the applied Rabi drive. The experimentally observed shift is not entirely captured by the classical simulation, which we attribute to missing terms in the master equation that may be related to qubit tuning pulses and are unknown at present. See Supplementary Note 4 for a further discussion of the effect. Depending on φ 1 , we observe a varying maximum photon population of the bosonic mode in classical simulations, which is also indicated in the measured dispersive shift of the readout resonator. The qubit population as depicted in Fig. 3f–i is retrieved from measured raw data by subtracting the contribution of the bosonic mode. A deviation of the effective qubit basis is likewise observed when preparing the qubit in one of its eigenstates \(\left| {\rm{g}} \right\rangle\) , \(\left| {\rm{e}} \right\rangle\) . Full quantum Rabi model In order to simulate the full quantum Rabi model including a non-vanishing qubit energy term, we switch on the second drive, η 2 ≠ 0, see Fig. 4a . Quantum simulations are performed with the qubit initially in \(\left| {\rm{g}} \right\rangle\) , subject to thermal excess population. The drive tones are up-converted in two separate IQ mixers while sharing a common local oscillator input to preserve their relative phase relation. For the simulation scheme to be valid, we need to fulfill the constraint ω 2 = ω 1 − η 1 , see the schematics in Fig. 4b . This is achieved by initially applying a simulation sequence with η 2 = 0 in order to obtain the frequency equivalent of the Rabi frequency η 1 from a Fourier transformation of the qubit time evolution. Subsequently, we apply the same sequence with a finite η 2 , φ 1 = φ 2 and ω 2 set by obeying the above constraint. Figure 4c shows a master equation simulation of the complete quantum Rabi model for η 2 = 0 ( black ) and η 2 ≠ 0 ( red ), respectively. The main difference is an emerging substructure between quantum revivals and an increase of the revival amplitude in the presence of the qubit energy term. The substructure before the first revival is not reproduced in measured data, see Fig. 4d , which we attribute to ring-up dynamics of the applied drives, such that the frequency constraint of the simulation scheme is not satisfied at small Δ t . In addition, the parasitic drive of the bosonic mode contributes in part to the suppression of the substructure.
Convergence of the experimental simulation can, however, be observed better at later simulation times, where we observe an increase in the revival amplitude and more pronounced oscillations after the first revival, in agreement with the classical simulation. These signatures vanish in check measurements in which the above constraint is intentionally violated or the weak Rabi drive is applied with a phase delay φ 1 ≠ φ 2 , see Supplementary Note 7 . We estimate the frequency equivalent of \({\eta _2}{\rm{/}}2\pi \sim 3\,{\rm{MHz}}\) by comparing the relative peak heights of both drive tones with a spectrum analyzer. With ω eff /2 π = 6 MHz we approach a regime where \(2{g_{{\rm{eff}}}}{\rm{/}}\sqrt {\omega _{\rm{eff}}{\eta _2}{\rm{/}}2}\, >1\) , marking the quantum critical point in the related Dicke model 39 (a worked check with the quoted numbers is given below). Fig. 4 Simulation of the full quantum Rabi model. a Schematic pulse sequence used in the experiment. b Overview of the relative frequencies of the bosonic mode and the applied drives. The constraint η 1 = ω 1 − ω 2 is sketched. c Master equation simulations for vanishing qubit term η 2 = 0 ( black ) and with non-vanishing qubit term η 2 > 0 ( red ). The blue line corresponds to the classical simulation for η 2 /2 π = 3 MHz. d Quantum simulation for equal parameters. The dispersive shift of the readout resonator induced by the bosonic mode is subtracted based on classically simulated data. Error bars denote a statistical s.d. as detailed in the Methods Full size image The limitations imposed by the low coherence in the slowed-down effective frame can be mitigated in a future experiment by employing a high-quality 3D cavity featuring a dc bias and a dedicated Rabi drive antenna coupling to the qubit. Fast tuning pulses may be realized by making use of the ac Stark shift induced by an off-resonant tone. A device with stronger suppression of parasitic couplings to the bosonic mode would no longer require classical post-processing, allowing the presented scheme to be extended to regimes where classical simulations become very inefficient. Discussion We have demonstrated analog quantum simulation of the full quantum Rabi model in the ultra-strong and near-deep-strong coupling regime. The distinct quantum state collapse and revival signature in the qubit dynamics was observed, validating the experimental feasibility of the proposed scheme 27 . The main limitation of the scheme is an effective slowing down of the system dynamics, while the laboratory frame dissipation rates are maintained in the synthesized frame. In analogy to the measure of cooperativity in standard QED, we find the ratio \({g_{{\rm{eff}}}}{\rm{/}}\sqrt {\kappa {\rm{/}}{T_1}} \sim 30\) , rendering the qubit and bosonic mode decay rates an ultimate limitation for the simulation quality. The decelerated system dynamics in the effective frame, however, allows for the observation of quantum revivals on a timescale of ~100 ns, while the revival rate in the laboratory frame USC quantum Rabi model is on a sub-nanosecond scale, which is experimentally hard to resolve. The small transmon anharmonicity limits the Rabi frequency to below ~100 MHz ~ 0.3 | α |/ h in order to avoid higher level populations and suppress parasitic coupling to the bosonic mode. The accessible coupling regime is not limited by the simulation scheme; however, we can experimentally observe quantum revivals only up to a coupling regime where \({g_{{\rm{eff}}}}{\rm{/}}{\omega _{{\rm{eff}}}}\sim 0.6\) due to the finite coherence in our circuit.
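As a quick sanity check of the critical-point criterion quoted above (our arithmetic, using only numbers stated in the text): with g /2 π = 4.3 MHz from the vacuum Rabi fit and g eff = g /2, i.e., g eff /2 π ≈ 2.15 MHz, together with ω eff /2 π = 6 MHz and η 2 /2 π ≈ 3 MHz, $$\frac{{2{g_{{\rm{eff}}}}}}{{\sqrt {{\omega _{{\rm{eff}}}}{\eta _2}{\rm{/}}2} }} \approx \frac{{2 \times 2.15\,{\rm{MHz}}}}{{\sqrt {6\,{\rm{MHz}} \times 1.5\,{\rm{MHz}}} }} = \frac{{4.3\,{\rm{MHz}}}}{{3.0\,{\rm{MHz}}}} \approx 1.4 > 1,$$ consistent with the statement that the simulation approaches the critical regime of the related Dicke model.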
While the presented dynamics can still be efficiently simulated on a classical computer, true quantum supremacy will set in when incorporating more harmonic modes, leading to an exponential growth of the joint Hilbert space. Substituting the single quantized mode with a continuous bosonic bath renders our setup a viable tool for investigating the spin boson model in various coupling regimes, which recently attracted experimental interest in the context of quantum simulations 40 , 41 . The presented simulation scheme can be applied to a continuum of modes, such that an engineered bath in a restricted frequency band is collectively shifted by the applied Rabi drive frequency. This can become a route to address the infrared cutoff issue in a tailored bosonic bath and to observe a quantum phase transition in the spin boson model. Methods Experimental technique The quantum circuit is mounted in an aluminum box and cooled below ~50 mK. It is enclosed in a cryoperm case for additional magnetic shielding. Qubit preparation and manipulation microwave pulses are generated by heterodyne single sideband mixing and applied to the same transmission line used for readout. To ensure phase control of the drive tones with respect to the qubit Bloch sphere coordinate system fixed by the first excitation pulse, we use a single microwave source for qubit excitation and the drives required by the simulation scheme. Different pulses are generated by heterodyne IQ mixing with separate IQ frequencies and amplitudes. The bosonic mode resonator is located far away from the transmission line, which reduces parasitic driving. Readout of the qubit state is performed dispersively by means of a separate readout resonator located at ω r /2 π = 8.86 GHz in a projective measurement of the \({\hat \sigma _z}\) operator with a strong readout pulse of 400 ns duration. Further details on the experimental setup are given in Supplementary Note 3 . Protocol for extracting the qubit population In the simulation experiments presented in Figs. 3 and 4 , we note a modulated low-frequency bulge in the recorded dispersive readout resonator shift that does not agree with the expected qubit population evolution. By comparing with the classical master equation simulation, we can recognize the population evolution of the bosonic mode, which reflects the governing fundamental frequency ω eff of the effective Hamiltonian. By simulating the full circuit Hamiltonian including qubit, bosonic mode and readout resonator, we find that the effect is induced by an additional photon exchange coupling f between the bosonic mode and the readout resonator. The coupling is facilitated by the electric fields of the resonators and is potentially mediated by the qubit. See Supplementary Note 5 for the complete system Hamiltonian. In the diagonalized subspace of the two resonators, the bosonic mode can induce a cross-Kerr-like photon-number-dependent shift ∝ f 2 on the harmonic readout resonator, as it inherits nonlinearity from the qubit. By adding or subtracting two subsequent simulation traces with the qubit prepared in either of the initial states \(\left| {\rm{g}} \right\rangle\) , \(\left| {\rm{e}} \right\rangle\) , we can isolate the signals corresponding to the population of the qubit and the bosonic mode. This measurement protocol is based on the symmetry of the qubit signal for prepared eigenstates, while the bosonic mode induced shift is always repulsive and does not change its sign (see the sketch below).
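As an illustration of this protocol, consider the following minimal sketch (our notation; the file names and variables are hypothetical placeholders, not the authors' analysis code):

import numpy as np

# Two successive traces of the readout resonator shift vs. simulation time,
# acquired with the qubit prepared in |g> and |e>, respectively.
shift_g = np.load("trace_qubit_prepared_g.npy")   # hypothetical file
shift_e = np.load("trace_qubit_prepared_e.npy")   # hypothetical file

# The qubit contribution flips sign between the two preparations, while the
# bosonic-mode-induced shift is always repulsive and does not change sign:
qubit_signal = (shift_e - shift_g) / 2   # difference isolates the qubit dynamics
boson_signal = (shift_e + shift_g) / 2   # sum isolates the bosonic-mode shift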
The photon exchange coupling f therefore provides indirect access to the population of the bosonic mode without a dedicated readout device being available. Specifically monitoring the population of the bosonic mode and performing a Wigner tomography would highlight another hallmark signature of the USC regime, namely the efficient generation of non-classical cavity states 30 . When the relative phases of the Rabi drives are relevant, such a symmetry is lacking; in this case, the qubit population is retrieved from measured raw data based on the expectation for the bosonic mode population as obtained from the classical master equation simulation. In this procedure, the dispersive shift ∝ f 2 remains as the only free fit parameter. See Supplementary Note 5 for more details on the described protocol. Data acquisition We read out the qubit state by observing the dispersive shift of the readout resonator, which is acquired via a 400 ns long readout pulse. Full time traces, recording the readout pulse, are 2 × 10 3 -fold pre-averaged per trace on our acquisition card. Subsequently, the data are sent to the measurement computer, where we extract the IQ quadratures by Fourier transformation. We typically average over ~30 acquired traces to obtain a reasonable signal-to-noise ratio. Due to the reflection setup, most information is stored in the phase quadrature of the recorded signal. The given error bars represent the s.d. of the mean, as calculated from the pre-averaged data points and propagated according to Gaussian error propagation. Data availability The data that support the findings of this study are available from the corresponding author upon reasonable request.
Hurricanes, traffic jams, demographic development – to predict the effect of such events, computer simulations are required. Many processes in nature, however, are so complicated that conventional computers fail. Quantum simulators may solve this problem. One of the basic phenomena in nature is the interaction between light and matter in photosynthesis. Physicists of Karlsruhe Institute of Technology (KIT) have now taken a big step towards a quantum mechanical understanding of plant metabolism. This is reported in the journal Nature Communications. "A quantum simulator is the preliminary stage of a quantum computer. Contrary to a quantum computer, however, it is not able to make any calculations, but is designed for the solution of a certain problem," says Jochen Braumüller of KIT's Physikalisches Institut (Institute of Physics). As the high efficiency of photosynthesis cannot be understood completely with classical physical theories, researchers like Braumüller use a quantum model. Together with scientists of the Institut für Theoretische Festkörperphysik (TFP, Institute for Theoretical Solid-State Physics), he demonstrated for the first time in an experiment that quantum simulations of the interaction between light and matter work in principle. The interaction between light and matter in photosynthesis can be described on the microscopic level as an interaction of the photons of light with the atoms of matter. The nearly 100 percent efficiency of this mechanism suggests that it is governed by the rules of quantum physics, which are difficult to simulate with classical computers and simple bits. In standard computing, information is represented by a switch that can store information as zero or one. Quantum bits, by contrast, are able to assume the states of zero and one at the same time according to the rules of quantum physics. Hence, quantum computers or the simpler quantum simulators can solve such problems more quickly and efficiently. Braumüller and his co-authors have now developed one of the first functioning components for a quantum simulator of light-matter interaction: superconducting circuits as quantum bits represent the atoms, while electromagnetic resonators represent the photons. The physicists succeeded in producing an effect in which the quantum bit and the resonator assume two opposite states at the same time. "Qubit and resonator are coupled," says Michael Marthaler of KIT's TFP. "This is also the reason for the exponentially improved calculation capacity compared to classical computers." The fulfillment of this fundamental principle of quantum mechanics demonstrates the feasibility of analog quantum simulation with superconducting circuits, the researchers say. As a next step, they plan to extend their system by many other building blocks. "Classical simulation of this extended system would take longer than the age of the universe," says Martin Weides, who has been heading a working group at KIT's Physikalisches Institut since 2015. If the planned quantum mechanics simulation is successful, this will be a "milestone on the way towards a universal quantum computer."
10.1038/s41467-017-00894-w
Nano
Researchers observe moiré trions in H-stacked transition metal dichalcogenide bilayers
Xi Wang et al, Moiré trions in MoSe2/WSe2 heterobilayers, Nature Nanotechnology (2021). DOI: 10.1038/s41565-021-00969-2 Hongyi Yu et al, Moiré excitons: From programmable quantum emitter arrays to spin-orbit–coupled artificial lattices, Science Advances (2017). DOI: 10.1126/sciadv.1701696 Kyle L. Seyler et al, Signatures of moiré-trapped valley excitons in MoSe2/WSe2 heterobilayers, Nature (2019). DOI: 10.1038/s41586-019-0957-1 Journal information: Nature, Science Advances, Nature Nanotechnology
http://dx.doi.org/10.1038/s41565-021-00969-2
https://phys.org/news/2021-09-moir-trions-h-stacked-transition-metal.html
Abstract Transition metal dichalcogenide moiré bilayers with spatially periodic potentials have emerged as a highly tunable platform for studying both electronic 1 , 2 , 3 , 4 , 5 , 6 and excitonic 4 , 7 , 8 , 9 , 10 , 11 , 12 , 13 phenomena. The power of these systems lies in the combination of strong Coulomb interactions with the capability of controlling the charge number in a moiré potential trap. Electronically, exotic charge orders at both integer and fractional fillings have been discovered 2 , 5 . However, the impact of charging effects on excitons trapped in moiré potentials is poorly understood. Here, we report the observation of moiré trions and their doping-dependent photoluminescence polarization in H-stacked MoSe 2 /WSe 2 heterobilayers. We find that as moiré traps are filled with either electrons or holes, new sets of interlayer exciton photoluminescence peaks with narrow linewidths emerge about 7 meV below the energy of the neutral moiré excitons. Circularly polarized photoluminescence reveals switching from co-circular to cross-circular polarizations as moiré excitons go from being negatively charged and neutral to positively charged. This switching results from the competition between valley-flip and spin-flip energy relaxation pathways of photo-excited electrons during interlayer trion formation. Our results offer a starting point for engineering both bosonic and fermionic many-body effects based on moiré excitons 14 . Main Heterostructures of monolayer semiconductors offer a powerful platform to explore light–matter interactions. Using MoSe 2 /WSe 2 heterobilayers as an example, a type II band alignment leads to interlayer excitons with long population and polarization lifetimes 15 . With careful stacking to produce atomically clean heterostructure interfaces, a moiré superlattice can be formed with a periodically varying interlayer separation and electronic bandgap 16 , 17 . This periodic potential modulation of the electronic structure in real space functions as an ordered nanodot array and fundamentally modifies the interlayer exciton properties 8 , 9 , 10 , 11 , 12 , 18 , 19 , 20 , 21 . Quantum-dot-like moiré exciton photoluminescence (PL) with twist-angle control of valley polarization and Landé g -factor has been reported 10 , 12 , 13 . Strong photon anti-bunching of moiré excitons has also been observed 13 , implying their potential for quantum optoelectronic applications. In addition to hexagonal moiré superlattices, strain-induced one-dimensional moiré structures have been identified 9 , which lead to linearly polarized moiré exciton luminescence, in contrast with the circularly polarized luminescence of the unstrained heterobilayers. Besides moiré excitons, their charged counterparts, trions, add a new dimension to controlling the interactions between excitons in the moiré traps (Fig. 1a ). Unlike excitons, which are bosonic, trions obey fermionic statistics. Transport properties and responses to electric and magnetic fields differ significantly between the two classes of particles. In addition to the dipolar interaction, trions can interact with each other through the long-range Coulomb interaction. The realization of charged moiré excitons is important because it provides an optically accessible platform to explore fermionic many-body effects. However, while rapid progress has been made in characterizing neutral moiré interlayer excitons, their charged counterparts remain elusive.
It is unclear whether moiré trions exist, and, if so, how they compare to neutral excitons. Fig. 1: Moiré trions are formed with electrostatic gating. a , Schematics of neutral and charged excitons in moiré traps. b , Schematics of double-BN encapsulated devices with dual graphite gates. c , PL intensity plot as a function of doping and photon energy. At both electron and hole doping, new sets of peaks form at ~7 meV lower energy, with a rapid drop in the PL intensity of the charge neutral moiré exciton. Insets depict the charge configuration of both neutral and charged excitons. V g , gate voltage; a.u., arbitrary units. d , Temperature-dependent PL intensity plot at a fixed hole doping, showing the shallow potential of moiré traps. Inset is the PL spectrum collected at 1.6 K. e , PL spectrum of negative moiré trions versus electric field applied out-of-plane with fixed doping. Source data Full size image In this work, we demonstrate the formation of interlayer moiré trions in superlattices formed by stacked WSe 2 and MoSe 2 monolayers. Through electrostatic gating, we find that once the sample is electron- or hole-doped, new sets of moiré trion PL peaks appear ~7 meV lower in energy with respect to charge neutral moiré excitons. The Zeeman splitting of oppositely polarized trion PL peaks is consistent with electron–hole valley-pairing under H-type alignment. In addition, the spin optical polarization of the emitter is tuned with doping. When the system is hole doped, the valley polarization of the positively charged moiré trion is inverted compared to both neutral excitons and negatively charged trions. Such polarization inversion arises from the competition between spin-flip but valley-conserved and spin-conserved but valley-flipped relaxation channels of electrons in the formation of interlayer trions. The population lifetimes of moiré trions and excitons are also measured by time-resolved PL. Combining polarization-resolved excitation and detection, the valley polarization lifetime is found to be hundreds of nanoseconds. The structure of the device with a dual-gated geometry is shown in Fig. 1b . The WSe 2 /MoSe 2 heterobilayer is assembled using standard dry-transfer techniques with hexagonal boron nitride (BN) encapsulation and semi-transparent thin graphite top and bottom gates to independently control doping and vertical electric field effects. Edge contacts are used to connect both MoSe 2 and WSe 2 layers. All samples are H-stacked, that is, with nearly 60° twist angle. For the device presented in the main text, the moiré lattice constant is about 11–12 nm, as determined by piezoresponse force microscopy 9 , 22 (Supplementary Fig. S 1 ). Multiple devices have been studied and show consistent results. All measurements are performed with laser excitation at 1.713 eV (close to the WSe 2 A exciton resonance) at a temperature of 1.6 K, unless otherwise specified. We first describe PL measurements of interlayer moiré excitons as a function of doping (Fig. 1c ). Because the top and bottom BN layers have similar thicknesses, doping is varied at a fixed zero displacement field by sweeping both gates together. The optical excitation power is 50 nW, with a laser beam spot diameter of around 1 μm. The detection is unpolarized. Within the gate range corresponding to negligible doping (corroborated by gate-dependent reflectance measurements of the intralayer WSe 2 and MoSe 2 excitons 23 , Supplementary Fig.
S 2 ), we observe several discrete PL lines near 1.396 eV, with linewidths as narrow as about 100 µeV. As reported previously 10 , 12 , 13 , these emissions are from neutral interlayer excitons trapped in a moiré potential ( \(M_X^0\) ). The emission energy is nearly constant as doping varies, consistent with the fixed displacement field during the gate sweep. When the gate voltage becomes large enough to effectively inject electrons/holes into the heterobilayers, the \(M_X^0\) PL intensity rapidly drops and a new set of peaks with narrow linewidths emerges at lower energy. The doping dependence of these peaks resembles the oscillator strength transfer from neutral to charged excitons observed in monolayers 24 , 25 . Their narrow linewidths, similar to \(M_X^0\) linewidths, suggest that they correspond to positively ( \(M_T^ +\) ) and negatively ( \(M_T^ -\) ) charged interlayer trions trapped in the moiré potential by hole- and electron-doping, respectively. Figure 1d plots \(M_T^ +\) PL intensity as a function of temperature. The PL spectrum at 1.6 K is overlaid on top. The PL intensity quickly fades above 10 K, consistent with the shallow moiré potential of about 30 meV in H-stacked MoSe 2 /WSe 2 heterobilayers 21 . Note that the moiré potential landscapes for electrons and holes are different. These facts may account for the detailed differences in exciton properties between moiré traps, including PL intensity, energy and exact gate range. For instance, this moiré potential inhomogeneity is likely to be responsible for the slight persistence of some neutral (trion) peaks into the hole-doped (charge neutral) regime (see Supplementary Fig. S 3 for additional data taken at different spots). In MoSe 2 /WSe 2 heterobilayers, the lowest energy conduction band is in the MoSe 2 layer and the highest energy valence band is in the WSe 2 layer 26 . The band offsets are several hundred meV. Therefore, the energetically favourable configuration for moiré trions consists of two electrons (holes) in the same layer and one hole (electron) in the opposite layer 27 , as depicted in the insets of Fig. 1c . For example, \(M_T^ +\) has the two holes in the WSe 2 layer and the electron in the MoSe 2 layer. Due to inhomogeneous moiré effects, we cannot assign an exact one-to-one correspondence between trions and neutral excitons. To obtain the trion binding energy, we extract the energy difference between the centres of the groups of peaks (white frames in Supplementary Fig. S 4 ), which is about 7 meV. This peak difference is consistent across the different spots in the same sample and is confirmed in a second sample (Supplementary Fig. S 5 ). Free interlayer trions with about 10 meV PL linewidth have been reported in MoSe 2 /WSe 2 heterobilayers 28 , where positively and negatively charged ones have binding energies of 10 and 15 meV, respectively. The weaker binding energy of moiré-trapped trions is intuitively expected, as the confinement from the moiré potential restricts the wavefunction from adjusting to reach maximum binding. The exact quantification of the trion binding energy in the moiré potential is a non-trivial task due to the close length scales of moiré confinement and Bohr radius, as well as the comparable energy scales of the quantization in the moiré potential and the trion binding energy. Future theory work is needed to address this interesting problem. Interlayer excitons have permanent out-of-plane electric dipoles, which enable tuning of their energy via the d.c.
Stark effect 29 . Although the trions are trapped by the moiré potential, their energies can also be tuned by the d.c. Stark effect. When an out-of-plane displacement field is applied, all three emissive species ( \(M_T^ -\) , \(M_T^ +\) , \(M_X^0\) ) show similar linear energy shifts in their PL spectra. As an example, Fig. 1e shows the PL peak positions of \(M_T^ -\) as a function of out-of-plane electric field. The energy shift Δ is ~30 meV for a change of electric field E of ~0.1 V nm −1 . We estimate Δ / E as 0.2927 ± 0.0001 e nm for \(M_T^ -\) , 0.2874 ± 0.0015 e nm for \(M_T^ +\) and 0.2906 ± 0.0007 e nm for \(M_X^0\) (Supplementary Fig. S 6 ). From simple electrostatics, Δ / E in transition metal dichalcogenides (TMDs) is approximately \(\frac{\varepsilon_{{\rm{BN}}}}{\varepsilon_{{\rm{TMD}}}}ed\) . Using ε BN ~3 and ε TMD ~7 for the out-of-plane dielectric constants of BN and TMDs (refs. 4 , 30 ), respectively, we estimate an effective interlayer separation d of about 0.68 nm. As demonstrated previously 10 , 12 , 13 , Landé g -factors are a fingerprint to distinguish excitons trapped in moiré potentials from those bound to atomic defects. The g -factor of a bright interlayer exciton is determined by the valley index ( τ c , τ v ) of its constituent electron and hole. Here, valley index τ = ±1 corresponds to ±K valleys. Due to the smooth trapping potential of a moiré superlattice, moiré excitons inherit the g- factors of free interlayer excitons. For H-stacked MoSe 2 /WSe 2 heterobilayers, the g -factor of \(M_X^0\) in the triplet configuration has been identified as about −16 (refs. 10 , 31 ). Figure 2 plots PL intensities as a function of perpendicular magnetic field for three fixed doping levels corresponding to \(M_T^ -\) (Fig. 2a ), \(M_X^0\) (Fig. 2b ) and \(M_T^ +\) (Fig. 2c ). Due to valley Zeeman splitting, the peak energies with σ − \((E_{\sigma ^ - })\) and σ + \((E_{\sigma ^ + })\) polarized PL differ in the presence of the magnetic field. Each moiré exciton species exhibits an X pattern in the intensity plot. Although inhomogeneity of moiré traps gives rise to a distribution of neutral and charged moiré exciton peak energies, the peak energy shifts as a function of magnetic field are nearly the same for all moiré emitters. This behaviour is characteristic of excitons trapped in a moiré potential 10 . Fig. 2: Zeeman splitting of both neutral and charged moiré excitons. The obtained effective g- factors are consistent with moiré traps in H-type alignment. a – c , Magneto-PL of negatively charged, neutral and positively charged moiré excitons. The excitation is linearly polarized with both right-circularly and left-circularly polarized emission detection. d – f , Energy differences between right- and left-circularly polarized PL as a function of magnetic field extracted from panels a – c . The corresponding effective g -factors are calculated based on linear fits, as shown in the figures. Source data Full size image We can define the Zeeman splitting between the PL peaks as \({\Delta}E = E_{\sigma ^ + } - E_{\sigma ^ - }\) , which is to be distinguished from the valley Zeeman splitting. The latter has a value determined by the valley pairing (R versus H stacking) and spin pairing (singlet versus triplet) only, while Δ E above can differ from the valley Zeeman splitting by a sign determined by the valley optical selection rule 10 , that is, negative (positive) if the K valley emits σ + ( σ − ) polarized light.
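Both quantitative extractions in this passage reduce to simple linear relations: the effective interlayer separation follows from the measured Stark slope via d ≈ (Δ/E)·ε TMD /ε BN , and the g-factor (discussed next) follows from a linear fit of ΔE = gμ B B. A minimal Python sketch of both steps is given below; only the quoted slope and dielectric constants come from the text, while the magnetic-field data are synthetic placeholders.

```python
import numpy as np

MU_B = 5.7883818060e-5  # Bohr magneton, eV/T

# --- d.c. Stark effect: interlayer separation from the measured slope ---
# From the text: Delta/E ≈ (eps_BN/eps_TMD) * e * d, with Delta/E ≈ 0.29 e nm.
slope_e_nm = 0.2906          # measured Delta/E for the neutral exciton, e nm
eps_bn, eps_tmd = 3.0, 7.0   # out-of-plane dielectric constants from the text
d_nm = slope_e_nm * eps_tmd / eps_bn
print(f"effective interlayer separation d ≈ {d_nm:.2f} nm")  # ≈ 0.68 nm

# --- Valley Zeeman splitting: g-factor from a linear fit ---
# DeltaE = E(sigma+) - E(sigma-) = g * mu_B * B.  Synthetic data below.
B = np.linspace(-8, 8, 9)                       # magnetic field, T
g_true = -16.08
dE = g_true * MU_B * B + np.random.default_rng(1).normal(0, 2e-5, B.size)
g_fit, _ = np.polyfit(B, dE / MU_B, 1)          # slope in units of mu_B
print(f"fitted g-factor ≈ {g_fit:.2f}")
```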
The extracted Δ E values for \(M_T^ -\) , \(M_X^0\) and \(M_T^ +\) are shown in Fig. 2d – f . Linear fits of Δ E yield a g- factor of −16.08 ± 0.01 for \(M_X^0\) and effective g -factors ( g ′) of −16.07 ± 0.01 and −16.44 ± 0.07 for \(M_T^ -\) and \(M_T^ +\) , respectively. Neglecting possible contributions to the Zeeman splitting from carrier interaction effects, the effect of magnetic field on trion energy can be seen as the sum of the Zeeman shift of a valley exciton and that of the extra carrier. Thus, the extra electron/hole in a trion only shifts the transition energy but does not contribute to the spectroscopic Zeeman splitting Δ E , which is defined as the PL peak energy difference in the magnetic field. The similarity of these g and g ′ values (about −16) supports our assumption and the assignment of all of these narrow lines as excitons/trions trapped in the smooth moiré potential traps. It also implies that the electron–hole pairs involved in the radiative recombination all have the spin-triplet configuration. A unique feature of the moiré trap is the three-fold rotation symmetry of the local atomic registry, which results in circularly polarized valley optical selection rules 21 . To investigate moiré trion valley polarization, we performed helicity-resolved PL measurements as a function of doping. Figure 3a,b shows the PL intensity plots with σ + (Fig. 3a ) and σ − (Fig. 3b ) polarized detection under σ + -polarized optical pumping. Figure 3c shows the corresponding valley polarization. Here, we define valley polarization as \(\rho = \frac{{I_{\sigma ^{+} } - I_{\sigma ^{-} }}}{{I_{\sigma ^{+} } + I_{\sigma ^{-} }}}\) , where \(I_{\sigma ^ \pm }\) is the σ ± polarized PL intensity. The polarization-resolved spectra collected at selected doping levels are also shown in Fig. 3d . As demonstrated previously 10 , \(M_X^0\) is strongly co-circularly polarized. On doping with excess carriers, the optical polarization of the moiré trions is tunable. Both \(M_T^ -\) and \(M_T^ +\) have appreciable circular polarization in their PL, but with opposite sign (see also an additional device D2 with data in Supplementary Fig. S 5 ). As shown in Fig. 3c , \(M_T^ -\) is co-circularly polarized with the pump, like \(M_X^0\) , with the degree of polarization as high as 90%, whereas \(M_T^ +\) switches to cross-circularly polarized PL with much smaller ρ . Fig. 3: Doping-dependent valley polarization of moiré trions. a , b , Helicity-resolved PL intensity plots as a function of doping. The σ + and σ − polarized components are shown in a and b , respectively, with σ + -polarized excitation. c , Degree of PL polarization ( ρ ) as a function of doping, calculated from a and b . d , Helicity-resolved PL spectra at selected doping n , for negatively charged \(M_T^ -\) (top), neutral \(M_X^0\) (middle) and positively charged \(M_T^ +\) (bottom) moiré excitons. The PL from \(M_T^ -\) and \(M_X^0\) are co-circularly polarized, whereas PL from \(M_T^ +\) is cross-circularly polarized. e , Schematics illustrating doping-dependent PL polarization of the moiré exciton/trions. The conduction (dashed lines) and valence (solid lines) bands are from MoSe 2 and WSe 2 , respectively. Electrons and holes are represented by solid and open circles. Red and blue indicate the spin orientations. The density of charge carriers (electrons/holes) is indicated by the size of the circles (solid/open) shown in the inset.
The valley index of the majority hole (electron) population determines the PL helicity of \(M_T^ -\) and \(M_X^0\) ( \(M_T^ +\) ). Source data Full size image There are a few possible explanations for the gate-tunable PL polarization. For instance, gating can switch the lowest energy moiré sites with opposite valley optical selection rules in a supercell 21 . The experimental signature of this effect has been recently reported 32 . This optical selection rule switching should lead to an opposite sign of the polarization Zeeman splitting between \(M_T^ -\) and \(M_T^ +\) . However, this is not the case here. Instead, such a polarization reversal behaviour is more appropriately explained by the competition between spin-conserved valley-flip relaxation channels and valley-conserved spin-flip relaxation channels of electrons during the interlayer trion formation process. For all three charge configurations, the recombining electron–hole pair is trapped at the same moiré site. Without altering the optical selection rule, the measured polarization Zeeman splitting (or g -factor) would not change appreciably in either sign or magnitude, consistent with our experimental observation (Fig. 2 ). Below we illustrate the physical picture under σ + -polarized excitation, with details given in Supplementary Section S 1 and Fig. S 7 . In both the electron-doped and charge-neutral cases, the spin-valley index of the photo-excited hole determines the emission polarization of the \(M_T^ -\) or \(M_X^0\) . For simplicity, the valley notations of WSe 2 and MoSe 2 are assigned as ±K and ±K′, respectively. In the H-stacking configuration (that is, MoSe 2 is rotated nearly 60° with respect to WSe 2 ), K (−K) and −K′ (K′) are nearly aligned in momentum space. When the σ + excitation is at the WSe 2 monolayer exciton resonance, the photo-created holes are predominantly at the K valley band edge, with the polarization protected by the spin-valley locking. The K valley hole in WSe 2 is momentum-aligned with the −K′ valley electron in the MoSe 2 band edge to form a recombining pair in the spin-triplet configuration (Fig. 3e , top and middle panels). From the observed co-circular PL polarization with the pump, we can further determine that these moiré excitons/trions are trapped at the \(H_h^h\) site of the moiré supercell 33 . The high helicity indicates that the valley polarization is well-protected by spin-valley locking. The slight difference in polarization between \(M_X^0\) and \(M_T^ -\) can arise from the difference in electron population distribution between the valleys, as the former is from photoexcited electrons while the latter has a contribution from electrostatic doping. We have also examined the excitation frequency dependence of the PL polarization. On excitation at the MoSe 2 monolayer exciton resonance, the σ + excitation creates holes in the MoSe 2 K′ valley, which relax to the WSe 2 band edge, ending up either in the −K valley through spin-flip interlayer hopping, or the K valley through valley-flip hopping. The competition of these two relaxation channels determines the hole valley polarization at the WSe 2 band edge, resulting in a reduced ( \(M_T^ -\) ) or even vanishing emission polarization ( \(M_X^0\) ). This understanding is consistent with our polarization-dependent PL excitation spectroscopy, as shown in Supplementary Fig. S8 . For the hole-doped case, the \(M_T^ +\) consists of two holes at opposite valleys.
Thus, its electron valley configuration determines the emission polarization (Fig. 3e , bottom panel). Unlike the hole, the photo-created electron needs to reach the MoSe 2 band edge by interlayer hopping and energy relaxation, which also requires either a spin-flip or a valley-flip in H-stacked MoSe 2 /WSe 2 . Spin-flip and valley-flip of the electron in the relaxation process can be facilitated by the \({{{\mathrm{{\Gamma}}}}}_5\) phonon at the zone centre and the \(K_3\) phonon at the zone edge 34 , respectively. Previous work 34 has found that the valley-flip rate exceeds the spin-flip one. These relaxation channels can place the photo-created electrons at either valley of the MoSe 2 band edge (see Supplementary Fig. S 7 ). Their competition leads to a smaller polarization with a reversed helicity of the \(M_T^ +\) PL compared to that of both \(M_T^ -\) and \(M_X^0\) . Lastly, we examined the valley relaxation dynamics of both neutral and charged moiré excitons by time- and polarization-resolved PL at selected gate voltages. Figure 4 presents the decay of co-polarized (red) and cross-polarized (blue) PL of \(M_T^ -\) (Fig. 4a ) and \(M_X^0\) (Fig. 4b ) of Device 2. The polarization of \(M_T^ +\) is too small for a reliable measurement. The excitation power is 100 nW. The corresponding valley polarization dynamics are shown in Fig. 4c,d . Single exponential fits (solid lines) to the valley polarization decay yield lifetimes of about 370 ns and 1 µs for \(M_T^ -\) and \(M_X^0\) , respectively. These lifetimes are much longer than those of monolayer excitons and trions, pointing to possibilities for engineering both bosonic and fermionic many-body effects with dynamic control based on moiré emitter arrays. Fig. 4: Moiré exciton and trion dynamics. a , b , Time- and helicity-resolved PL of \(M_T^ -\) ( a ) and \(M_X^0\) ( b ). c , d , The corresponding valley polarization dynamics. Solid lines are exponential fits to the valley polarization decay, with lifetimes of around 370 ns and 1 µs for \(M_T^ -\) and \(M_X^0\) , respectively. Source data Full size image Methods Sample fabrication Mechanically exfoliated monolayers of MoSe 2 and WSe 2 were stacked using a dry-transfer technique. The crystal orientation of the individual monolayers was first determined by linear-polarization resolved second-harmonic generation before transfer. The alignment angle of MoSe 2 and WSe 2 was double-checked using piezoresponse force microscopy during transfer before encapsulating the top and bottom with hexagonal BN. The BN encapsulation (10–30 nm) provided an atomically smooth substrate. PL measurements PL measurements were performed using a home-built confocal microscope in reflection geometry. The sample was mounted in an exchange-gas cooled cryostat equipped with a 9 T superconducting magnet in Faraday configuration. The sample temperature was kept at 1.6 K unless otherwise specified. A power-stabilized and frequency-tunable narrow-band continuous-wave Ti:sapphire laser (M2 SolsTiS) was used to excite the sample unless otherwise specified. The PL was spectrally filtered from the laser using a long-pass filter before being directed into a spectrometer. The PL signals were dispersed by a diffraction grating (1,200 grooves per mm) and detected on a silicon CCD camera. Polarization-resolved PL was obtained using a combination of quarter-wave plates, half-wave plates and linear polarizers for excitation and collection.
Time-resolved PL data were acquired using a time-correlated single-photon counting module (PicoHarp 300) with a supercontinuum fibre laser (pulse duration around 10 ps; repetition rate around 3 MHz; average power 100 nW) at 720 nm for excitation and a silicon avalanche photodiode for detection. The narrow emission lines were spectrally filtered (collection width of about 2 meV) through a spectrometer before detection on the avalanche photodiode. Calibration of doping density and electric field The doping densities in the heterobilayer were determined from the applied gate voltages based on a parallel-plate capacitor model 23 . The thickness of the BN was determined by atomic force microscopy. Both top and bottom BN flakes of the device presented in the main text were 20 nm thick. The doping density was calculated as C b Δ V b + C t Δ V t , where C t and C b are the capacitances of the top and bottom gates and Δ V t and Δ V b are the applied gate voltages relative to the level of the valence/conduction band edge. The geometric capacitance C t = C b was about 133 nF cm −2 with dielectric constant ε BN ~3 (ref. 35 ). The out-of-plane electric displacement field was calculated using D = ( V b C b − V t C t )/2 ε 0 and the electric field was calculated using E = D / ε BN . Data availability Source data are provided with this paper. All other data that support the plots within this paper and other findings of this study are available from the corresponding authors upon reasonable request.
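To make the parallel-plate calibration concrete, the sketch below reproduces the quoted geometric capacitance and evaluates the doping-density and field formulas. Dividing the induced charge density by the elementary charge to obtain a carrier density is the standard form of this model and is an added assumption here, as are the example gate voltages.

```python
import numpy as np

EPS0 = 8.8541878128e-12  # vacuum permittivity, F/m
E_CHG = 1.602176634e-19  # elementary charge, C

# Geometric gate capacitance per unit area for 20-nm BN with eps_BN ~ 3,
# as in the parallel-plate model described above.
eps_bn, t_bn = 3.0, 20e-9
C = EPS0 * eps_bn / t_bn                    # F/m^2
print(f"C ≈ {C * 1e9 / 1e4:.0f} nF/cm^2")   # ≈ 133 nF/cm^2

def doping_and_field(v_t, v_b, dv_t, dv_b, c_t=C, c_b=C):
    """Carrier density and out-of-plane field from the gate voltages."""
    n = (c_b * dv_b + c_t * dv_t) / E_CHG       # carriers per m^2 (assumed /e)
    D = (v_b * c_b - v_t * c_t) / (2 * EPS0)    # displacement field, V/m
    E = D / eps_bn                              # electric field in the BN, V/m
    return n * 1e-4, E * 1e-9                   # per cm^2 and V/nm

# Illustrative, not measured, gate voltages:
n_cm2, E_Vnm = doping_and_field(v_t=1.0, v_b=-1.0, dv_t=0.5, dv_b=0.5)
print(f"n ≈ {n_cm2:.2e} cm^-2, E ≈ {E_Vnm:.3f} V/nm")
```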
In physics, a moiré pattern is a specific geometrical design in which sets of straight or curved lines are superposed on top of each other. Recent studies have found that bilayers of transition metal dichalcogenide materials arranged in moiré patterns could be particularly promising for studying electronic phenomena and excitons (i.e., concentrations of energy in crystals formed by an excited electron and an associated hole). Transition metal dichalcogenide moiré bilayers have advantageous characteristics for studying both electronic and excitonic physical phenomena, including strong Coulomb interactions. Past research has successfully used these systems to make several interesting discoveries, such as exotic charge orders at both integer and fractional fillings. Researchers at the University of Washington and other institutes worldwide have recently carried out a study specifically examining a transition metal dichalcogenide moiré system composed of molybdenum diselenide (MoSe2)/tungsten diselenide (WSe2) heterobilayers. Their paper, published in Nature Nanotechnology, reports the observation of moiré trions (i.e., localized excitations consisting of three charged particles) in H-stacked MoSe2/WSe2 heterobilayers. "Periodic moiré potential naturally occurs in transitional metal dichalcogenides moiré superlattices. Several years ago, we envisioned that the periodic potential can function as arrays of quantum dots," Wang Yao, one of the researchers who carried out the study, told TechXplore. "Based on this idea, our team demonstrated charge neutral moiré excitons in twisted MoSe2/WSe2 heterobilayers in 2019." The work builds on the group's previous studies focusing on transitional metal dichalcogenide moiré superlattices. While in their past research the team was able to observe charge-neutral moiré excitons in twisted MoSe2/WSe2 heterobilayers, in their new study they added electrostatic control of the carrier density to the same moiré system. This ultimately enabled them to realize charged moiré excitons, also known as moiré trions. "In our experiments, we measured the light emission from the heterolayers we examined," Xu explained. "By focusing on emission properties (linewidth, polarization, intensity, energy, etc.) as a function of carrier doping, magnetic field and temperature, we were able to identify moiré trions." The findings could have important implications for the future development of new nanotechnology, as well as for the study of excitonic phenomena. In their future work, the team hopes to use moiré systems to investigate different physical phenomena. "We showed that moiré potential can also trap charged excitons," Xu said. "Combined with the charge neutral ones, the heterobilayer can be used as a platform for studying both bosonic and fermionic many-body effects based on moiré excitons. In our next studies, we plan to study both equilibrium and non-equilibrium many-body effects based on the moiré systems."
10.1038/s41565-021-00969-2
Biology
Swinging on 'monkey bars': Motor proteins caught in the act
H. Imai et al., 'Large-scale flexibility in cytoplasmic dynein stepping along the microtubule', Nature Communications (2015). DOI: 10.1038/ncomms9179 Journal information: Nature Communications
http://dx.doi.org/10.1038/ncomms9179
https://phys.org/news/2015-09-monkey-bars-motor-proteins-caught.html
Abstract Cytoplasmic dynein is a dimeric AAA + motor protein that performs critical roles in eukaryotic cells by moving along microtubules using ATP. Here using cryo-electron microscopy we directly observe the structure of Dictyostelium discoideum dynein dimers on microtubules at near-physiological ATP concentrations. They display remarkable flexibility at a hinge close to the microtubule binding domain (the stalkhead) producing a wide range of head positions. About half the molecules have the two heads separated from one another, with both leading and trailing motors attached to the microtubule. The other half have the two heads and stalks closely superposed in a front-to-back arrangement of the AAA + rings, suggesting specific contact between the heads. All stalks point towards the microtubule minus end. Mean stalk angles depend on the separation between their stalkheads, which allows estimation of inter-head tension. These findings provide a structural framework for understanding dynein’s directionality and unusual stepping behaviour. Introduction Dyneins are a group of motor proteins that move along microtubules (MTs) to cause the beating of the axoneme in cilia and flagella and to perform vital and diverse transport and tethering roles in the cytoplasm of eukaryotic cells, for instance transporting mRNA, growth factors and β-amyloid precursor protein 1 , 2 . Dynein also transports the nucleus in neurons, which is essential to human development and maintenance of healthy neuronal activities 3 , 4 . Growing numbers of neurodegenerative diseases and developmental problems are now known to result from mutations in dynein or dynein-binding proteins 5 , 6 , and dynein-mediated processes are implicated in cancer 7 . So, in addition to its intrinsic interest, understanding dynein mechanism is critical for future treatment of disease. Dyneins have an unusual structure in which each ring-like ATPase head attaches to the MT via a slender, coiled-coil stalk at the tip of which is a small, globular, MT-binding subdomain that we term the stalkhead. (The stalkhead has also been called the MT binding domain (MTBD) but the meaning of MTBD is ambiguous for axonemal dyneins, since the stalkhead binds to one MT doublet and the tail binds to an adjacent doublet; by contrast, ‘stalkhead’ is intuitively understood and more concise.) The atomic structure of the motor domain is known 8 , 9 , 10 . The heart of the motor domain is a AAA + superfamily mechano-enzyme 11 , in which six AAA + motifs form a ring that hydrolyses ATP 1 . Associated with the AAA + ring (hereafter simply ‘ring’) is a C-terminal sequence that is implicated in determining the stall force and run length of the motor 12 , 13 , and that is unusually short in the much-studied yeast dynein 13 , 14 . Cytoplasmic dynein includes two identical heavy chains, each forming tail and motor domains ( Fig. 1a ) 1 . The complex N-terminal tail incorporates additional polypeptides, binds cargoes and dimerizes the motors 15 , 16 , 17 . Between the tail and ring is the linker 18 that provides the power stroke of the motor 19 by switching from bent (primed) to straight (unprimed) 8 , 9 , 10 , 20 . The ring, C-terminal sequence and linker together constitute the head, from which the stalk extends ( Fig. 1a ) 21 . Figure 1: Cryo-EM of dimeric dynein motors bound to MTs in 2 mM Mg-ATP. ( a ) Domains in the amino acid sequence of D. discoideum dynein heavy chain and (right) cartoon depicting the domain architecture of the dimer. 
The N-terminal tail (residues 1-1387, brown) dimerizes the motor in vivo . The motor includes six AAA + modules: AAA1 (blue)-AAA6 (red). Stalk and stalkhead emerge from AAA4 and the strut from AAA5; sequence numbers indicate the invariant prolines at the stalk-stalkhead junction. Linker (magenta and pale grey) and C-terminal domain (black) lie on opposite faces of the AAA + ring. Proposed moving part of the linker (L, magenta) is indicated. ( b ) Recombinant chimeric dynein dimerized by GST and with stalkhead from human axonemal dynein heavy chain 7; (right) cartoon depicting this dimer. ( c – e ) Cryo-EM of GST-380H7 mixed with MT and ∼ 2 mM Mg-ATP. Contrast is inverted (protein is pale) and dynein’s stepping direction (to the MT minus end) is towards the right in all Figures. ( c ) Dynein particles are crowded along the sides of the MT; such regions were not further analysed. ( d ) In sparse regions of less densely decorated MTs can be seen ‘superposed’ dimers (arrows) in which only a single ring is visible. ( e ) Enlarged view showing a ‘superposed’ dimer (single arrow), an ‘offset’ dimer in which both rings are visible (double arrow) and a group of dimers comprising more than two rings (arrowhead). ( f ) Pixel values calculated from the head domains relative to adjacent MT (see Supplementary Fig. 4 ) of offset and superposed dimers and of monomers (mean±s.e.m.). ( g , h ) Image analysis of dimers stringently isolated on the MT surface (see text). Average (upper panel) and variance (lower panel) images of MT-aligned offset dimers ( g ) and superposed dimers ( h ) with MT at the top. Higher variance is darker grey. In the offset dimer average ( g ) only one ring is apparent, but the second ring’s variable position is revealed in the variance image (arrows). ( i ) Monomeric dynein bound in the absence of nucleotide. The average appears closer to the MT than for dimers because some particles were attached to MT protofilaments lying closer to the MT axis in this view. Scale bars, ( c – e ) 40 nm; ( g - i ) 20 nm. Full size image Individual cytoplasmic dynein dimers can make runs of many steps along an MT, that is, they are processive. For many dyneins, the processive molecule includes a complex between the tail and other proteins or protein complexes such as BicD and dynactin 22 , 23 . For yeast and Dictyostelium dyneins, replacement of the tail with glutathione S-transferase (GST; Fig. 1b ) yields a simpler dimer that still processively steps along MT 13 , 24 , 25 . The structural mechanism of cytoplasmic dynein’s processive stepping along MTs is unclear. Tracking fluorescently-tagged dynein suggested uncoordinated stepping by the two heads 26 , 27 , but they must also communicate, since the properties of dimers on the MT are different from those of monomers 13 , 28 . Dynein’s unusual architecture suggests that its stepping mechanism may be fundamentally different from those of the other transport motors, kinesin and myosin. The structural basis of processive stepping by cytoplasmic dynein has not previously been investigated. Understanding of myosin-5’s movement along actin filaments was greatly aided by visualizing it on its actin track by using negative stain electron microscopy at rate-limiting (micromolar) ATP concentrations 29 . Such a study on dynein would be problematic because MT diameter is much greater than actin filament diameter, which could lead to structural collapse of both MT and dynein during drying of the stained specimen.
Therefore, to understand dynein stepping, we have used the technically more demanding unstained cryo-electron microscopy (cryo-EM) technique to discover the structures of stepping dynein. Unlike many recent applications of cryo-EM, our aim here is not to produce a high resolution structure of the stepping molecule, but rather to trap a dynamic motor in action, so that we can see the range of structures that dynein adopts. We have succeeded in imaging dynein-MT complexes at near-physiological (millimolar) ATP and revealed a great diversity of structures. Results Cryo-EM of cytoplasmic dynein dimers moving on MTs in physiological ATP In the presence of millimolar MgATP, we found that most artificially-dimerized Dictyostelium discoideum cytoplasmic dynein motors (GST-380) are detached from MTs. Replacement of the stalkhead with that of human axonemal dynein 7 (DNAH7) to form GST-380H7 produces a dimer that has a structure similar to that of GST-380 in the absence of MTs ( Supplementary Fig. 1 ), but stronger binding to MTs in TIRF-M stepping assays ( Supplementary Movie 1 ). The longer duration of runs, together with the higher attachment rate of GST-380H7 compared with that of GST-380, also indicates that GST-380H7 has a much higher affinity for MTs than GST-380 ( Supplementary Table 1 ), and this chimeric dynein moves robustly along MTs ( Supplementary Fig. 2a ; Supplementary Movie 2 ). Cryo-EM in the presence of ∼ 2 mM Mg-ATP shows almost all GST-380H7 dimers are bound to the MTs ( Fig. 1c–e ), including at their flared ends ( Supplementary Fig. 2 ). Dynein forms a fringe along either side of the MT, not all around it ( Fig. 1c ), indicating that these rapidly stepping molecules use the few seconds taken to prepare the specimen to favour the MT protofilaments nearest the centre of the thin (typically 50 nm) liquid film. This simplifies the analysis of the configurations of the stepping dynein molecules. Characterization of MT-bound dimer configurations Raw images have sufficient contrast to show the head and stalk domains ( Fig. 1d , arrows), but greater structural detail is revealed using single-particle image processing 30 . We first determined MT polarity 31 and hence the direction of dynein stepping (see Supporting Methods). From 1,082 such MTs we identified 10,080 dynein-MT complexes. To avoid ambiguity in the assignment of heads into dimers, we then selected a subset of isolated molecules in which two heads <40 nm apart lacked neighbours within ±40 nm along the MT. Forty nanometres exceeds the maximum head–head separation observed in MT-free GST-dynein dimers ( Supplementary Fig. 1c ). This stringent criterion yielded 374 molecules showing two variably spaced heads, referred to as offset dimers. A second group of 322 molecules showed only a single head >40 nm from its nearest neighbour (single arrows, Fig. 1d,e ). For such heads, summed, normalized pixel values of individual molecules are almost twice those of monomeric dynein (that is, lacking the N-terminal GST) bound to MTs, and are within error the same as those of the combined heads of offset dimers ( Fig. 1f ; method in Supplementary Fig. 4 ). This shows that such heads are dimers with superposed heads, not monomers. Thus GST-380H7 dimers stepping along MT adopt a variety of configurations, including an abundant one in which the two heads are very closely superposed.
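The dimer-versus-monomer assignment above rests on a simple intensity criterion: a background-subtracted, summed pixel density close to twice the monomer reference indicates two superposed heads. A toy Python sketch of that test follows; the box handling, normalization and tolerance are invented for illustration and are not the authors' processing parameters.

```python
import numpy as np

def summed_density(particle_box, background_box):
    """Sum of pixel values relative to an adjacent MT/background patch."""
    return particle_box.sum() - background_box.mean() * particle_box.size

def classify(head_density, monomer_ref, tol=0.3):
    """Score a normalized head density against the monomer reference."""
    ratio = head_density / monomer_ref
    if abs(ratio - 2.0) < 2.0 * tol:
        return "superposed dimer (two heads in one density)"
    if abs(ratio - 1.0) < tol:
        return "monomer-like (single head)"
    return "ambiguous"

# Stand-in for a measured, normalized head density (~2x the monomer value):
rng = np.random.default_rng(2)
monomer_ref = 1.0
head = rng.normal(2.0, 0.1)
print(classify(head, monomer_ref))
```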
Global averages of MT-aligned offset and superposed dimers show indications of the stalk binding to the MT, oriented at a moderate angle to the MT surface and pointing towards the minus end of the MT ( Fig. 1g,h ). However, the second head of offset dimers is smeared out and structural details are obscured, partly because dynein’s head position is a variable distance from the MT surface. To see detail and quantify this flexibility, we further analysed superposed and offset configurations. Structure of the superposed dimer Further alignment of superposed dimers, using only features of dynein, shows that they display the structure of a circular ring with a pronounced central channel ( Fig. 2a ). Extensive classifications designed to reveal slight displacements between the two rings did not detect any. Thus, the two rings are accurately superposed and both lie in approximately parallel planes that are also parallel to the MT long axis, such that they are seen face-on in this side decoration of the MT. The perpendicular distance from the MT surface to the ring centre is 14.4±3.0 nm (mean±s.d., n =322). Figure 2: Structure and flexibility of superposed dimers. ( a , b ) Averaged images of superposed dimers from sparse regions. ( a ) Classified to reveal the rings and stalks connecting the motor to the track. Stalk flexibility alters the distance between the rings and MT surface. ( b ) Classified to reveal the GST (arrows) variably positioned between the ring and the MT surface. Number of images in each class varies between 10 and 23. ( c – e ) Averaged images of dimers from crowded regions. ( c ) Well-populated classes show the stalk and stalkheads clearly despite stalk flexibility. ( d ) Combined images including those from ( c ) show that flexibility occurs mainly in the plane of the image (double-headed arrow). Hinge between stalk and stalkhead (white arrow) and binding site at the interface of tubulin subunits (bracket) are indicated. ( e ) Class averages of aligned head domains showing the ring substructure. Prominent peripheral features (asterisks in middle panel) are compatible with features (asterisks) seen in computed projections ( g ) of a model ( h ) of the ADP-dynein superposed dimer bound to the MT (chimera of dynein motor from PDB 3VKG and stalkhead from PDB 3J1T; see Methods and Supplementary Figs 5–7 ). ( f ) Diagram depicting four MT protofilaments, staggered axially by 0.9 nm, each comprising dimers of α- (grey) and β- (white) tubulin subunits at 8.3-nm intervals. Stalkhead binding sites (black triangles) deduced for the superposed dynein configuration on adjacent protofilaments are illustrated. ( h ) Surface rendered model of the superposed dimer. Motor structure coloured as in Fig. 1a . AAA + domains (1–6) are indicated. The GST dimer is depicted connecting to the linkers by magenta springs. ( i ) Enlargement of boxed (stalkhead) region of ( h ) showing one stalk depicted as backbone ribbons and showing the location of the invariant prolines P3371 and P3496 (red spacefill) and the position of the flexible hinge (black ring). ( j ) Model in ( h ) viewed looking towards the MT minus end; the front-to-back arrangement of the two motors shows how the linker (L) and C-terminal domain (C) of the two motors face one another (red arrow). Scale bars in ( a ) for ( a – c , e ), and in ( d ) for ( d , f – h and j ) are 8 nm.
Full size image The GST dimer was located, by classification around the superposed rings, variably positioned between the rings and the MT surface, close to the stalk on the MT plus end side (arrows, Fig. 2b ). It is likely that attachment of bulky probes to the GST for stepping studies 25 , 26 , 32 , 33 sterically hinders this conformation. Because GST marks the end of dynein’s linker, this location means that the linkers must be in front of the rings (that is, nearer the observer) in this view ( Fig. 2h ) 8 , 9 , 20 . This location also means that both linkers are close to the unprimed conformation ( Fig. 2h ), which further indicates that both stalkheads of the superposed dimer are bound to the MT, as this linker position is associated with tight binding to MTs 34 . The MT-bound stalk is clearly visible ( Fig. 2 ). Classification around the superposed rings did not detect a second stalk, indicating that the two stalks are superposed. Binding sites on adjacent MT protofilaments can produce such stalk superposition because protofilaments are staggered axially by only 0.9 nm in a left-handed helix 35 compared with the ∼ 2-nm stalk diameter ( Fig. 2f ). The stalkhead nearer the observer is thus 0.9 nm ahead of the other stalkhead. Greater detail of the ring, stalk, stalkhead and adjacent MT lattice of the superposed dimer is revealed by selecting ∼ 1,500 more examples from the full data set ( Fig. 2c,e ) to improve signal-to-noise in image averages. The superposed stalkheads are attached to the MT at the subunit interface within the tubulin dimer ( Fig. 2d ) and join the stalk at an abrupt kink ( Fig. 2d ). Superposed dimers show striking variation in ring distance from the MT, which arises principally from flexion at the stalk-stalkhead junction ( Fig. 2a,c,d ; Supplementary Movie 3 ). Since the rings remain in plane at all angles, this junction acts as a hinge rather than a universal joint, with its rotation axis tangential and orthogonal to the axis of the cylindrical MT. Given both GST and hinge flexibility, it is remarkable that the rings are so often so closely superposed. This suggests that superposition might be maintained by direct head–head interactions. Atomic model of the superposed dimer on MT To gain insights into possible interactions between the two superposed motors and the structural basis of flexibility at the stalk-stalkhead hinge, we used recent structural data 8 , 36 to build atomic models of the MT-bound superposed dimer to compare with the EM data ( Supplementary Figs 5–7 ). The back-to-back head arrangement found in a D. discoideum crystal structure of monomers 8 corresponds poorly with our data ( Supplementary Fig. 7a–c ), and the stalkheads face in opposite directions, incompatible with simultaneous binding to the polar MT. Combinations of the two different head-stalk crystal structures provide four distinct front-to-back dimeric models ( Supplementary Fig. 6d–g ). Such models are compatible with EM data in stalk angle and linker location, but the rings are foreshortened because they are tilted out of the plane of the EM view ( Supplementary Fig. 7d–g ). Rotating the rings into this plane ( Supplementary Fig. 6h ) produces a model ( Fig. 2h ; Supplementary Movie 4 ) that closely resembles the head structures seen in our data ( Fig. 2g cf. Fig. 2c,e ). In this configuration, the linker of the motor further from the observer is juxtaposed to the C-terminal domain of the nearer motor ( Fig. 
2j ), suggesting these are the components that might interact to maintain superposition of the rings. It is notable that unlike most dyneins (including human and D. discoideum dyneins), the much-studied yeast dynein lacks most of the C-terminal domain 13 , 14 , so such contacts, if present, would be different. The atomic model of dynein on MT shows that the stalk-stalkhead hinge coincides with the location of the invariant proline residues 14 in the stalk ( Fig. 2i ). Here the two stalk helices are superposed in this view, which allows hingeing in the plane observed without stretching the helices. Flexion at the same position and in the same plane was also found in molecular dynamics simulations of the stalkhead plus stalk in the absence of MT 37 , which suggests that the behaviour of this hinge is qualitatively independent of strong attachment of the stalkhead to MT. Structure and diversity of offset dimers Both rings of offset dimers are revealed by careful alignment and classification ( Fig. 3a ). They appear circular, with the central channel clearly visible, regardless of ring–ring separation distance. Thus the plane of each ring lies approximately parallel to the axis of the MT, like the rings in superposed dimers. In many dimers the rings partially overlap ( Fig. 3a , bottom row) implying that such rings must be offset azimuthally (that is, around the MT axis) to avoid steric clash. GST is not seen, indicating that its position is highly variable within the offset dimer. Variable position of the aligned rings with respect to the MT smears out the MT (cf Fig. 1g ) and consequently also the stalks, which are rarely visible in these averages. Nevertheless, the ring-MT distances of both leading and trailing rings are very similar to those of superposed dimers, indicating that both motors are bound to MT ( Supplementary Fig. 8 ). Figure 3: Flexibility and intramolecular tension in offset dimers. ( a ) Example class averages of offset dimers classified to show the relative position of the two rings (see Methods) and arranged in order of decreasing ring–ring separation. The leading ring is shown at a constant position. ( b ) Deduced stalk angle (left cartoon) shown as cumulative percentage plot for trailing (cyan) and leading (red) motors, with superposed dimers (black) for comparison. Right cartoon illustrates the mean angle and the wide spread of head positions due to stalk hingeing, with numbers referring to the angle of each subsidiary motor cartoon away from the mean, expressed as a multiple of s.d. ( c ) Histograms of distance between ring centre and stalkhead position (cartoon). Trailing motor (mean 13.5±3.1 nm s.d.), leading motor (13.6±2.9 nm) and superposed dimer (13.8±2.4 nm). ( d ) Image averages of motors aligned according to stalkhead position, excluding motors with extreme stalk angles for clarity. Images shown at the same scale as histograms in ( c ). ( e ) Scatter plots of stalk angle of trailing and leading motors (see cartoon in ( g ); note GST (grey) connecting the motors) plotted as a function of the separation between stalkheads in each dimer. Also shown, running averages (darker points), using a window size of 100 data points, and linear regression lines (black, trailing: y =40.0+0.222x; leading: y =44.1−0.200x; n =359; slopes significantly different from zero at P <0.05, one-tailed t -test). ( f ) As ( e ), but depicting difference between leading and trailing stalk angle of dimers. ( g ) s.d. of the running average values shown in ( f ). 
( h ) Histogram of axial ring separation measured from trailing to leading ring (cartoons). Most dimers have positive ring separation, others have negative separation, illustrated by the inset cartoons. ( i ) Histogram of stalkhead separation (cartoon). Scale bar, 20 nm ( a ), 10 nm ( d ). Full size image To visualize the stalks in offset dimers, we surmised that, as in superposed dimers, the observed variation in ring-MT distance arises from flexion of the motor as a rigid body at the stalk-stalkhead hinge. The constant distance from ring centre to hinge allows calculation of the stalk angle for each motor by trigonometry and thus prediction of the stalkhead location on the MT in the minus end direction from the ring ( Supplementary Fig. 8a ). Alignment of the stalkheads based on this prediction indeed reveals variably angled stalks with stalkheads attaching both leading and trailing motors to the MT ( Fig. 3c,d ; Supplementary Movie 5 ). This confirms that both motors of offset dimers are bound to the MT, and that both stalks point to the minus end. The calculated stalk angles are remarkably similar for leading and trailing motors (41.9°±13.7° and 42.7°±13.7°, respectively; mean±s.d.; Fig. 3b ) as well as for superposed dimers from sparse regions analysed in this way (41.5°±11.5°; n =318). The stalk-stalkhead hinge causes the distance the head lies behind the stalkhead to vary over a wide range ( ∼ 20 nm; Fig. 3c ). Relative independence of the two motors linked through the GST tether is implied by the broad scatter of stalk angles of leading and trailing motors ( Fig. 3e ; Supplementary Fig. 8c ). Consequently, and because of the wide range of stalk angles, the two stalks of the dimer sometimes cross, in which case the leading stalkhead attaches to the (low angle) trailing ring ( Fig. 3h ). We define such a motor as leading because it is the stalkhead that anchors the motor to the MT. The broadly distributed axial separation of the rings thus includes negative values ( Fig. 3h ). The axial separation between the stalkheads of offset dimers has a broad distribution ( Fig. 3i ). The discrete peaks expected from the spacing of MT-binding sites ( Fig. 2f ) are absent, possibly due to alignment and measurement errors; flexibility elsewhere in the motor that compromises the assumptions of the trigonometric method; and dimers attached to two protofilaments that form a seam in the 14_3 MT 35 and are thus further offset by ∼ 4 nm. The distribution includes very small separations, indicating that some dimers have superposed stalkheads but offset rings. Most stalkheads are separated by ∼ 8 nm, fewer by ∼ 16 nm and few by ∼ 24 nm. Mechanical properties of dynein The flexibility shown by our images of dynein allows deductions to be made that advance understanding of the mechanical properties of the molecule that underlie its function. The stalk-stalkhead hinge is a major site of compliance within the MT-bound dimer ( Supplementary Movies 3 and 5 ; see also ref. 37 ). Applying the equipartition theorem to the measured variance in angle 38 yields estimates of the torsional stiffness of this spring-like hinge in leading, trailing and superposed moieties of 72, 71 and 101 pN nm rad −2 , respectively. Given that the mean stalk angle is ∼ 42°, this yields apparent cantilever stiffnesses of these attached motors (that is, their resistance to axial forces applied at the base of the stalk) of 1.07, 1.03 and 1.53 pN nm −1 , respectively (see Methods).
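The torsional stiffness values can be checked directly from the quoted angular spreads via the equipartition theorem, κ = k B T/var(θ). A short worked example follows; room-temperature k B T ≈ 4.11 pN nm is the only added constant, and converting κ to the quoted cantilever stiffnesses additionally requires the lever geometry given in the paper's Methods, which is not reproduced here.

```python
import numpy as np

KT = 4.11  # thermal energy at ~298 K, pN nm

# Equipartition: 0.5 * kappa * <theta^2> = 0.5 * kT, so kappa = kT / var(theta).
# Angular s.d. values are those quoted above for each motor class.
for label, sd_deg in [("leading", 13.7), ("trailing", 13.7), ("superposed", 11.5)]:
    var_rad = np.deg2rad(sd_deg) ** 2
    kappa = KT / var_rad  # torsional stiffness, pN nm rad^-2
    print(f"{label}: kappa ≈ {kappa:.0f} pN nm rad^-2")
# -> ~72, ~72 and ~102 pN nm rad^-2, matching the quoted 72, 71 and 101
#    up to rounding of the s.d. values.
```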
The higher stiffness of the superposed dimer may be explained by the two springs being connected in parallel through interactions between the superposed heads, rather than being almost independent. The data provide evidence of mechanical communication between the two offset motors. If stalkheads are attached far apart on the MT, the GST tether should pull the heads together, producing a shallower leading stalk angle and a steeper trailing stalk angle. Such trends are indeed apparent despite the broad scatter: both the regression slope and the running average of stalk angles show the predicted dependence on stalkhead separation ( Fig. 3e ); the dependence appears linear and is similar in magnitude for leading and trailing stalks (−0.20° nm −1 and +0.22° nm −1 , respectively). Likewise, the pairwise difference between leading and trailing stalk angle of a dimer has the predicted negative dependence on stalkhead separation (−0.42° nm −1 ; Fig. 3f ). The s.d. of this measurement also markedly decreases with increasing stalkhead separation ( Fig. 3g ), indicating that the positions of the two heads become more correlated. The regression equations ( Fig. 3e ) can be used to estimate, for any given stalkhead separation, the mean axial ring centre separation. Thus, stalkhead separations of 0.9, 8.3, 16.6 and 24.9 nm (see Fig. 2f ) yield mean ring centre separations of −0.1, 6.6, 14.2 and 21.7 nm, respectively. The time-averaged tension developed within the offset dimer can be estimated by combining the torsional stiffness of the stalk-stalkhead hinge derived above with the dependence of stalk angle on stalkhead separation. This yields estimates per nm of stalkhead separation of 0.031 and 0.033 pN using leading and trailing motor data, respectively, averaging 0.032 pN. Thus, for dimers with stalkheads separated by 8.3, 16.6 or 24.9 nm, the time-averaged tension would be 0.26, 0.53 or 0.79 pN, respectively. All components of the dynein dimer will experience this tension in series, and thus the stiffness of the elastic linkage between the two rings in this GST dimer can be estimated as 0.035 pN nm −1 by dividing the increment of tension by the increment of ring–ring separation (see above). The increase of inter-head tension with step length will bias towards shorter steps. Discussion We succeeded in seeing dimeric dynein attached to MTs in ATP by using Dictyostelium cytoplasmic dynein motors dimerized by GST at the end of the linker. They also have the stalkhead replaced by one derived from a human dynein, which increases the fraction of dimers found attached to the MT in ATP by cryo-EM. Several lines of evidence indicate that this construct shares many properties with intact dynein. First, for our construct attached to MT in ATP, the stalk angle and the position of GST, close to the MT surface, are consistent with the stalk angle and the position of the linker-tail junction of intact dynein bound in rigor on MT 39 . Second, in the absence of MT the range of head–head separations allowed by GST dimerization ( Supplementary Fig. 1c ) is comparable to that of intact dynein 22 , 23 , 39 , 40 . Third, our constructs form compact phi particles ( Supplementary Fig. 1b,e ) with a motor arrangement similar to intact dynein 22 , 40 , 41 . Finally, yeast GST-dynein movement on MTs is comparable to that of intact yeast dynein 25 . Thus, there are strong grounds for supposing that our findings are informative for understanding the behaviour of intact cytoplasmic dynein. Our cryo-EM of stepping dynein reveals a great diversity of structures.
Not only is there the variety in the separation of stalkheads along the MT anticipated from step-measuring studies, but also a flexibility at the stalk-stalkhead junction that produces a continuum of structures that is especially marked for offset dimers. We have been able to exploit this flexibility to deduce estimates of mechanical properties of dynein attached to its track. When stalkheads are attached far apart along the MT, the internal stress within the dynein dimer is expressed partly as a change in the angles of both stalks, which brings the heads closer together, but principally as an extension of the linkage between the two heads. We deduce a value of 0.035 pN nm −1 for the elastic spring constant of this head–head linkage. Modelling the behaviour of a dimerized yeast dynein 33 has suggested a value of the same order (0.083 pN nm −1 ). The molecular basis of this elastic linkage is unclear. If it were an unstructured polypeptide behaving as a worm-like chain, the deduced low stiffness over the wide observed range of head separations would require a polypeptide of several hundred amino acids, which is not present in our dimeric dynein. An alternative which invites future investigation is flexion of the two linker subdomains away from the unprimed conformations that they adopt in the superposed dimer. A natural interpretation of our finding that superposed and offset heads are equally abundant is that these structures alternate during stepping and that each has a similar lifetime. An example of how such a stepping cycle could coordinate with ATPase activity and how the two stalkheads would move along the MT is shown in Fig. 4b . Strict alternation is not, however, implied by our data, and other examples that demonstrate more freedom, including stepping along single protofilaments and movements between protofilaments, are shown in Supplementary Fig. 9 . If the lifetimes of the two structures are unequal, the one with shorter lifetime should occur more often during a stepping sequence to yield the observed similar abundance. Figure 4: Summary of dimer configurations in ATP and tentative stepping model. ( a ) Dimer configurations identified in this study. GST dimer is shown as grey ellipse. ( b ) An example of a two-step advance along the MT, tentatively correlated with consumption of two molecules of ATP and showing the stalkhead binding sequence both as viewed from the rings and in the view seen in our data. The mobile part of the linker is illustrated (magenta) switching to the primed conformation during stalkhead detachment, as well as a possible trajectory of each detached stalkhead along the MT protofilament. In this example, 8 nm stepping occurs along two adjacent protofilaments and superposed and offset dimers alternate during stepping, illustrated here as an inchworm progression. Examples of alternative stepping patterns are illustrated in Supplementary Fig. 9 . ( c ) In superposed dimers, the stalkhead-stalk hinge allows flexing. In offset dimers, the compliant GST-linker bridge between rings (black coil spring) additionally allows quasi-independent flexing of the heads and pulls the heads towards one another with increasing stalkhead separation. An important parameter for a processive, cargo-carrying motor protein is its duty ratio, that is, the fraction of the motor cycle during which it is strongly attached to its track. 
The duty ratio of the dimeric dynein that we have imaged is ∼ 0.9 if determined assuming independent heads 42 and a ∼ 600-nm mean run length on MT ( Supplementary Table 1 ; see Methods). This is a higher value than our previous estimate (0.6) for dimeric D. discoideum dynein 28 , and much higher than that of the monomer (0.2) 28 . By EM we have not detected dimers with one detached motor, suggesting that each motor is almost always attached, that is, has a duty ratio near 1. What structural mechanisms could enhance processivity of the dimer over that of monomer to reduce the probability of both heads detaching simultaneously? In the superposed dimer the two motors appear identical, so it is unclear why either motor would behave differently from a monomer, unless this is mediated by contacts between the two heads. In offset dimers, we find that intramolecular force biases the orientation of the two stalks in opposite directions (which has been shown to alter the force required to detach them from the MT 24 , 33 ), and this force may have further impacts on head structure, such as linker position. When dynein is pulling against a load, formation of the superposed dimer may be favoured because the advancing head may more often attach only 0.9 nm ahead of its partner, and the compact structure may then provide additional stability, helping dynein function as an anchor. This reasoning thus predicts that the superposed configuration is dominant in, for example, the mitotic spindle (see ref. 1 ). The ratio of the frequency of superposed and offset rings among molecules with superposed stalkheads (318/ ∼ 31; see Fig. 3i , first bar) yields an estimate of ∼ 10 for the equilibrium constant for the head–head association when the two motors are attached to the MT under no load, and resistive load would be expected to increase this value. Our results provide direct evidence for the mechanism of dynein stepping in ATP, since we have observed dynein directly during ATP-fuelled movement along MT. This establishes that (1) the angle between stalk and MT is ∼ 42° for both leading and trailing motors; (2) the ring of the attached motor maintains a constant orientation in which its plane is parallel to the axis of the MT; (3) the linkers point towards the MT track, rather than away from it ( Fig. 4 ). In our atomic model derived from earlier EM and crystal structure data ( Fig. 2h ) the stalk angle is similar ( ∼ 45°), but steeper angles of 50°-70° are seen for monomeric dynein densely labelling MTs 36 and for axonemal dyneins 43 , 44 , 45 . With this motor configuration on the MT, and as is shown in Fig. 4b , when ATP binding to the ring causes stalkhead detachment from 46 (or weak binding to 40 , 47 ) the MT, the subsequent linker swing to the primed position is in a direction which biases the stalkhead towards the minus end of the MT, that is, in the known direction of dynein movement. This is because, in order for the stalkhead to rebind stereospecifically to the MT, the stalk must be pointing towards the MT minus end. Rebinding to the MT, followed by linker swing back to the unprimed position (that is, the power stroke), will then drag dynein's cargo towards the MT minus end. Perhaps the most striking feature of stepping dynein is the great flexibility between the ATPase domain and the track binding domain ( Fig. 4c ), which is in marked contrast to myosin and kinesin motors. 
Because the hinge is close to the MT surface, at the stalk-stalkhead junction, the dynein head swings over a wide range ( ∼ 20 nm) compared with the ∼ 8 nm spacing of binding sites on the MT. This suggests that fluorescent tags attached to the heads for stepping studies 26 , 27 may not reliably report the position of the MT-bound stalkheads. During stepping, this flexing of the attached motor will allow the detached stalkhead of its partner motor great freedom to explore the surface of the MT to find its next binding site. This, together with extensibility between the two heads of the dimer, which will differ between the native dimer and the artificial dimers used here and elsewhere, provides a structural basis for the great range of step sizes seen in dynein stepping studies 24 , 25 , 26 , 27 , 48 . This inherent flexibility of the dynein motor, and the quasi-independent flexibility of the two motors of offset dimers, implies that it is wrong to imagine stepping dynein as having a single structure, even when both motors are attached to the MT. It will therefore be a challenge to determine the structure of any dynein-MT complex at high resolution, since current methods for this all combine data from many molecules. Dynein flexibility also raises new questions about the nature of the allosteric communication between the ATPase cycle in the head and the MT binding affinity of the stalkhead that is vital to dynein's many cellular functions. Methods Dynein expression and purification D. discoideum cytoplasmic dynein was expressed in cells derived from the D. discoideum Ax2 strain. The plasmid carrying the dynein gene was introduced into the Dictyostelium cells by electroporation, and transformed cells were selected in HL-5 medium supplemented with 10 μg ml −1 blasticidin S, 10 μg ml −1 G418 and 10 μg ml −1 tetracycline on culture dishes at 22 °C for 1 week. The transformed cells were grown at 22 °C with shaking until the cell density reached ∼ 5 × 10 6 cells per ml. The medium was then replaced with one without tetracycline and blasticidin S to induce the expression of the recombinant dynein. After being cultured for an additional ∼ 24 h, the cells were collected by centrifugation. Purification was carried out at 4 °C or on ice. The cells were lysed by sonication in PMG buffer (100 mM PIPES-KOH, 4 mM MgCl 2 , 0.1 mM EGTA, 0.9 M glycerol, 1 mM 2-mercaptoethanol, 10 μg ml −1 chymostatin, 10 μg ml −1 pepstatin A, 50 μg ml −1 leupeptin, 500 μM PMSF and 0.1 mM ATP (pH 7.0)) supplemented with 10 mM imidazole, centrifuged at 24,000 g for 20 min, and the supernatant was centrifuged further at 187,000 g for 60 min. The resulting high-speed supernatant was mixed with nickel nitrilotriacetic acid (Ni-NTA) agarose (Qiagen, Hilden, Germany) for 1 h. After the resin had been washed with PMG buffer supplemented with 20 mM imidazole, the adsorbed proteins were eluted with PMG buffer supplemented with 250 mM imidazole. Eluted fractions were supplemented with 5 mM EGTA, 0.1 mM EDTA and 150 mM NaCl and mixed with 0.5 ml of FLAG-M2 affinity gel (Sigma) for 1 h. 
After the resin had been washed with PMEGS buffer (100 mM PIPES-KOH, 150 mM NaCl, 4 mM MgCl 2 , 5 mM EGTA, 0.1 mM EDTA, 0.9 M glycerol, 1 mM DTT, 10 μg ml −1 chymostatin, 10 μg ml −1 pepstatin A, 50 μg ml −1 leupeptin, 500 μM PMSF and 0.1 mM ATP (pH 7.0)) and PMEG30 buffer (30 mM PIPES-KOH, 4 mM MgCl 2 , 5 mM EGTA, 0.1 mM EDTA, 0.9 M glycerol, 1 mM DTT, 10 μg ml −1 chymostatin, 10 μg ml −1 pepstatin A, 50 μg ml −1 leupeptin, 500 μM PMSF and 0.1 mM ATP (pH 7.0)), the recombinant dynein was eluted with PMEG30 buffer supplemented with 200 μg ml −1 FLAG peptide (Sigma). The eluates were centrifuged at 100,000 g for 15 min and the supernatant was used for further study. Purified proteins were shipped from Japan to the UK on ice or stored on ice and used within 3 days 49 , 50 . For a monomeric wild-type motor domain construct, we used HFB380 (ref. 51 ). To generate dimeric GST-380 motor domains, the Schistosoma japonicum GST coding region (nucleotides 258-917 of pGEX4T-3 vector (GenBank accession no U13855 )) was inserted between the His 6 -FLAG-tag and the 380-kDa motor domain (V1388-I4730) of the HF380 construct 8 with bridging sequence GGAAAVDK between GST and dynein. As part of a larger study of stalkhead properties, we discovered stalkhead chimera constructs with enhanced MT-binding affinity (monomeric HFB380H7 and dimeric GST-380H7), made by replacing the D. discoideum stalkhead sequence (A3372–K3495) with that of H. sapiens axonemal dynein heavy chain 7 (I2676–A2811; KIAA0944 in the HUGE database, Kazusa DNA Research Institute). Recombinant dynein constructs were expressed in D. discoideum and purified using Ni-NTA agarose (Qiagen) and anti-FLAG M2 affinity gel (Sigma-Aldrich) as described above. Negative-staining EM using 100 nM GST-380H7 (no MT) showed the preparation was all dimeric; no monomers were seen, indicating that neither GST dissociation nor proteolytic cleavage into monomers occurred at detectable levels. Single-molecule fluorescence motility assays To observe the movement of GST-380 and GST-380H7 dimers, we genetically inserted a SNAP-tag (New England Biolabs) into the AAA2 module of the motor domain (S2476-S2477) 13 and a biotin-tag between the FLAG-tag and GST 52 . Purified SNAP-tagged GST-380 and GST-380H7 dimers (‘GST-380-SNAP’ and ‘GST-380H7-SNAP’) were fluorescently labelled with DY-647 (Dyomics) via the SNAP-tag by incubating 0.5 μM GST-380H7 with 5 μM SNAP-Surface 647 (New England Biolabs) overnight at 4 °C. The final labelling ratio of DY-647 per dynein motor domain was ∼ 0.5. Free DY-647 was removed by a Micro Bio-Spin chromatography column (Bio-Rad) 13 . Porcine tubulin was fluorescently labelled using Oregon Green 488 carboxylic acid succinimidyl ester (Invitrogen) following the procedure of Hyman et al . 53 Pelleted MTs were resuspended at high concentration in 0.1 M Na-HEPES, pH 8.6, 1 mM MgCl 2 , 1 mM EGTA and 40% (v/v) glycerol at 37 °C, reacted for 1 h with 10 mM dye, quenched with BRB80 (80 mM K-PIPES, 1 mM MgCl 2 and 1 mM EGTA, pH 6.8) supplemented with 100 mM K-glutamate and 40% glycerol, and separated from unbound dye by centrifugation through a sucrose cushion. This was followed by two cycles of temperature-dependent polymerization and depolymerization in BRB80, collecting the pellet or supernatant, respectively. MTs were prepared by incubating a mixture of 4 μM non-labelled tubulin and 40 pM Oregon Green 488-labelled tubulin with 0.2 mM Mg-guanosine-5'-((α,β)–methyleno) triphosphate (GMPCPP) at 37 °C for 30 min and then stabilized with 40 μM paclitaxel. 
Single-molecule imaging was carried out at room temperature (25 °C) 50 . MTs were immobilized on a surface of the glass chamber via tubulin antibody (TU-01, Millipore). After washing the chamber with assay buffer (20 mM PIPES, 10 mM K-acetate, 4 mM MgSO 4 , 1 mM EGTA, 0.4 mg ml −1 casein, 10 μM paclitaxel, 1% 2-mercaptoethanol, 10 mM glucose, 85 U ml −1 glucose oxidase, 1,300 U ml −1 catalase and 1 mM ATP, pH 7.0), 100 pM of dynein dimer was introduced into the chamber in the same solution. Movement of fluorescently labelled dynein was observed under an objective-type total internal reflection fluorescence microscope (Olympus IX71 equipped with an Olympus PlanApo, NA 1.45, × 100 objective lens). The sample was illuminated by a blue laser (Showa Optronics, D488C-50) or a red laser (Showa Optronics, D635C-35). Images were acquired at 100 frames per second with an iXon+ back-illuminated electron-multiplying charge-coupled device camera (Andor). Movement of dynein, using the DY-647 fluorescence, was determined by a two-dimensional Gaussian fitting algorithm 54 using custom software 55 . Run length and duration are the total distance of movement and the total time, respectively, during a single interaction event of a dynein molecule on a MT. The mean run length ( l mean ), mean duration ( τ mean ) and mean velocity ( v mean ) were calculated by fitting with the cumulative distribution functions CDF( l ) = 1 − exp(−( l − l 0 )/ l mean ), CDF( τ ) = 1 − exp(−( τ − τ 0 )/ τ mean ) and CDF( v ) = (1/2)[1 + erf(( v − v mean )/√(2 σ 2 ))], respectively, to provide binning-independent fittings 56 , where erf is the error function and l 0 (lower limit for detection), τ 0 (lower limit for detection) and σ 2 (variance) are fitting parameters. The duty ratio of each motor of dimeric dynein was estimated using a formalism in which the two heads are assumed to act independently 42 . The average number of steps per run was estimated from the measured average run length (600 nm; Supplementary Table 1 ) divided by an assumed step size. The value of duty ratio ( r ) giving the estimated number of steps was then obtained by successive approximation. We tested step sizes of 16.6, 8.3 and 4.15 nm, the last value representing an average from alternation between superposed and 8.3-nm offset heads (see Fig. 4b ). The average run length corresponds to 36, 72 and 144 steps, respectively, and these would require duty ratios of 0.86, 0.90 and 0.93, respectively. Thus, the duty ratio is ∼ 0.9, relatively insensitive to step size. Dynein preparation for EM On receipt of dynein from Japan within 5 days of purification, FLAG peptide and nucleotides were immediately removed by exchanging the buffer with 10 mM PIPES-KOH (pH 7.0), 4 mM MgSO 4 and 1 mM EGTA using a centrifugal filter device (Amicon Ultra-4, 30 kDa cutoff) and the dynein was both drop-frozen and stored in liquid nitrogen. Beads were quickly thawed just before preparing EM grids. MT preparation for EM Twice-cycled MAP-free porcine tubulin was from Cytoskeleton, Inc. GMPCPP was from Jena Bioscience. Tubulin in the presence of GMPCPP was polymerized using a modification of an earlier protocol 57 . Tubulin (8.1 μM) was incubated on ice in 80 mM PIPES-KOH (pH 6.8), 4 mM MgCl 2 , 1 mM EGTA and 0.64 mM GMPCPP for 20 min, then raised to 37 °C for 30 min for polymerization. To remove GMPCPP, the MT solution was centrifuged at 30,300 g for 5 min at 35 °C, the supernatant was discarded and the pellet was resuspended in 10 mM PIPES-KOH (pH 7.0), 4 mM MgSO 4 , 1 mM EGTA and 400 μM taxol at 27 °C to give 100 μM MT (MT concentration is expressed as tubulin dimer concentration in this paper). 
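The successive-approximation logic of the duty-ratio estimate just described can be illustrated with a short R sketch. This is a simplification we introduce for illustration, not the exact formalism of ref. 42: it assumes a run terminates when both independent heads happen to be detached at the same instant, which reproduces the quoted duty ratios only approximately.

```r
# Illustrative simplification (not the exact formalism of ref. 42): a run
# survives each step with probability 1 - (1 - r)^2 for duty ratio r, so the
# mean number of steps per run is 1 / (1 - r)^2.
mean_run_nm <- 600                          # measured mean run length, nm
step_sizes  <- c(16.6, 8.3, 4.15)           # candidate step sizes, nm
n_steps     <- mean_run_nm / step_sizes     # ~36, 72 and 145 steps per run
duty_ratio  <- 1 - sqrt(1 / n_steps)        # solve 1/(1-r)^2 = n_steps for r
round(duty_ratio, 2)                        # 0.83, 0.88, 0.92 -- close to the
                                            # 0.86, 0.90, 0.93 quoted above
```

The small residual differences reflect the more detailed treatment of ref. 42, but the key observation, that the estimate is relatively insensitive to the assumed step size, is reproduced.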
MTs were stored in liquid nitrogen by rapid freezing and were thawed rapidly just before preparing EM grids. Negative stain EM of dynein in the absence of MTs Carbon-coated copper EM grids were treated under an ultraviolet lamp for ∼ 10 min before applying dynein. 0.2 mM ATP was added to 15–30 nM GST-380 in 50 mM K-Acetate, 4 mM MgSO 4 , 1 mM EGTA, 10 mM PIPES-KOH (pH 7.0) on ice and immediately applied to the grid and negatively stained with 1% uranyl acetate. To reduce changes in structure arising from prolonged adsorption time on the carbon film, we also used the ‘rapid flush’ method devised in our group (see Fig. 1 legend in ref. 58 ): 50–70 μl 1% uranyl acetate was drawn into the tip of a 200 μl Pipetman, the volume dial turned to draw in 5 μl air, then turned again to draw up 5 μl dynein solution, kept separate from the stain by the small air gap. The entire contents of the tip were then ejected across the face of the carbon film, allowing dynein to briefly adsorb, followed within milliseconds by fixation by uranyl acetate. Excess stain was drawn from the side of the grid by filter paper in the usual way, and the grid dried at room temperature. 0.2 mM ATP was added to dynein just before application to the grid. We used 29 nM and 58 nM GST-380 in 50 mM K-Acetate, 4 mM MgSO 4 , 10 mM PIPES-KOH (pH 7.0) or 100 nM GST-380H7 in 2.7 mM MgSO 4 , 0.67 mM EGTA and 6.7 mM PIPES-KOH (pH 7.0). Electron micrographs were recorded with continuous illumination either using a JEOL JEM1200EX EM operated at 80 kV, nominal magnification × 40,000 on film, or using an FEI Tecnai T12 EM operated at 120 kV, nominal magnification × 30,000 using a CCD camera. Cryo-EM of dynein on MTs For GST-380H7 dimeric dynein on MTs in the presence of ATP, the mixture contained 330 nM GST-380H7 dynein, 3.7 μM MT, 2.1 mM MgSO 4 , 0.53 mM EGTA, 38 μM taxol, 0.09% Tween-20, 5.3 mM PIPES-KOH (pH 7.0) and 3.6 mM ATP. The dynein was mixed with MTs without nucleotides, then ATP was added. For dense decoration of dynein on MTs ( Fig. 1c ), the concentration of MTs was halved. For HFB380H7 monomeric dynein on MTs in the absence of nucleotides, the mixture contained 10–30 nM HFB380H7 dynein, 1 μM MT, 2 mM MgSO 4 , 0.5 mM EGTA, 46 μM taxol, 0.1% Tween-20 and 5 mM PIPES-KOH (pH 7.0). Specimen (2.5 μl) was applied to a lacey carbon grid (Agar Scientific S166-4), manually blotted and frozen by plunging into liquid ethane, and subsequently stored in liquid nitrogen. The time from the addition of ATP to freezing was 36–46 s. For GST-380H7, within this time, over 3 mM total ATP would remain in the solution, based on ∼ 10 s −1 per head ATPase rate for dimeric dynein in the presence of 4 μM MT. This is calculated from steady-state MT-activated ATPase data on this dimeric dynein measured in 10 mM PIPES-KOH (pH 7.0), 50 mM K-acetate, 4 mM MgSO 4 , 1 mM EGTA, 10 μM paclitaxel and 1 mM DTT by using a coupled enzymatic assay kit (EnzChek phosphate assay kit, Molecular Probes) 13 . This yielded k cat of 15±0.9 s −1 per head and K m of 2.5±0.7 μM MT. To prevent detachment of the motor from MTs 59 we did not glow discharge the grids and we used manual blotting, by touching the edges of the grid then blotting a single side of the grid for up to 3.5 s using filter paper. The liquid film was typically ∼ 50-nm thick, since intersections between two MTs show no flattening ( Fig. 1c ). The grids were transferred using a Gatan 626 holder filled with liquid nitrogen into an FEI Tecnai F20 EM with a field emission gun operated at 200 kV. 
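Before the imaging details that follow, the ATP-depletion estimate above can be checked with a few lines of arithmetic; the R sketch below uses only the values quoted in this section (k cat, K m, dynein and MT concentrations, and the longest time to freezing).

```r
# Check that over 3 mM ATP remains after 46 s of MT-activated hydrolysis.
kcat <- 15; Km <- 2.5; MT <- 3.7                 # s^-1; uM MT; uM MT in the mix
rate_per_head <- kcat * MT / (Km + MT)           # ~9 s^-1, i.e. the ~10 s^-1
                                                 # per head quoted above
heads_uM    <- 2 * 0.330                         # 330 nM dimer = 0.66 uM heads
consumed_mM <- rate_per_head * 46 * heads_uM / 1000
3.6 - consumed_mM                                # ~3.3 mM ATP left (> 3 mM)
```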
The EM was operated in low-dose mode at a range of defocus ( ∼ 1.5–5.8 μm). We obtained 480 micrographs using a Gatan US4000SP CCD camera at nominal magnification × 25,000. The pixel size was calibrated as 0.454 nm using the 2.30 nm spacing of Tobacco Mosaic Virus (kind gift from Prof. Lomonossoff, John Innes Centre, UK). We used CTFFIND3 60 to estimate the defocus of micrographs and corrected them by phase-flipping using SPIDER 61 . Image contrast was inverted (protein is pale). Determination of MT polarity To determine MT polarity ( Supplementary Fig. 3 ), we used the Tubule J software 62 . Polarity determination was made possible using tubulin polymerized in the presence of GMPCPP to strongly favour MTs with the 14_3 structure 63 , 64 , the polarity of which can be obtained by computational image processing 31 , independently of dynein features. GST-380H7 dynein steps robustly along such GMPCPP-polymerized MTs ( Supplementary Table 1 ) (also ref. 57 ). We analysed 2,313 MTs, unambiguously determining the polarity of 1,082 14_3 MTs 31 . Briefly (see also Supplementary Fig. 3 ), 14-protofilament MTs were selected by analysing the number of Moiré fringes. After computational straightening, 14_4 MTs were identified and rejected based on the spacing along the meridional direction between layer lines in the FFT at ∼ 4-nm spacing. MT polarity was then determined by inspecting the Moiré pattern within the Fourier-filtered image generated using only near-equatorial data ( Supplementary Fig. 3c,d ). Very rare 14_2 MTs were identified and rejected based on their shorter Moiré repeat distance 35 . Dynein particle picking Using the EMAN 1 BOXER 65 , we manually selected 10,080 dynein-sized particles lying alongside polarity-determined MTs. To select isolated dyneins on the MT ( Fig. 1d,e ), we used medium to high defocus (2.6–5.8 μm) micrographs, and selected from the total data set those which had only one object or two objects within 40 nm of one another, in either case with no neighbouring particles within 40 nm of these in both the plus- and minus-end directions. This produced n =711 isolated dyneins of which n =330 had only a single visible head (that is, superposed dimers) and n =381 had two visible heads (that is, offset dimers). To select more superposed dimers from within the remainder of the data set, we performed alignment and classification, selecting classes showing a single head not overlapped by parts of other heads. Such classes may contain single heads that are members of offset dimers with non-overlapping heads, but these are uncommon ( Fig. 3 ) and would be expected to form their own classes due to their weaker density. This yielded a further 1,510 particles. Dynein-MT particle alignment Image alignment, classification and all image analysis were performed using the SPIDER software 61 . Having determined MT polarity, we manually estimated the angle of the MT component of each particle with respect to the x axis in the micrographs, with the MT plus end pointing left, and combined dyneins from both sides of the MT by mirroring those on one side of the MTs. Reference-free alignment of the particles then used the estimated angle of the MT within the micrograph as the starting rotation angle. Initial alignment used all features for translational alignments, but for rotational alignments we excluded the dynein motor itself (using an inner ring radius within the AP RA command) and included features from both edges of the adjacent MT. 
Reference-free alignment was performed for ten iterations. Images in which the MT was misaligned were excluded using conventional image classification techniques 30 , 66 to identify and remove classes with tilted MTs. Alignment of the isolated dynein particle data set (shown in Fig. 1g,h ) was refined by focusing on each ring within each particle. We manually identified the centre of each ring (producing two images from each offset dimer) and masked features outside this with a soft-edged mask. This produced a total of n =1,092 ring images (2 × 381 offset+330 superposed) which were combined into a single image stack for subsequent reference-free translational alignment. Maintaining the rotational alignment obtained in the previous step (to ensure MTs remain horizontal) we refined their translational alignment in two successive rounds, excluding those that were assigned excessively large translations arising from spurious image features (leaving n =374 offset dimers and n =322 superposed dimers). To create class averages showing the heterogeneity of ring configurations in offset dimers ( Fig. 3a ) we applied classification procedures to the ( x , y ) coordinates of each ring. Pixel intensity value measurement of dynein motor domain The dynein ring total pixel values measurement ( Fig. 1f ) was carried out after adjusting each image to ensure the background ice had a mean pixel value of zero. The projected sum of pixel values of each dynein dimer was then calculated as shown in Supplementary Fig. 4 , measured relative to the values of the adjacent MT in each individual particle ( Supplementary Fig. 4b ). Images of monomeric dynein were treated identically to those of superposed dimers, except that MT polarity was not determined. Atomic model building To create hypothetical atomic models of the dynein dimer strongly bound to a MT ( Supplementary Fig. 6 ), we first fitted the tubulin dimer that is part of a murine stalk-stalkhead-tubulin dimer atomic model obtained by cryo-EM analysis (PDB 3J1T 36 ) into a cryo-EM map of a 14_3 protofilament MT (emd_5194 (ref. 67 )). To complete the dynein motor we used the 2.8-Å crystal structures of the ADP-dynein motor domain from D. discoideum lacking the stalkhead (PDB 3VKG 8 ). This comprises two independent dynein monomers (chains A and B), that are arranged back-to-back in the crystal. Chains A and B also have different stalk angles relative to the ring ( ∼ 25°) 8 , which therefore repositions the rings in the MT-bound dynein models depending which chain is fused to the murine stalk. We superposed the stalk using the CCP4 LSQKAB program 68 , to fit the Cα atoms of the distal portions of CC1 and CC2 of the motor domain stalk coiled coil to the corresponding atoms of the murine stalk ( Supplementary Fig. 5 ). To create complete molecule A, we fitted residues 3350–3366 (CC1) and 3496–3514 (CC2) in chain A of 3VKG to residues 3264–3280 (CC1) and 3409–3427 (CC2) in chain A of 3J1T ( Supplementary Fig. 5b ). To create complete molecule B, we fitted residues 3350–3358 (CC1) and 3502–3514 (CC2) in chain B of 3VKG to residues 3264–3272 (CC1) and 3415–3427 (CC2) in chain A of 3J1T ( Supplementary Fig. 5c ). We then measured the distances between Cα atoms among the fitted residues. At the shortest distance among the fitted residues we fused the two structures together. This produced two alternative back-to-back models, in which necessarily only one of the stalks was attached to the MT ( Supplementary Fig. 6b,c ). 
In both these models the stalks are not superposed in the view seen in our data. Therefore we made a third back-to-back model by rotating chains A and B about an axis approximately parallel to the stalks until the stalks superposed ( Supplementary Fig. 6a ). To make four front-to-back models of dimeric dynein on the MT ( Supplementary Fig. 6d–g ), we deleted the detached dynein motor in the back-to-back models shown in Supplementary Fig. 6b,c . We then docked a second complete dynein molecule, either A or B, to a neighbouring tubulin dimer in an adjacent protofilament, by fitting the model into the MT cryo-EM map as above. To rotate each ring into the plane of view (see text), we broke the coiled-coil chains near the stalk-stalkhead junction. Molecular graphics were created using the UCSF Chimera package 69 . Deduction of dynein mechanical properties The torsion spring constant for the dynein stalk-stalkhead hinge was estimated by applying the Equipartition Theorem to the observed distribution of stalk angles, using κ = k B T / σ 2 , where κ is the torsion spring constant (N m rad −2 ), k B the Boltzmann constant, T absolute temperature (293 K in our case) and σ 2 the variance of the measured lever angle (rad 2 ). To express stiffness as the apparent cantilever stiffness ( k in pN nm −1 ) of the MT-attached stalk (that is, the resistance experienced by a force applied parallel to the MT axis at the base of the stalk, where the linker ends), we equated the energy for rotation of the stalk through an angle θ ( κθ 2 /2) with the energy to bend the stalk to move the head through the same distance ( x ) as produced by the angular rotation (that is, kx 2 /2). For a cantilever, length L , at 90° to the MT axis, x = L sin θ . Hence, for small displacements for which sin θ ≈ θ , k = κ / L 2 . When the cantilever is at angle ϕ (in our case ϕ ≈42°), the effective lever length is L sin ϕ , hence k = κ /( L sin ϕ ) 2 . We estimated L =12.3 nm by subtracting the head radius (6.5 nm (ref. 8 )) from our measured mean value (18.8 nm) of the distance from ring centre to hinge. To observe trends in mean stalk angle due to increasing intramolecular tension in offset dimers with increasing stalkhead separation ( Fig. 3e,f ), we generated running averages as follows. Using data sorted by stalkhead separation, the mean angle and mean stalkhead separation of the first 100 data points were calculated. The first data point was then removed, the 101st data point was added and the calculations repeated. The cycle was repeated until all data points had been included. Additional information How to cite this article: Imai, H. et al . Direct observation shows superposition and large scale flexibility within cytoplasmic dynein motors moving along microtubules. Nat. Commun. 6:8179 doi: 10.1038/ncomms9179 (2015). Accession codes GenBank/EMBL/DDBJ: U13855
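As a worked illustration of the 'Deduction of dynein mechanical properties' calculations, the R sketch below strings together the equipartition, cantilever and tension formulas. The stalk-angle s.d. is reported earlier in the paper and is not given in this excerpt, so the 13° used for sigma is an assumed, illustrative value, and the derived kappa and cantilever stiffness should be read as placeholders; the tension-to-stiffness step uses only the values quoted in the Results.

```r
# Equipartition estimate of the stalk-stalkhead torsion spring constant,
# kappa = kB*T / sigma^2, then the apparent cantilever stiffness
# k = kappa / (L*sin(phi))^2 described above.
kBT   <- 1.380649e-23 * 293 * 1e21    # J -> pN.nm (1 J = 1e21 pN.nm); ~4.0
sigma <- 13 * pi / 180                # stalk-angle s.d., rad (ASSUMED value)
kappa <- kBT / sigma^2                # ~79 pN.nm rad^-2 with this sigma
L     <- 12.3                         # lever length, nm (from the text)
phi   <- 42 * pi / 180                # mean stalk angle
k_cant <- kappa / (L * sin(phi))^2    # ~1.2 pN/nm with this sigma

# Tension per nm of stalkhead separation: torque kappa*dtheta acting on the
# effective lever arm L*sin(phi), using the ~0.21 deg/nm mean regression
# slope of stalk angle versus stalkhead separation (Fig. 3e):
slope    <- 0.21 * pi / 180                     # rad per nm of separation
f_per_nm <- kappa * slope / (L * sin(phi))      # ~0.035 pN/nm; cf. the
                                                # 0.031-0.033 pN derived above
# Head-head linkage stiffness from the quoted tensions and the ring-centre
# separations given by the regression equations:
tension  <- c(0.26, 0.53, 0.79)                 # pN, from the Results
ring_sep <- c(6.6, 14.2, 21.7)                  # nm, from the Results
diff(tension) / diff(ring_sep)                  # ~0.035 pN/nm, as in the text
```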
The first images of motor proteins in action are published in the journal Nature Communications today. These proteins are vital to complex life, forming the transport infrastructure that allows different parts of cells to specialise in particular functions. Until now, the way they move has never been directly observed. Researchers at the University of Leeds and in Japan used electron microscopes to capture images of the largest type of motor protein, called dynein, during the act of stepping along its molecular track. Dr Stan Burgess, at the University of Leeds' School of Molecular and Cellular Biology, who led the research team, said: "Dynein has two identical motors tied together and it moves along a molecular track called a microtubule. It drives itself along the track by alternately grabbing hold of a binding site, executing a power stroke, then letting go, like a person swinging on monkey bars. "Previously, dynein movement had only been tracked by attaching fluorescent molecules to the proteins and observing the fluorescence using very powerful light microscopes. It was a bit like tracking vehicles from space with GPS. It told us where they were, their speed and for how long they ran, stopped and so on, but we couldn't see the molecules in action themselves. These are the first images of these vital processes." An understanding of motor proteins is important to medical research because of their fundamental role in complex cellular life. Many viruses hijack motor proteins to hitch a ride to the nucleus for replication. Cell division is driven by motor proteins and so insights into their mechanics could be relevant to cancer research. Some motor neurone diseases are also associated with disruption of motor protein traffic. The team at Leeds, working within the world-leading Astbury Centre for Structural Molecular Biology, combined purified microtubules with purified dynein motors and added the chemical fuel ATP (adenosine triphosphate) to power the motor. Dr Hiroshi Imai, now Assistant Professor in the Department of Biological Sciences at Chuo University, Japan, carried out the experiments while working at the University of Leeds. He explained: "We set the dyneins running along their tracks and then we froze them in 'mid-stride' by cooling them at about a million degrees a second, fast enough to prevent the water from forming ice crystals as it solidified. Then using a cryo-electron microscope we took many thousands of images of the motors caught during the act of stepping. By combining many images of individual motors, we were able to sharpen up our picture of the dynein and build up a dynamic idea of how it moved. It is a bit like figuring out how to swing along monkey bars by studying photographs of many people swinging on them." Dr Burgess said: "Our most striking discovery was the existence of a hinge between the long, thin stalk and the 'grappling hook', like the wrist between a human arm and hand. This allows a lot of variation in the angle of attachment of the motor to its track. "Each of the two arms of a dynein motor protein is about 25 nanometres (0.000025 millimetre) long, while the binding sites it attaches to are only 8 nanometres apart. That means dynein can reach not only the next rung but the one after that and the one after that and appears to give it flexibility in how it moves along the 'track'." 
Dynein is not only the biggest but also the most versatile of the motor proteins in living cells and, like all motor proteins, is vital to life. Motor proteins transport cargoes and hold many cellular components in position within the cell. For instance, dynein is responsible for carrying messages from the tips of active nerve cells back to the nucleus and these messages keep the nerve cells alive. Co-author Peter Knight, Professor of Molecular Contractility in the University of Leeds' School of Molecular and Cellular Biology, said: "If a cell is like a city, these are like the truckers on its road and rail networks. If you didn't have a transport system, you couldn't have specialised regions. Every part of the cell would be doing the same thing and that would mean you could not have complex life." "Dynein is the multi-purpose vehicle of cellular transport. Other motor proteins, called kinesins and myosins, are much smaller and have specific functions, but dynein can turn its hand to a lot of different functions," Professor Knight said. For instance, in the motor neurone connecting the central nervous system to the big toe—which is a single cell a metre long—dynein provides the transport from the toe back to the nucleus. Another vital role is in the movement of cells. Dr Burgess said: "During brain development, neurones must crawl into their correct position and dynein molecules in this instance grab hold of the nucleus and pull it along with the moving mass of the cell. If they didn't, the nucleus would be left behind and the cytoplasm would crawl away." The study involved researchers from the University of Leeds and Japan's Waseda and Osaka universities, as well as the Quantitative Biology Center at Japan's Riken research institute and the Japan Science and Technology Agency (JST). The research was funded by the Human Frontiers Science Program and the Biotechnology and Biological Sciences Research Council (BBSRC).
10.1038/ncomms9179
Biology
A call to arms: Enlisting private land owners in conservation
Niall G. Clancy et al, Protecting endangered species in the USA requires both public and private land conservation, Scientific Reports (2020). DOI: 10.1038/s41598-020-68780-y Journal information: Scientific Reports
http://dx.doi.org/10.1038/s41598-020-68780-y
https://phys.org/news/2020-07-arms-private-owners.html
Abstract Crucial to the successful conservation of endangered species is the overlap of their ranges with protected areas. We analyzed protected areas in the continental USA to assess the extent to which they covered the ranges of endangered tetrapods. We show that in 80% of ecoregions, protected areas offer equal (25%) or worse (55%) protection for species than if their locations were chosen at random. Additionally, we demonstrate that it is possible to achieve sufficient protection for 100% of the USA’s endangered tetrapods through targeted protection of undeveloped public and private lands. Our results highlight that the USA is likely to fall short of its commitments to halting biodiversity loss unless more considerable investments in both public and private land conservation are made. Introduction In 2010, 194 countries committed to halting biodiversity loss by adopting the Aichi Biodiversity Targets and Sustainable Development Goals 1 . Crucial to the success of this commitment is the protection of important habitat to support terrestrial and aquatic biodiversity. Highly protected areas (e.g., national monuments, national parks, and wilderness areas) that heavily restrict anthropogenic activities are the current mainstay for biodiversity conservation because, in general, well managed and effectively placed protected areas have been shown to increase species richness and abundance relative to unprotected areas 2 , 3 , 4 , 5 . As a result, the spatial extent of protected areas is used to monitor global progress towards achieving Aichi Biodiversity Targets and Sustainable Development Goals, and Aichi Target 11 specifically identified that 17% of terrestrial areas and inland waters needed to be protected by 2020 1 . Unfortunately, as 2019 came to a close only 15.2% of global land was located within protected areas 6 . In 2020, the Convention on Biological Diversity (CBD) will adopt a new global biodiversity framework for the post-2020 era with new targets set for 2050. To develop this new framework, the CBD will need to review its successes and failures with the previous framework. One of the major criticisms of Aichi Target 11 is that it is an area-based target that can be met with little relevance to the protection of biodiversity 7 . Although the designation of a protected area counts towards Aichi Target 11, protected areas do not have to be designated with the primary goal of protecting biodiversity. This is especially true in countries such as the United States of America (USA) where protected areas have historically been designated for reasons other than biodiversity, such as cultural and historical significance 8 , or lack of agricultural value 9 . As a result, recent analyses have brought into question whether existing protected areas are actually improving the conservation status of imperiled species 3 , 4 , and whether they should effectively be counted towards reaching Aichi Target 11. To determine the extent to which current protected areas are aiding in the protection of imperiled species and how much additional land is required to protect species, we must understand the overlap between protected areas and species ranges. Given that the ultimate goal of achieving Aichi Target 11 is to protect biodiversity and the services it provides, new lands added to reach current and future targets should be designated with the specific goal of protecting biodiversity, especially threatened and endangered species. 
Currently, only 7.1% of the USA’s land is in a highly protected status that is managed to preserve biodiversity 8 , 10 . The most common pathway by which the government designates new lands as protected is through the conversion of existing public land to a protected status [e.g., the conversion of Bureau of Land Management (BLM) land to a National Monument or Wilderness Area] 11 . However, the political appetite for the conversion of public lands to highly protected areas is not always favorable 12 . Furthermore, the availability of public lands for conversion to protected areas may be limited. For example, ~ 95% of the land in the state of Texas, USA, is privately owned. These political and logistical obstacles may mean that privately owned lands, protected through tools such as conservation easements, may need to take a more prominent role in meeting conservation targets in the USA 13 , 14 . Our study aimed to assess how current protected areas in the continental USA are contributing to the protection of its most imperiled species, and how the conversion of existing public and private lands to a highly protected status can aid the USA further in safeguarding those species. We used a null modeling approach 15 , 16 to analyze whether current protected areas include threatened and endangered species and sub-species (hereon referred to as ‘endangered species’) better than if they had been placed at random. We then assessed how many endangered species had 30% of their range inside the current layout of protected areas. In the absence of species-specific population viability analysis, a rule of thumb suggests that 30% of a species’ range must be protected for it to persist in the wild 17 . We then analyzed whether the capacity to protect more endangered species exists through the targeted conversion of undeveloped public and private lands to a highly protected status. For all of our assessments, we use an ecoregion-based approach, as ecoregions have been shown to represent the broadest inclusion of diverse habitats and species 18 . Results and discussion While biodiversity conservation is cited as a priority for many existing highly protected areas, our null modeling results indicate that the placement of protected areas in the USA has largely failed to include at-risk species. For a large number of ecoregions, especially in the western states, we found that endangered species currently have less of their range contained within protected areas than if these areas had been placed at random (Fig. 1 a). Across the entire continental USA, we found that highly protected areas in 55% of ecoregions were worse at protecting the ranges of endangered species than if their location had been chosen at random within the same ecoregion (Fig. 1 a). An additional 25% of ecoregions performed no better than random. This lack of coverage for at-risk species likely stems from the motivations for initial placement, as many protected areas were placed based on scenic beauty 8 or poor agricultural potential 9 . The remaining 20% of ecoregions that performed better than random were coastal regions with moderate to high numbers of endangered species (Fig. 1 a, b). Since coastal areas tend to have high human population densities 19 , coastal protected areas may be performing better than random because sensitive species have already been extirpated outside protected areas, and current species’ ranges now exist primarily in those protected areas. Figure 1 Null modeling results and endangered species richness within the USA. 
(a) Difference between the current number of endangered species with any part of their range inside highly protected areas and the average results of 1,000 random placements of those highly protected areas. Warm colors show regions that performed worse than a random placement, while cool colors indicate regions that performed better than random. Ecoregion data were obtained from The Nature Conservancy 34 . (b) Number of endangered tetrapod species per 5 km 2 pixel; species distribution data were obtained from The USFWS Environmental Conservation Online System 47 . The USA is not alone in the underperformance of its protected areas in helping to preserve biodiversity. While protected areas globally are associated with higher levels of biodiversity 2 , regional and country-level studies have found that protected areas in Australia, Canada, Laos and parts of the neotropics also safeguard endangered species’ ranges, endemic species, or total species richness less well than if protected areas had been randomly placed 20 , 21 , 22 , 23 , 24 , 25 , 26 . The diversity of countries with inadequate biodiversity protection suggests that this problem is not just in developed countries where valuable land may already have been co-opted for human use. These combined results suggest that even if we meet Aichi Target 11 by protecting 17% of global terrestrial areas and inland waters, there may be minimal benefits for endangered species. Thus, the USA and several other countries need to focus new conservation efforts on creating protected areas that are specifically placed to cover the ranges of endangered species to aid their survival. While the effectiveness of protected areas for preserving biodiversity integrates the geographical location, habitat quality, and human use of the protected area 20 , 27 , achieving biodiversity targets requires sufficient overlap between protected areas and the species of interest. We found that the creation of additional highly protected areas through the conversion of existing public lands to highly protected areas would increase the number of at-risk species that have 30% of their range protected. However, this conversion would not be sufficient to protect 30% of the range of a majority of the USA’s endangered species (Supplementary Table S1 ; Fig. 2 ) and is also unlikely to be politically feasible given the multiple-use mandate of public lands in the USA 11 . Figure 2 The average percent of endangered species ranges within each type of land designation by tetrapod class. The red line shows the 30% threshold for adequate protection of a species. At present, only 21 (13%) endangered species meet the minimum threshold of having 30% of their range within protected areas (Supplementary Table S1 ). When private lands managed specifically for conservation were added to this analysis, we found that the number of species with > 30% of their range protected does not increase (Supplementary Table S1 ). This lack of additional species receiving adequate protection when private conservation easements are added is not surprising, as these areas are collectively about 24% of the size of highly protected areas. We found that the USA could protect a total of 59 (37%) of its endangered species by conferring highly protected status to United States Forest Service (USFS) and BLM lands (Supplementary Table S1 ; Fig. 2 ). Protection of additional federal (e.g., Bureau of Reclamation, Department of Defense lands, etc.) 
and state lands would raise this number to 68 (43%) endangered species (Supplementary Table S1 ; Fig. 2 ). One example of a species that can be adequately protected on federal and state land is the Northern Idaho ground squirrel ( Urocitellus brunneus ). Currently, only 1.9% of U. brunneus’ range falls within protected areas, but an additional 51.4% of its range could be protected if other federal lands, such as portions of the Payette National Forest, were highly protected (Fig. 3 ). Other species for which adequate protection is achievable on public lands include the Mexican spotted owl ( Strix occidentalis lucida ) and Virginia big-eared bat ( Corynorhinus townsendii virginianus ) (Fig. 3 ; Supplementary Table S1 ). Figure 3 Examples of endangered species that would benefit from different combinations of public and private land conservation. Map colors show species’ ranges; Urocitellus brunneus : orange, Anaxyrus baxteri : blue, Sternotherus depressus : pink, Strix occidentalis lucida : yellow, Cryptobranchus alleganiensis bishopi : green, Puma concolor cougar : black. Bar graphs indicate the percent of species ranges within each type of land designation; black: currently protected federal land, green: U.S. Forest Service and Bureau of Land Management, pink: other federal lands (e.g., Bureau of Reclamation, Department of Defense, etc.); blue: state land; grey: undeveloped private land; white: private cropland and developed land. Land cover data were obtained from the Bureau of Land Management GIS repository 51 ; species distributions were obtained from the USFWS Environmental Conservation Online System 47 . Capacity on Public Land indicates whether at least 30% of the given species’ range is within federal or state lands (Y) or not (N). Photo credits clockwise from top-left: USFS Region 4; Ryan Moehring/USFWS; John P. Friel; Connie Bransilver; Brian Gratwicke; and Gary L. Clark. One of the most important findings from our analyses is that the USA has not lost its capacity to protect 100% of its endangered tetrapods adequately. Using the National Land Cover Database 28 , we found that all of the remaining 91 endangered species (57%) can be adequately protected through increased conservation on a combination of undeveloped public and private lands (Figs. 2 , 3 ; Supplementary Table S1 ). Example species that would benefit from public and private land conversion to a protected status include the Ozark hellbender ( Cryptobranchus alleganiensis bishopi ), flattened musk turtle ( Sternotherus depressus ), Florida panther ( Puma concolor cougar ), and Wyoming toad ( Anaxyrus baxteri ) (Fig. 3 ). The optimal configuration of a new USA protected area network will require systematic planning because multiple configurations could adequately protect all of the USA’s endangered species. Spatial prioritization will require a framework that takes into account both economic (e.g., cost for protecting different sites, lost opportunity costs 29 , 30 ) and biological (e.g., network connectivity) 14 factors. Our analyses suggest that to adequately protect its endangered species, in addition to greater protection of public lands, the USA would need to make considerable investments in private land conservation through efforts such as conservation easements 31 . 
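The land-class bookkeeping behind these species counts can be made concrete with a minimal R sketch; the species names and percentages below are invented for illustration and are not drawn from Supplementary Table S1.

```r
# Cumulative 30%-of-range test as land classes are added, mirroring Fig. 2:
# current GAP 1-2 areas, then USFS/BLM, then other public, then undeveloped
# private land. All numbers are hypothetical.
ranges <- data.frame(
  row.names  = c("sp_A", "sp_B", "sp_C"),
  protected  = c(32, 5, 2),     # % of range in current GAP 1-2 areas
  usfs_blm   = c(10, 40, 8),    # % on USFS and BLM lands
  other_pub  = c(3, 10, 5),     # % on other federal and state lands
  undev_priv = c(20, 20, 60)    # % on undeveloped private lands
)
cum <- t(apply(ranges, 1, cumsum))   # running total, left to right
cum >= 30                            # TRUE once the 30% rule is met
# First land class at which each species crosses the 30% threshold:
apply(cum >= 30, 1, function(x) names(x)[which(x)[1]])
```

In this toy example, sp_A is already adequately protected, sp_B reaches 30% once USFS and BLM lands are converted, and sp_C does so only when undeveloped private land is included, mirroring the three groups of species described above.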
Reflecting the growing importance of conservation easements in the field of conservation, a recent review of the private lands literature found that conservation easements were the most frequently addressed form of biodiversity protection on private lands 32 . Crucial to the implementation of these endeavors will be a greater understanding of factors that increase the success of actions on private lands, such as engagement with local stakeholders, public acceptance, and financial incentives 13 . Despite the great strides made towards meeting Aichi Target 11, our results highlight that protected areas in the USA are failing to sufficiently protect biodiversity because there is poor spatial overlap between endangered species and the placement of current protected areas. While the capacity exists on public lands to double the number of endangered species sufficiently protected (Fig. 2 ), over half of all endangered tetrapods in the continental USA require conservation on private lands for at least 30% of their range to be protected. Importantly, our analyses indicate that the capacity to meet this 30% threshold for all of the continental USA’s endangered species still exists on undeveloped public and private lands. To truly safeguard biodiversity 33 , we must ensure that new protected areas are not only designated in sufficient quantity, but also in locations suitable for imperiled species. Thus, to adequately protect all of the continental USA’s endangered species, a new protected area network must consider utilizing both public and private lands. In our analysis, we focused on species listed as threatened or endangered; however, this list of species should be considered conservative as there are many vulnerable species with declining populations that have not yet been listed. It is critical that protected lands contributing to the progress of Aichi Target 11 sufficiently protect the world’s most vulnerable species. Failure to do so undermines our true commitments to the Aichi Biodiversity Targets, the Sustainable Development Goals, and our fight against biodiversity loss. Methods Null modeling To assess if current highly protected areas are conserving endangered species, we used a null modeling approach 16 to determine whether highly protected areas within each of the Environmental Protection Agency’s (EPA) level-II ecoregions 34 contained more endangered species within their borders than if they had been placed at random 15 . Ecoregions are designated by the EPA based on ecosystem components such as soil, landform, and major vegetation types 35 , and have been used extensively in conservation planning studies to ensure the creation of networks that represent the broadest range of diverse habitats and species. Protected areas were those designated as GAP status 1 or 2 (“managed for biodiversity with no extractive uses” 10 ) in the Protected Areas Database of the United States (PAD-US). The locations of these protected areas were obtained from the database maintained by the United States Geological Survey (USGS) Gap Analysis Program (GAP) using ESRI ArcGIS 10. Protected areas with a status below GAP 2 (that is, GAP 3 and 4) were removed because they are not managed explicitly for biodiversity conservation 36 . All protected areas less than 5 km 2 were removed from the analysis as we did not want to over-represent the number of endangered species that had part of their range protected. 
We understand that for many smaller species, having less than 5 km 2 of their range in a protected area would be sufficient to ensure their survival; however, this would not be the case for many larger species that require large amounts of land. Because our study integrates across many species with varying body sizes and home ranges, we set a conservative cut-off of 5 km 2 . We do not wish to diminish the importance of small protected areas, and understand they are important as refuges 37 and act as corridors for species with large home ranges 38 , 39 . In addition, our analyses do not account for the fact that many species have seasonal distributions, requiring them to migrate between different locations using corridors. Our goal was to focus on larger protected areas that have the potential to support populations within their boundaries. Analyses were conducted using packages sp, raster, rgdal, rgeos, tmap, abind, dplyr, and maptools 40 , 41 , 42 , 43 , 44 , 45 in the statistical programming language R 46 . Protected areas that spanned two or more ecoregions were split into subordinate parts. Range shapefiles for each endangered tetrapod species within the continental USA were obtained from the U.S. Fish & Wildlife Service’s Environmental Conservation Online System (ECOS) database 47 . The number of endangered species in each ecoregion, whose ranges were located within protected area boundaries, was recorded. To create a random distribution against which the current placement of protected areas could be compared, we took the shapefile of each protected area, randomly moved it in both position and orientation within its ecoregion, and re-sampled 1,000 times for endangered species occurrence (a simplified R sketch of this resampling logic is given at the end of the Methods). We opted to constrain the analysis at the ecoregion scale because ecoregion-based conservation planning has been shown to be effective at protecting species-, community-, and ecosystem-level diversity 18 , 35 , 48 , 49 . Due to the random placement of the protected areas, there is the potential for protected areas to overlap. However, a key assumption of the null modeling approach is that the placement of the protected areas is random, so we did not constrain the placement of the areas. Given that overlapping protected areas would reduce the total area designated as protected in each ecoregion, our analysis of whether existing protected areas are performing better than random should be considered conservative, as overlaps cannot occur in the existing locations, but can in the modeled locations. A count of total unique endangered species was tabulated for each ecoregion for the existing protected area distribution, and for each of the 1,000 iterations. We considered existing protected areas to be performing “better” or “worse” than random only if there was at least a ± 1 species difference between the two. Public versus private lands comparison In order to identify potential alternative methods through which endangered species could receive sufficient protection, we examined the conservation options available on undeveloped public and private lands. We considered a threshold of 30% of a species’ range included in protected areas as adequate protection. While each species will have a minimum population size that is necessary for the species to persist, the 30% threshold was used as it represents a baseline “rule-of-thumb” in the absence of species-specific population viability analyses 17 . 
Therefore, we sought to determine whether the USA’s highly protected areas are sufficient to cover 30% of each species’ range for the 159 unique populations of endangered tetrapods listed under the USA’s Endangered Species Act 47 . Following calculations of the amount of each species’ range covered by public protected areas, we then used the National Conservation Easement Database 50 to determine the number of additional species being protected on private lands managed specifically for conservation (i.e., conservation easements). Finally, we determined how many additional endangered species could be conserved through the targeted conversion of existing public lands to a highly protected status or through the protection of private lands with conservation easements. A common method for creating new highly protected areas in the USA is the designation of national parks, monuments, or wilderness areas within the boundaries of existing federal and state (hereon, “public”) lands, especially lands managed by the U.S. Forest Service (USFS) and the Bureau of Land Management (BLM) 11 . Using ArcGIS 10, USFS and BLM shapefiles were extracted from the BLM National Surface Management Agency GIS 51 , and protected areas, other federal lands, and state lands from the USGS GAP 35 . We calculated the total area of each species’ range within each land management type by intersecting endangered species’ range shapefiles with the public lands shapefile. Specifically, we looked at the number of endangered species that had >30% of their ranges on public lands not designated as GAP 1 or 2, as these lands could be considered available for conversion to protected areas. We generated a private lands shapefile by merging all the public lands and protected areas shapefiles and subtracting the result from the USA shapefile, leaving only private lands. The private lands shapefile was then used to mask the National Land Cover Database 11 so that the resulting raster contained land cover data only for private lands. The range of each species on undeveloped private lands was iteratively tabulated by overlaying the species’ shapefiles on the private-lands-only version of the National Land Cover Database that we generated. Undeveloped private lands were those not categorized as any level of ‘developed’ or as ‘cultivated crops.’ The proportion of each species’ total range within the continental USA was then calculated by land type. Data availability A list of endangered and threatened terrestrial tetrapods found within the continental USA, and the proportion of their range encompassed by each land management type, is provided in Supplementary Tables 1 and 2.
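The original range-by-land-type tabulation was done in ArcGIS; as a hedged illustration of the same bookkeeping, here is a geopandas sketch. The layer and column names ('species', 'land_type') are assumptions for illustration, not fields from the authors' workflow.

```python
# Sketch of the range-share tabulation: intersect each species range with the
# land-management polygons, then compute the fraction of total range area by
# land type. `species` and `lands` are assumed GeoDataFrames in an equal-area
# projection, with 'species' and 'land_type' columns respectively.
import geopandas as gpd
import pandas as pd

def range_share_by_land_type(species: gpd.GeoDataFrame,
                             lands: gpd.GeoDataFrame) -> pd.DataFrame:
    inter = gpd.overlay(species[["species", "geometry"]],
                        lands[["land_type", "geometry"]],
                        how="intersection")
    inter["area"] = inter.geometry.area
    total = species.set_index("species").geometry.area   # total range per species
    share = (inter.groupby(["species", "land_type"])["area"].sum()
                  .div(total, level="species"))          # proportion of range
    return share.unstack(fill_value=0.0)                 # species x land_type

# Species meeting the 30% rule-of-thumb on highly protected (GAP 1-2) lands:
# share = range_share_by_land_type(species, lands)
# adequately_protected = share["GAP1-2"] >= 0.30
```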
In 1872 the United States created Yellowstone, the first National Park in the world. Since then many more parks, monuments, preserves, wildernesses and other protected areas have been created in the USA. Protected areas, like Yellowstone, are invaluable, but are they actually effective at preserving endangered species? And if not, how can future protected areas do better? A team of ecologists at Utah State University published a study in Scientific Reports to answer these questions. They used computer models to determine whether protected areas in the USA preserve enough land inhabited by endangered species to adequately ensure their future survival in the wild. As it is, the situation is problematic: of the 159 endangered mammal, bird, reptile and amphibian species in the continental USA, only 21 are adequately preserved by existing protected areas. Creating new protected areas on public land is fraught with obstacles. Many protected areas are designated based on scenic beauty or lack of agricultural value, and these criteria don't necessarily benefit at-risk species. Unfavorable political climates can also present problems. Trisha Atwood, Assistant Professor in the Department of Watershed Sciences and Ecology Center and study co-author, explained, "There has been a huge political push in the USA to reduce protected areas such as National Monuments. However, our results suggest that we not only need to increase the spatial coverage of protected areas in the USA, but we also need to ensure that we are protecting the places that contain critical habitat for endangered species." Another obstacle is the limited availability of public land. For example, in the state of Texas 95% of the land is privately owned. And according to the study, even if all federal and state public lands were given protected area status, more than half of the at-risk species in the USA would still be in danger of extinction. "We are not suggesting that protected areas are doing a bad job," said Edward Hammill, Assistant Professor in the Department of Watershed Sciences and Ecology Center and study co-author. "What we are suggesting is that there are many opportunities to increase protection." One of those opportunities is the creation of conservation easements on private land. Conservation easements are voluntary, legal agreements that restrict future development on private land. In exchange for contributing to conservation efforts, landowners retain their property rights and can receive tax benefits. One of the most important findings from the study is that, with the help of private landowners, the USA has not lost the capacity to adequately protect 100% of its endangered species. "It is unlikely that adequate conservation of endangered species will be achieved by increasing federal protected areas," said Hammill. "Our research highlights that private landowners represent an alternative route to achieving conservation goals." Atwood concluded, "These findings give me hope that we can still make a change for the better. But if we are going to win the fight against extinction, we are going to need the help of private landowners."
10.1038/s41598-020-68780-y
Earth
Climate cycles create California precipitation uncertainty
Lu Dong et al, Uncertainty in El Niño-like warming and California precipitation changes linked by the Interdecadal Pacific Oscillation, Nature Communications (2021). DOI: 10.1038/s41467-021-26797-5 Journal information: Nature Communications
http://dx.doi.org/10.1038/s41467-021-26797-5
https://phys.org/news/2021-12-climate-california-precipitation-uncertainty.html
Abstract Marked uncertainty in California (CA) precipitation projections challenges their use in adaptation planning in a region already experiencing severe water stress. Under global warming, a westerly jet extension in the North Pacific analogous to the El Niño-like teleconnection has been suggested as a key mechanism for CA winter precipitation changes. However, this teleconnection has not been reconciled with the well-known El Niño-like warming response or the controversial role of internal variability in the precipitation uncertainty. Here we find that internal variability contributes >70% and >50% of uncertainty in the CA precipitation changes and the El Niño-like warming, respectively, based on analysis of 318 climate simulations from several multi-model and large ensembles. The Interdecadal Pacific Oscillation plays a key role in each contribution and in connecting the two via the westerly jet extension. This unifying understanding of the role of internal variability in CA precipitation provides critical guidance for reducing and communicating uncertainty to inform adaptation planning. Introduction Located on the western edge of the North American continent and influenced by the Pacific storm tracks, California (CA) has a distinct precipitation annual cycle, with a large fraction of precipitation falling within the winter season (December–January–February). Hence, winter precipitation is vital to the agriculture, ecosystems, and water resources of the region. Although multimodel ensembles agree closely on the sign of changes in precipitation frequency (decreasing) and intensity (increasing), large uncertainty in the changes of annual or winter CA precipitation amount is evident because these two opposing contributions are superposed 1 , 2 , 3 , 4 . Influenced by both tropical forcing and mid-latitude westerlies 5 , the intermodel spreads of both the signs and magnitudes of CA precipitation changes under warming are large 2 , 3 , 6 , 7 , 8 , 9 , 10 , 11 , 12 , 13 . Climate model projection uncertainty has been robustly partitioned into its different sources at the global scale 14 , 15 , but such partitioning may be highly variable at local-to-regional scales. Understanding the fractional contribution of uncertainty from different sources is important for informing the use of climate projections in regional adaptation planning. The total uncertainty conflates contributions from internal variability, the model response to forcing, and the emission scenarios 14 , 15 , 16 , 17 . With climate change projections conditioned on the emission scenarios, which are developed from storylines of socioeconomic change with no probabilities assigned, progress can be made in understanding uncertainty by focusing on internal variability and model response uncertainty. We emphasize the decomposition of uncertainty into externally forced and internal components because uncertainty in the response to external forcing is an important and potentially reducible uncertainty factor for targeted future research through model development and observational constraints. Although uncertainty from internal variability is irreducible, improving the decadal prediction of the relevant internal modes may also potentially reduce uncertainty in predicting the decadal trends in CA precipitation in the near future. 
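To make this decomposition concrete, a minimal sketch follows, assuming the paper's treatment in which total uncertainty is the pooled multimodel spread under one scenario, internal variability is the intermember spread of single-model large ensembles, and variance sources are additive; function and variable names are illustrative, not the authors' code.

```python
# Minimal sketch of the uncertainty decomposition: total (multimodel spread)
# is split into internal variability (intermember spread of large ensembles)
# and a residual attributed to the model response, assuming additive variances.
import numpy as np

def decompose_uncertainty(cmip_trends, large_ensembles):
    """cmip_trends: (n_members,) trends pooled across CMIP5+CMIP6 members.
    large_ensembles: list of (n_i,) trend arrays, one per single-model ensemble."""
    total_var = np.var(cmip_trends, ddof=1)
    internal_var = np.mean([np.var(le, ddof=1) for le in large_ensembles])
    frac_internal = np.sqrt(internal_var / total_var)          # STD-ratio fraction
    frac_response = max(0.0, 1.0 - internal_var / total_var)   # residual variance
    return frac_internal, frac_response
```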
Separating the uncertainties caused by the model response and internal variability is difficult in traditional multimodel ensembles from the Coupled Model Intercomparison Project phase 5 (CMIP5; ref. 18 ) and phase 6 (CMIP6; ref. 19 ), as most models include only a small number of realizations, which cannot faithfully represent the range of internal variability 20 . While selecting subsets of the CMIP models may reduce model uncertainty to provide more consistent projections of future CA precipitation 11 , 21 , internal variability is even more under-represented by the smaller subsets of models. The advent of large ensembles from several climate models presents an opportunity for isolating internal variability from the model response uncertainty 22 . Internal variability, which arises from processes intrinsic to the atmosphere, the ocean, and the coupled ocean-atmosphere system via dynamic and thermodynamic interactions, makes an appreciable contribution to precipitation changes on decadal timescales 17 , 23 , especially at smaller spatial scales 24 . The dominant effect of internal variability in the 2010–2015 CA megadrought has been broadly recognized 25 , 26 , although anthropogenic warming is argued to enhance the probability of severe drought 27 , 28 . The Interdecadal Pacific Oscillation (IPO), or Pacific Decadal Oscillation (PDO), is a leading mode of internal variability at the decadal timescale featuring sea surface temperature (SST) variability in the Pacific Ocean. The IPO can be viewed as a manifestation of the integrated influences of the Pacific Ocean, including the El Niño-Southern Oscillation (ENSO) 29 , 30 . As described in previous studies 31 , 32 , 33 , the IPO’s impact on CA precipitation is manifested via the interdecadal modulation of ENSO teleconnections. Based on 21 CMIP3 models, one study 17 suggested that more than half of the intermodel spread in precipitation changes under global warming over most extratropical regions is contributed by internal variability, estimated from a single set of large ensemble simulations. In contrast, based on 36 CMIP5 models, another study 34 concluded that internal variability does not contribute substantially to the intermodel spread over broad regions, including CA. Similar conclusions were obtained by comparing the intermodel range from CMIP5 models with the intermember range from a single set of large ensemble simulations 11 . These conflicting results motivate a need to combine the CMIP5 and CMIP6 models with several large ensemble simulations to quantify the contribution of internal variability to the total uncertainty of CA precipitation changes. Physically, future changes in CA winter precipitation under warming are related to an eastward extension of the North Pacific westerly jet that steers more storms towards CA, analogous to the El Niño-like teleconnection 7 , 11 , 34 . However, previous studies 7 , 11 , 12 , 34 based on CMIP5 models could not establish a close intermodel relationship between the CA precipitation changes and the El Niño-like warming (i.e., stronger SST warming in the tropical eastern Pacific relative to the western Pacific) under global warming, and suggested that the relationship may have been obscured by model deficiencies, such as the CA precipitation sensitivity to Niño 3.4 SSTs, the CA precipitation climatology, and possible overestimation of tropical convection 11 , 35 . Similar to CA winter precipitation, there is also substantial uncertainty in the El Niño- vs. 
La Niña-like (i.e., stronger SST warming in the tropical western Pacific relative to the eastern Pacific) warming pattern over the tropical Pacific under global warming 36 , 37 , 38 . In contrast with the controversial role of internal variability in CA winter precipitation, the possible role of internal variability in the future El Niño-like warming pattern has yet to be brought to the fore, although there is an ongoing debate on the relative roles of model bias 38 , 39 vs. internal variability 40 , 41 , 42 in the La Niña-like warming pattern observed in recent decades. With the convolved contributions of model uncertainty and internal variability to the large uncertainty in the El Niño-like warming and CA precipitation, CMIP simulations offer limited opportunities to isolate the roles of internal variability in their respective future changes and their relationships. We conjecture that quantifying and understanding the contribution of internal variability to uncertainty in the El Niño-like warming may hold a key to better understanding and reducing the uncertainty of future CA precipitation changes. In this work, a total of 318 simulations, including the large ensembles and multimodel ensembles, show a marked contribution of internal variability of >80% and >70% to the total uncertainty in the decadal trends and future changes of CA precipitation, respectively. Importantly, internal variability also contributes >50% to the total uncertainty in the future change of the El Niño-like warming pattern. Among the internal modes, the IPO is key to connecting the uncertainties of CA precipitation and the El Niño-like warming pattern through its modulation of the Aleutian low and the westerly jet extension over the North Pacific. Results To test our conjecture, we use the large ensemble simulations from three climate models (CESM1, CanESM2, MPI-ESM) with a total of 190 members, comparable to the total of 128 members in the CMIP5 and CMIP6 multimodel ensembles (see Methods). The three climate models of the large ensembles perform very well in capturing the climatological CA precipitation and westerly jet stream (Supplementary Fig. 1 ). Note that the large ENSO bias found in CCSM3 has been significantly reduced in CCSM4, CESM1, and CESM2, and the spatial patterns of the IPO, as well as their relationships to ENSO modulations, are well simulated in the CESM models 43 . We find that the spatial patterns of the IPO simulated by all three models show high pattern correlation coefficients (~0.7, statistically significant at the 99% level of confidence) with observations over the Pacific Ocean. The interdecadal variability of the IPO is also reasonably well simulated, as indicated by comparing the power spectra of the three models (grey lines) and observations (black lines) in Supplementary Fig. 2 . Furthermore, teleconnections of the IPO can be realistically represented in most climate models 44 . We estimate the total uncertainty by the spread of all 128 members from CMIP5 and CMIP6 (Supplementary Table 1 ) following previous studies 7 , 11 , 34 , while internal variability is estimated by the intermember spread of the large ensemble from each of the three models (see Methods). Internal variability dominates uncertainty in CA winter precipitation decadal trends Over the northern mid-latitudes, the historical precipitation trend over the U.S. 
west coast during 1979–2019 is subject to large uncertainty compared to other land regions, as indicated by the large standard deviation (STD) across all 128 members of the CMIP5 and CMIP6 models (Fig. 1a ). Uncertainty from internal variability is comparable to uncertainty from the CMIP models not only over the U.S. west coast but also in the North Pacific storm track region (Fig. 1b ), which has been suggested to be closely linked to CA precipitation 7 , 34 , underscoring the large contribution of internal variability to the total uncertainty. Precipitation trends in the near future (2020–2060) and far future (2061–2099) show similar spatial patterns (not shown). Focusing on CA, uncertainties in the decadal precipitation trends (Supplementary Fig. 3d–f ) are larger than the mean trends (Supplementary Fig. 3a–c ) in the past (1979–2019), near-future (2020–2060), and far-future (2061–2099) decades based on both the single-model large ensembles and the multimodel ensembles, indicating large uncertainty in CA precipitation decadal trends. The largest uncertainty occurs in winter, when precipitation peaks during the year. The total uncertainty in the CMIP models contains contributions from model response uncertainty (under a given emission scenario) and internal variability. As a first step in separating the two contributions, we computed the ratio of the STD from internal variability, estimated from the three single-model large ensembles, to the total uncertainty estimated from the CMIP models. Averaged across the three large ensembles, internal variability explains >80% of the total uncertainty in CA winter precipitation decadal trends, which is robust for the trends in the past, near-future, and far-future decades (Fig. 1c ). The three large ensembles behave remarkably similarly (Supplementary Fig. 3g–i ), demonstrating the importance of internal variability relative to the model response uncertainty in CA winter precipitation change on decadal timescales. Fig. 1: Effect of internal variability on the large uncertainty of winter precipitation decadal trends over California. a Total uncertainty of the winter precipitation trend during 1979–2019 based on the 128 members of CMIP5 and CMIP6 (CMIPs). b Internal uncertainty based on the average of the intermember standard deviation (STD) from three large ensembles (including 100 MPI-ESM, 40 CESM1, and 50 CanESM2 simulations). c Fraction of the total uncertainty in the California winter precipitation trend during 1979–2019, 2020–2060, and 2061–2099 explained by internal variability based on the average of the three large ensembles. Units: mm day⁻¹ (41 yr)⁻¹. Major contributions from the IPO and its mechanistic connection to CA precipitation To identify the internal climate mode primarily responsible for the uncertainty in CA winter precipitation change, we calculate the intermember regression of the surface temperature trend onto the CA precipitation trend based on the three large ensembles. The regression features warming in the tropical central-eastern Pacific and cooling in the North Pacific (Fig. 2a ), which resembles the positive IPO or PDO 31 pattern (Supplementary Fig. 4a ). Note that the highest correlation occurs north of the Bering Sea, which is unlikely to be caused by the IPO, as a previous study 45 found that the surface temperature response is absent there when observed tropical SSTs featuring a negative IPO are prescribed in an atmospheric model. 
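As an implementation aside before returning to the regression analysis: the Fig. 1-style uncertainty-ratio maps could be computed along these lines, with per-member 41-year trends taken as least-squares slopes. The array names and shapes are assumptions for illustration rather than the authors' code.

```python
# Illustrative construction of a Fig. 1-style map: fit a linear trend per
# member and grid cell, then take the STD ratio (internal / total) across
# members. Inputs are assumed arrays of shape (member, year, lat, lon).
import numpy as np

def member_trends(precip):
    """Least-squares trend (units per year) for each member and grid cell."""
    n_mem, n_yr, n_lat, n_lon = precip.shape
    years = np.arange(n_yr)
    trends = np.empty((n_mem, n_lat * n_lon))
    for m in range(n_mem):
        # polyfit fits every grid cell (column) at once; row [0] is the slope
        trends[m] = np.polyfit(years, precip[m].reshape(n_yr, -1), 1)[0]
    return trends.reshape(n_mem, n_lat, n_lon)

# Fraction of total uncertainty explained by internal variability, per cell:
# frac = (np.std(member_trends(le_precip), axis=0, ddof=1) /
#         np.std(member_trends(cmip_precip), axis=0, ddof=1))
```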
The dipole pattern is consistent among the three large ensembles and across the past, near-future, and far-future decades (Supplementary Fig. 5 ). Therefore, the IPO decadal trend may contribute importantly to uncertainty in the CA winter precipitation trend. Although the influences on CA winter precipitation of both the Aleutian low at the surface 31 , 46 and the upper-tropospheric westerly jet stream over the North Pacific 7 , 11 , 34 are well established, it is still unclear to what extent they contribute to the intermodel spread of the CA winter precipitation trend through the IPO decadal variation. Hence we further explore the mechanism by which the IPO modulates the uncertainty in decadal precipitation change over CA. Overall, simulations with a larger positive IPO trend show a larger CA precipitation increase (Fig. 2b ) via an atmospheric teleconnection that deepens the Aleutian low (Fig. 2c ) and instigates an eastward extension of the westerly jet stream over the North Pacific (Fig. 2d ). Based on the 50 CanESM2 members during 1979–2019, the intermember relationship shows the linear trend in CA precipitation is negatively correlated with that in the Aleutian low ( r = −0.77, black in Fig. 2e ) and positively correlated with that in the westerly jet extension over the North Pacific ( r = 0.76, red in Fig. 2e ). Consistently, linear trends in the IPO show a significant relationship with those in the Aleutian low ( r = −0.68, black in Fig. 2f ) and the westerly jet extension ( r = 0.79, red in Fig. 2f ), in line with the atmospheric circulation pathway that modulates the decadal trends in CA precipitation under internal variability. These intermember relationships largely hold for all three large ensembles as well as the three periods (Supplementary Figs 6 , 7 ). In particular, correlation coefficients between the Aleutian low trends and the IPO trends for 1979–2019 are −0.74 (CESM1), −0.68 (CanESM2), and −0.63 (MPI-ESM) (Supplementary Fig. 7a–c ), all statistically significant at the 99% level of confidence. They suggest that the atmospheric circulation mechanism plays a dominant role in the uncertainty of decadal precipitation trends under internal variability. Thus, the IPO is the key internal mode influencing the uncertainty in the decadal trend of CA winter precipitation in the past, near-future, and far-future. Fig. 2: The dominant internal mode influencing the uncertainty of California precipitation decadal trends and the mechanism. Intermember regressions of a surface temperature (TS) trend (K (41 yr)⁻¹) onto the California averaged precipitation (CA prec) trend. Intermember regressions of the linear trend in b precipitation (mm day⁻¹ (41 yr)⁻¹), c sea level pressure (SLP; hPa (41 yr)⁻¹), d 200-hPa zonal wind (U200; m s⁻¹ (41 yr)⁻¹) onto the Interdecadal Pacific Oscillation (IPO) trend. Scatterplots of e the California precipitation trend ( x -axis) and f the IPO trend ( x -axis) versus the SLP trend over the Aleutian low (black, left y -axis) and the 200-hPa zonal wind (U200) trend over the jet extension region (red, right y -axis). The climatological jet stream based on the historical simulations is shown in black contours in panel d . All the trends are based on the 50 members of CanESM2 during 1979–2019. The regression lines and the intermember correlations ( r ) are shown in the corresponding colors on top of the bottom panels. The averaging areas for calculating the quantities shown in the bottom panels are indicated as rectangles in panels a to d . 
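The intermember regression maps of Fig. 2 follow a simple recipe, sketched below under the assumption that the per-member trend fields and IPO trends are precomputed; this is a minimal illustration, not the authors' analysis code.

```python
# Minimal sketch of an intermember regression map: regress each grid cell's
# trend across ensemble members onto the members' IPO trends. Inputs:
# field_trends (n_members, n_lat, n_lon) and ipo_trends (n_members,).
import numpy as np

def intermember_regression(field_trends, ipo_trends):
    x = ipo_trends - ipo_trends.mean()
    y = field_trends - field_trends.mean(axis=0)
    slope = np.einsum('m,mij->ij', x, y) / (x ** 2).sum()  # field units per K
    r = slope * x.std() / y.std(axis=0)                    # correlation map
    return slope, r
```

Applied in turn to the precipitation, SLP, and U200 trend fields, the same function would yield the panels of Fig. 2b–d.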
To directly compare the relationship between the IPO and CA winter precipitation, the intermember correlation of the 41-year trends for the past, near-future, and far-future decades is examined based on the three large ensembles (Supplementary Fig. 8 ). Positive correlations are found for all three large ensembles during the three periods, with correlation coefficients of 0.34, 0.57, and 0.44 for CESM1, CanESM2, and MPI-ESM, respectively, during 1979–2019, all statistically significant at the 95% confidence level. Recognizing the mechanistic and statistically significant connections between the IPO trends and the CA precipitation decadal trends, can the IPO be used to constrain the uncertainty of CA precipitation change under warming? Constraining the CA precipitation decadal trends by the IPO reduces uncertainty In previous studies, models’ ability to capture the ENSO-precipitation relationship at the interannual timescale was used as an emergent constraint to reduce the uncertainty in precipitation projections from the model bias perspective 11 , 47 . Here, we attempt to constrain the CA precipitation decadal change by considering the role of the IPO. Unlike emergent constraints, which use observations to constrain uncertainty in future projections due to model biases, the state of the IPO can be used to constrain uncertainty in future projections related to internal variability. To quantify the uncertainty in CA precipitation trends explained by the IPO, we exclude the IPO’s influence by removing the CA precipitation variations that are linearly related to the IPO index in each realization of the large ensembles (see Methods). The histogram and the fitted frequency distribution of CA precipitation trends narrow noticeably after removing the IPO’s influence, with the STD reduced by 0.3%, 16%, and 12% for 1979–2019, 26%, 11%, and 16% for 2020–2060, and 12%, 25%, and 10% for 2061–2099, based on the CESM1, CanESM2, and MPI-ESM ensembles, respectively (black and red in Fig. 3a–c , Supplementary Fig. 9 , Supplementary Table 2 ). Although these reductions of STD are modest, as only the variability that is linearly related to the IPO is removed, they are non-negligible and indicate the role of the IPO in increasing the chance of both extreme positive and extreme negative precipitation trends. Conditioning on the observed IPO trend of −1.0 K (41 yr)⁻¹ during 1979–2019 (Supplementary Fig. 4b ), the distribution of CA precipitation trends shifts towards drying (blue line in Fig. 3a and Supplementary Fig. 9a, b ), with the mean (blue dot) falling between the observed drying trends based on two observational datasets (purple dots in Fig. 3a and Supplementary Fig. 9a, b ). This implies a dominant role of the IPO in the observed decadal drying trend and cautions against interpreting model-observation differences as model biases, as internal variability may account for a significant fraction of that difference. To further support this statement, members from the three large ensembles that produce drying trends in CA at least as strong as the observed drying of −0.44 mm day⁻¹ (41 yr)⁻¹ estimated from GPCP are used to composite their SST trends (Supplementary Fig. 10 ). Averaged over all members, which represents the response to external forcing, the SST trend during 1979–2019 features warming in most of the global ocean in all three large ensembles (Supplementary Fig. 10a–c ). 
In contrast, the SST trend with external forcing removed in the members that reproduce the observed drying in CA features a negative IPO pattern in all three models, similar to the observed IPO during 1979–2019 (Supplementary Fig. 10d–f ), confirming the critical role of the IPO in the recent CA drying. The latter suggests that uncertainty in projecting CA precipitation change in the near future could be reduced with improved decadal prediction of the IPO. Fig. 3: Constraining the California precipitation decadal trends by the Interdecadal Pacific Oscillation (IPO). a–c Histograms (bars) and 100-bin fitted frequency distributions (lines) of the California winter precipitation trends during a 1979–2019, b 2020–2060, and c 2061–2099 based on 50 members of CanESM2. The gray bars and the black fitted curves show the frequency of occurrence of the original trends; the red bars and curves are the same but with the IPO’s influence removed through linear regression against the IPO index in the individual runs; the blue bars and curves are the same but including the observed IPO trend for 1979–2019. The dots and error bars denote the ensemble mean and one standard deviation of the distribution represented by the corresponding color. The purple dots denote the observed precipitation trend based on the GPCP (−0.44 mm day⁻¹ (41 yr)⁻¹) and CMAP (−0.77 mm day⁻¹ (41 yr)⁻¹) datasets. d Linear trend of California winter precipitation during 1979–2019. Observation (OBS) is the average of GPCP and CMAP; External is the average of CMIP5, CMIP6, and the three large ensembles (green bar). Trends for the three large ensembles account for the observed IPO trend (orange bars), with the gray bars showing the response to external forcing based on each large ensemble (multi-member mean of each large ensemble). e Time series of California precipitation (CA prec; in mm day⁻¹, black, right y -axis) and its 41-year running trend (mm day⁻¹ (41 yr)⁻¹, red, left y -axis), with trends significant at the 95% level of confidence under external forcing shown as blue dots. f Time series of the total uncertainty estimated from the CMIP5 and CMIP6 models (solid grey line), internal variability (dashed black line), and IPO-related internal variability (solid black line) of the 41-year running trend of California precipitation. The contributions of external forcing and the positive-to-negative phase transition of the IPO to the CA drying trend during 1979–2019 are further assessed quantitatively. Observed precipitation in CA shows a significant drying trend of −0.61 mm day⁻¹ (41 yr)⁻¹, reducing the mean precipitation during the 41-year period by ~28% of the climatological precipitation, based on the average of GPCP 48 and CMAP 49 (Fig. 3d ). Neither the multimodel ensembles (green bar) nor any of the single-model large ensembles (grey bars) reproduce the observed drying trend under external forcing alone (Fig. 3d ). Taking the observed IPO transition during 1979–2019 into account (see Methods), all three large ensembles reproduce the observed drying trend in CA precipitation well in the multi-member average, with magnitudes comparable to the observation (orange bars in Fig. 3d ). Therefore, the drying over CA in the past decades is dictated by the positive-to-negative IPO phase transition, which overshadows the insignificant effect of external forcing (Fig. 3d ). 
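The decadal trends shown as time series in Fig. 3e, f are 41-year running trends of the winter-mean series; a small sketch of that computation follows, with the window length and input names taken as assumptions.

```python
# Sketch of a 41-year running trend: a least-squares slope over a sliding
# window, centred on each year. `y` is a 1-D array of winter-mean values,
# one value per year (an assumed input).
import numpy as np

def running_trend(y, window=41):
    x = np.arange(window) - (window - 1) / 2.0   # centred time axis
    out = np.full(y.size, np.nan)
    half = window // 2
    for i in range(half, y.size - half):
        seg = y[i - half:i + half + 1]
        out[i] = (x * (seg - seg.mean())).sum() / (x ** 2).sum()
    return out * window   # slope per year rescaled to units per 41 years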
However, with continued warming in the future, external forcing has a stronger and more significant effect on the decadal trend of CA winter precipitation (Fig. 3e ), which may overwhelm the effect of the IPO. As uncertainties contain contributions from model response uncertainty and internal variability 14 , 15 , 17 , it is important to compare their time-dependent relative contributions. Comparing the time evolution of these uncertainties, the total uncertainty of CA precipitation decadal trends increases gradually with warming, while the internal variability contribution, especially the IPO component, remains stable (Fig. 3f ). Internal variability dominates the total uncertainty of the CMIP simulations, indicating a more important role of internal variability than model response uncertainty. However, the time series of the total uncertainty estimated from the CMIP models occasionally falls below the time series of the internal variability (grey solid line vs. black dashed line in Fig. 3f ). This suggests that the total uncertainty is likely underestimated when using the CMIP models, due to the limited number of simulations from each model 20 , but our analysis suggests that most of the intermodel spread in CA precipitation from the CMIP models can be represented by internal variability. Importantly, the IPO-related uncertainty explains ~50% of the internal uncertainty, accentuating the dominant contribution of the IPO to the internal uncertainty. Role of the IPO in the future changes of CA precipitation and the El Niño-like warming pattern Having demonstrated the significant role of the IPO in the large internal variability uncertainty in CA winter precipitation decadal trends, we further quantify the role of internal variability in the uncertainty of future changes of CA winter precipitation, which have been more broadly investigated in previous studies 7 , 11 , 12 , 13 , 34 , 50 . Here the differences between 2085–2099 in the RCP8.5/SSP585 simulations and 1986–2000 in the historical simulations are used to represent the future changes under global warming. Consistent with the 41-year linear trends, internal variability accounts for a marked contribution (>70%) to the total uncertainty in the future change of CA precipitation based on the three large ensembles as well as their mean (Fig. 4a ). The El Niño-like SST warming pattern has been suggested to contribute to the uncertainty of the westerly jet extension related to CA precipitation change 12 . Although the intermodel relationship between the El Niño-like warming and CA precipitation change is not significant among the CMIP5 models (Supplementary Fig. 11a ), consistent with previous studies 7 , 11 , 34 , all three large ensembles show a statistically significant intermember relationship between the two variables (Fig. 4b , Supplementary Fig. 11c–e ) at the 99% confidence level. The latter suggests an important contribution of the El Niño-like warming pattern to the internally-induced uncertainty of CA precipitation change by modulating the westerly jet extension (Supplementary Fig. 11f–h ). This contribution of the El Niño-like warming pattern to uncertainty in CA precipitation change may have been masked by the large model response uncertainty in the CMIP5 models. Notably, the CMIP6 models show a more significant intermodel relationship ( r = 0.59, Supplementary Fig. 11b ) than the CMIP5 models ( r = 0.23). 
Whether model improvement or reduced model response uncertainty from CMIP5 to CMIP6 has contributed to this change in the relationship needs further investigation. Fig. 4: Role of the Interdecadal Pacific Oscillation (IPO) in the uncertainty of the El Niño-like warming pattern in the future. a Fraction of the total uncertainty in California precipitation (CA prec) change explained by internal variability based on three large ensembles and their mean. b Scatterplot of the El Niño-like warming (K, x -axis) versus California precipitation change (mm day⁻¹, y -axis) based on 50 members from CanESM2. c The El Niño-like warming pattern change (K) based on all members, 57 CMIP5 members, 71 CMIP6 members, 40 CESM1, 50 CanESM2, and 100 MPI-ESM members, respectively. Grey bars denote the ensemble mean, and error bars denote one standard deviation. The first term denotes the effect of external forcing (average of the five ensembles, grey bar) and the total uncertainty of the 128 CMIP5 and CMIP6 simulations (error bar). d Fraction of the total uncertainty in the El Niño-like warming explained by internal variability based on three large ensembles and their mean. e Intermember regression of sea surface temperature (SST) change (K) onto the El Niño-like pattern change based on 50 members from CanESM2. f Scatterplot of the El Niño-like warming (K, x -axis) versus the IPO change (K, y -axis) based on 50 members from CanESM2. The regression line is shown as the red line, and the intermember correlation ( r ) is shown at the top right of panels b and f . The El Niño-like pattern is based on the SST inside the black rectangles, and the IPO is based on the SST inside the red rectangles in e . All the changes are based on the difference between RCP8.5/SSP585 (2085–2099) and historical (1986–2000). Most models project an El Niño-like warming pattern under global warming, except for MPI-ESM, which projects a La Niña-like warming pattern (Fig. 4c ) consistent with its projection of a drying trend under external forcing (grey bar in Fig. 3d ). It is noteworthy that the CMIP6 models project a stronger El Niño-like warming than the CMIP5 models based on the multimodel mean, but with a large intermodel spread in both. Even for a single model, the intermember spread is comparable to the mean, underscoring the large uncertainty from internal variability in the future change of the SST warming pattern. The fractions of the total uncertainty in the El Niño-like warming explained by internal variability are estimated based on the three large ensembles as well as their mean (Fig. 4d ). Internal variability can explain >50% of the total uncertainty in the future El Niño-like warming pattern based on the three large ensembles’ mean. We hypothesize that, among the internal modes, the IPO may contribute substantially to the uncertainty of the future El Niño-like warming pattern, which induces uncertainty in the CA precipitation change through the atmospheric teleconnection. To test this hypothesis, we investigate the intermember regression of SST change onto the El Niño-like warming index (Fig. 4e ). The regressed SST exhibits a pattern remarkably similar to that of the positive IPO phase, indicating a close relationship between the IPO change and the El Niño-like pattern 29 . The correlation coefficient between the IPO change and the El Niño-like warming pattern is 0.79 based on the 50 members of CanESM2, statistically significant at the 99% confidence level (Fig. 4f ). 
Hence the IPO explains ~62% of the internally-induced uncertainty of the El Niño-like warming pattern based on CanESM2. The other two large ensembles, based on CESM1 and MPI-ESM, show consistent results supporting our hypothesis (Supplementary Fig. 12 ). As an internal climate mode, the IPO changes are symmetric about zero, averaging to nearly no change (see Methods), while the El Niño-like warming pattern has components of internal variability and the model response to external forcing, with the latter tending to be positive in CESM1 and CanESM2 and negative in MPI-ESM (Fig. 4f , Supplementary Fig. 12c, d ). These results confirm the important contribution of the IPO to the uncertainty of the future change in the El Niño-like warming pattern. Discussion Located in the path of the Pacific storm tracks and significantly influenced by storms, CA winter precipitation is highly variable 51 . Using a large set of climate simulations including two multimodel ensembles (CMIP5 and CMIP6) and three single-model large ensembles, for a total of 318 simulations, this study links the uncertain CA winter precipitation decadal trends and future changes to the uncertain El Niño-like warming pattern through their respective connections to internal variability. Specifically, internal variability accounts for >80% of the CMIP intermodel spread of CA winter precipitation decadal trends in the past (1979–2019), near-future (2020–2060), and far-future (2061–2099) periods. Moreover, internal variability is estimated to account for more than half of the total uncertainty in the projected El Niño-like warming pattern and contributes >70% of the intermodel spread in the future change of CA precipitation. The uncertainties in CA precipitation changes and the El Niño-like warming are physically linked by the IPO, which connects the two by modulating the Aleutian low and the westerly jet extension. Accounting for the positive-to-negative phase transition of the IPO during 1979–2019 by linear regression, the simulated CA precipitation trends are comparable to the observed drying trends. In addition, simulations that reproduce the observed recent CA drying feature the negative IPO pattern. Recognizing and understanding the relative contributions of internal variability and model response to the total uncertainty in CA precipitation projections can help focus our efforts on addressing uncertainty and improve communication of the uncertainty to stakeholders who use the climate information. Although reducing the uncertainty of the model response to external forcing may only reduce the uncertainty in CA precipitation projections by <30% (because >70% of the uncertainty comes from internal variability), we have identified uncertainty in the El Niño-like warming response to external forcing as an important and potentially reducible uncertainty factor for targeted future research. This is hinted at by the increased correlation between the El Niño-like warming and the CA precipitation projections in CMIP6 relative to CMIP5, although more detailed analysis of the reasons behind the difference is needed. Based on the above analysis, a teleconnection between the IPO-related jet extension, persistent blocking high pressure, and CA drought can be inferred. In terms of the uncertainty from internal variability, strong and significant negative correlations between the linear trends of the jet extension and the presence of persistent high pressure exist for all three large ensembles as well as the three periods (Supplementary Fig. 
13 ), demonstrating this teleconnection. It indicates that the positive-to-negative phase transition of the IPO may contribute to CA drought by inducing a westward retreat of the jet and persistent high pressure, with the latter steering advected moisture away from CA. In particular, the drying trend of −0.61 mm day⁻¹ (41 yr)⁻¹ during 1979–2019 has reduced CA precipitation by ~28% of its climatological mean. Our findings highlight the importance of internal variability, especially the positive-to-negative phase transition of the IPO, in this observed drying trend, and caution against attributing model-observation differences in the historical simulations to model errors. Although uncertainty from internal variability is irreducible, given the long timescale of the IPO, improving its decadal prediction may potentially reduce uncertainty in predicting the decadal trends in CA precipitation in the near future and support stakeholders in planning for the changing likelihood of extreme events such as floods and droughts. Near-term predictions of the IPO based on initialized multimodel ensemble decadal hindcasts have shown some promise, with future improvements possible through community activities 52 , 53 . Lastly, our finding of the dominant role of internal variability in CA precipitation trends and projections is partly conditioned on the estimation of the total uncertainty from the CMIP multimodel ensembles, which reflect uncertainty from both the model response and internal variability. Although this approach is also commonly used in many previous studies 7 , 11 , 34 , comparison between the total uncertainty estimated using CMIP simulations and the internal variability estimated using the large ensemble simulations suggests that the total uncertainty is likely underestimated, due to the limited number of simulations from each model 20 . This calls for large ensemble simulations from more modeling centers in the future to better quantify both model uncertainty and internal variability. Methods Models and datasets In this study, we use the monthly gridded precipitation data from the Global Precipitation Climatology Project Version 2.3 (GPCP, ref. 48 ) and the CPC Merged Analysis of Precipitation Version 2002 (CMAP, ref. 49 ), both covering the period 1979–2019 with a horizontal resolution of 2.5° × 2.5°. Observed monthly SST data are taken from the Hadley Centre Sea Ice and Sea Surface Temperature dataset (HadISST), covering the period 1870 to 2019 with a horizontal resolution of 1.0° × 1.0° (ref. 54 ). To estimate the roles of internal variability and model response uncertainty in the total uncertainty, we use monthly outputs from five simulation ensembles combining both the historical and future scenarios: (1) historical (1962–2005) and RCP8.5 (2006–2099) simulations of 57 members from 37 models from CMIP5 (Supplementary Table 1 ; ref. 18 ); (2) historical (1962–2015) and SSP585 (2016–2099) simulations of 71 members of 37 models from CMIP6 (Supplementary Table 1 ; ref. 19 ); (3) historical (1962–2005) and RCP8.5 (2006–2099) simulations of 50 members from the CanESM2 large ensemble project (refs. 55 , 56 ); (4) historical (1962–2005) and RCP8.5 (2006–2099) simulations of 40 members from the CESM1 large ensemble project (ref. 57 ); and (5) historical (1962–2005) and RCP8.5 (2006–2099) simulations of 100 members from the MPI-ESM Grand Ensemble project (ref. 58 ). 
All the outputs from the CMIP5 and CMIP6 models are interpolated to a common 73 × 144 global grid using bilinear interpolation. The three sets of large ensemble simulations are the only ones providing 40 or more members, more than the ensemble simulations from other models. The total number of members from the five simulation ensembles is 318. Estimating total uncertainty, internal uncertainty, and external forcing To estimate the total uncertainty in the CA precipitation change, we first build a CMIP ensemble including all 128 members from the CMIP5 and CMIP6 models. The standard deviation (STD) of this CMIP ensemble is then used to estimate the total uncertainty, including the uncertainty arising from internal variability and that from model differences. We note that internal uncertainty might be underestimated in the total uncertainty due to the limited number of ensemble members in each CMIP model 20 . Using all 318 members from the five simulation ensembles to calculate the total uncertainty does not change our results. Consistent with previous studies 15 , 17 , 59 , internal uncertainty is determined by the STD of the large ensemble of a given model. Because the different members in a large ensemble of a single model are driven by the same external forcing but differ in their initial conditions, internal variability arising from random climate variations can be estimated by the spread of the ensemble members of a given climate model. Averaging across the three large ensembles yields the multimodel mean internal variability. The contribution of internal variability is calculated as the ratio between the internal variability uncertainty and the total uncertainty, while the intermodel uncertainty can be inferred from the residual, assuming the sources of uncertainty are additive 15 . To separate the effect of external forcing in CA precipitation change, the ensemble mean of all the large ensemble members of a single model can be taken as that model’s response to external forcing. The variance across the ensemble means of the three large ensembles represents the model uncertainty in the response to external forcing. To better reduce both the internal variability and the intermodel spread in estimating the external forcing, we first average all the members from CMIP5, CMIP6, and each of the three large ensembles, respectively, and then average the five ensemble means to get the external forcing effect in Fig. 3 . Definitions of the IPO, El Niño-like pattern, and indices of key physical processes Similar to previous studies 59 , 60 , 61 , we define the IPO index as the difference between the SST anomalies averaged within the tropical central-eastern Pacific (10°S to 15°N, 180°E to 90°W) and the North Pacific (25°N to 40°N, 150°E to 140°W). The SST anomalies are obtained as the deviations in each year from the long-term mean for the observations, and as the deviations of each ensemble member from the ensemble mean for each large ensemble model. The IPO index is then smoothed with a 7-year running average. The spatial patterns and timescales of the IPO simulated in the three large ensembles are reasonable (Supplementary Fig. 2 ), indicating the reliability of the three models for studying the IPO. The El Niño-like pattern is defined as the SST difference between the tropical central-eastern Pacific (10°S–10°N, 180°E–90°W) and the tropical western Pacific (10°S–10°N, 110°E–140°E). 
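As a concrete reading of these index definitions, a minimal xarray sketch follows; the anomaly input, dimension names, ascending-latitude ordering, and 0–360° longitude convention are assumptions for illustration, not the authors' code (which is in NCL).

```python
# Minimal sketch of the IPO and El Niño-like indices defined above, assuming
# an xarray DataArray `sst_anom` with dims (time, lat, lon), annual time
# steps, and anomalies already removed as described in Methods.
import xarray as xr

def box_mean(da, lat0, lat1, lon0, lon1):
    return da.sel(lat=slice(lat0, lat1), lon=slice(lon0, lon1)).mean(("lat", "lon"))

def ipo_index(sst_anom: xr.DataArray) -> xr.DataArray:
    ce_pac = box_mean(sst_anom, -10, 15, 180, 270)  # 10°S-15°N, 180°E-90°W
    n_pac = box_mean(sst_anom, 25, 40, 150, 220)    # 25°N-40°N, 150°E-140°W
    return (ce_pac - n_pac).rolling(time=7, center=True).mean()  # 7-yr smoothing

def nino_like_index(sst_anom: xr.DataArray) -> xr.DataArray:
    ce_pac = box_mean(sst_anom, -10, 10, 180, 270)  # 10°S-10°N, 180°E-90°W
    w_pac = box_mean(sst_anom, -10, 10, 110, 140)   # 10°S-10°N, 110°E-140°E
    return ce_pac - w_pac
```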
To illustrate the physical mechanisms supporting the IPO’s influence on CA precipitation change, several indices are defined based on the regression patterns onto the IPO in Fig. 2 : the precipitation averaged over California (32°–42°N, 115°–124°W); the sea level pressure (SLP) averaged over the eastern North Pacific (25°–55°N, 180°E–120°W); and the westerly jet extension, defined as the 200-hPa zonal wind averaged over 20°–37°N, 170°E–115°W. We focus on the winter mean (December–January–February), the wet season in CA, throughout this study. Contributions of the IPO to the uncertainty in California precipitation change Following ref. 59 , we evaluate how much the IPO contributes to the uncertainty in CA precipitation change in several steps: (1) the IPO index is derived for each ensemble member; (2) the CA precipitation variations that are linearly related to the IPO index are removed through a linear regression; (3) the STDs of the CA precipitation change with and without the IPO are compared to estimate the uncertainty arising from the IPO; (4) a fixed IPO influence based on the observed trend of the IPO during 1979–2019 is added to each member after step (2), so that all the members are influenced by the same IPO evolution and can be compared with the observed CA precipitation change during 1979–2019. Data availability The GPCP data is available at . The CMAP data is available at . HadISST is available at . The raw outputs of CMIP5 models are available at . The raw outputs of CMIP6 models are available at . Large ensembles of CanESM2 are available at . Large ensembles of CESM1 are available at . Large ensembles of MPI-ESM are available at . Code availability The codes to generate the figures are based on NCAR Command Language (NCL v.6.4.0; ) and are available at .
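A hedged numpy sketch of the four-step procedure described in Methods above follows, using per-member 41-year trends as inputs; the variable names and the exact form of step (4) are one plausible reading of the description, not the authors' NCL code.

```python
# Minimal sketch of the IPO constraint: ca_trend and ipo_trend hold one
# 41-year trend per member of a large ensemble (step 1 is assumed done).
import numpy as np

def constrain_by_ipo(ca_trend, ipo_trend, obs_ipo_trend=None):
    x = ipo_trend - ipo_trend.mean()
    slope = (x * (ca_trend - ca_trend.mean())).sum() / (x ** 2).sum()
    residual = ca_trend - slope * x                    # step (2): IPO removed
    out = {"std_raw": ca_trend.std(ddof=1),            # step (3): compare spreads
           "std_no_ipo": residual.std(ddof=1)}
    if obs_ipo_trend is not None:                      # step (4): impose the
        out["constrained"] = (residual                 # observed IPO evolution
                              + slope * (obs_ipo_trend - ipo_trend.mean()))
    return out
```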
Over the past 40 years, winters in California have become drier. This is a problem for the region's agricultural operations, as farmers rely on winter precipitation to irrigate their crops. Determining whether California will continue getting drier, or whether the trend will reverse, has implications for its millions of residents. But so far, climate models that account for changes in greenhouse gases and other human activities have had trouble reproducing California's observed drying trends. When climate models project the future or simulate the past, they can't agree on long-term precipitation trends. Researchers at Pacific Northwest National Laboratory (PNNL) want to know why, because these mixed results aren't very useful for future water resource planning. "When we see these large uncertainties in model simulations and projections, we have to ask whether or not the models are up for the task," said Ruby Leung, a Battelle Fellow and atmospheric scientist at PNNL. "One challenge with modeling California is that long-term natural cycles heavily affect its precipitation." These cycles range from years long, like El Niño and La Niña, to decades long, like the Interdecadal Pacific Oscillation (IPO). They represent natural variability associated with sea surface temperature patterns in the Pacific Ocean and affect winter precipitation in California. But how much of a role do they play in spawning uncertainty in California's precipitation projections? A big one, it turns out. Results from Leung and a PNNL team show that natural cycles are responsible for >70 percent of the uncertainty in model simulations of precipitation trends over the past 40 years. By isolating the effects of the natural cycles, scientists can focus on improving models to reduce the remaining uncertainty related to how greenhouse gases and other human activities affect climate. The impact of ensembles With more computing power, researchers can now run large sets of simulations called large ensemble simulations. To produce them, researchers run climate models 40 to 100 times with minor differences in their starting conditions. Because everything except the starting conditions remains the same, these ensembles provide a unique representation of natural variability. Modeling centers around the world also run simulations that contribute toward multi-model ensembles. These represent the total uncertainty due to both natural variability and model uncertainty. Leung and her team analyzed three ensemble simulations generated by three different climate models and two multi-model ensembles from two recent climate model generations. They wanted to determine the sources of uncertainty in the projections of California precipitation. What they found surprised them. The team found that natural climate cycles were responsible for roughly 70 percent of the total uncertainty in model simulations of California precipitation trends over the past 40 years. That leaves 30 percent of the uncertainty for how models represent human influence on climate. "We know that natural cycles have major impacts on California's climate, but we didn't think that they would dominate the total uncertainty in climate simulations to this extent," said Leung. "This result shows the importance of large ensemble simulations for isolating human influence on climate, which may be small compared to natural cycles in some regions." Natural cycles versus human impacts Of the natural cycles that influence California's climate, the IPO is one of the most important. 
Its decades-long phases help determine if California is in a wetting or drying trend. The team's results point to its substantial role in California's drying over the past 40 years. Currently, climate models have limited skill in predicting the transition between the IPO phases—especially decades from now. Therefore, future projections of California precipitation have large uncertainty due to IPO cycles. So where does that leave human-induced changes, like warming and increasing greenhouse gases? They still play a substantial role in shaping the future climate and weather. As greenhouse gases continue to accumulate in the atmosphere and the ocean's large heat capacity catches up with increasing temperatures, warming and its effects will become more pronounced. "Natural variability, such as the IPO, is like background noise," said Leung. "Although that noise is substantial, the climate response to rising concentrations of greenhouse gases is a signal that grows over time. Focusing our efforts on reducing model disagreement for this signal is impactful, particularly when looking to the far future." Understanding the extent to which natural and external factors affect California precipitation helps researchers better contextualize their projections. This knowledge helps modelers explain why their models might be missing the mark in simulating past observed trends. Scientists can then communicate more nuanced results to people planning California's water future.
10.1038/s41467-021-26797-5
Chemistry
A protein mines, sorts rare earths better than humans, paving way for green tech
Joseph Cotruvo, Enhanced rare-earth separation with a metal-sensitive lanmodulin dimer, Nature (2023). DOI: 10.1038/s41586-023-05945-5. www.nature.com/articles/s41586-023-05945-5 Journal information: Nature
https://dx.doi.org/10.1038/s41586-023-05945-5
https://phys.org/news/2023-05-protein-rare-earths-humans-paving.html
Abstract Technologically critical rare-earth elements are notoriously difficult to separate, owing to their subtle differences in ionic radius and coordination number 1 , 2 , 3 . The natural lanthanide-binding protein lanmodulin (LanM) 4 , 5 is a sustainable alternative to conventional solvent-extraction-based separation 6 . Here we characterize a new LanM, from Hansschlegelia quercus (Hans-LanM), with an oligomeric state sensitive to rare-earth ionic radius, the lanthanum(III)-induced dimer being >100-fold tighter than the dysprosium(III)-induced dimer. X-ray crystal structures illustrate how picometre-scale differences in radius between lanthanum(III) and dysprosium(III) are propagated to Hans-LanM’s quaternary structure through a carboxylate shift that rearranges a second-sphere hydrogen-bonding network. Comparison to the prototypal LanM from Methylorubrum extorquens reveals distinct metal coordination strategies, rationalizing Hans-LanM’s greater selectivity within the rare-earth elements. Finally, structure-guided mutagenesis of a key residue at the Hans-LanM dimer interface modulates dimerization in solution and enables single-stage, column-based separation of a neodymium(III)/dysprosium(III) mixture to >98% individual element purities. This work showcases the natural diversity of selective lanthanide recognition motifs, and it reveals rare-earth-sensitive dimerization as a biological principle by which to tune the performance of biomolecule-based separation processes. Main The irreplaceable roles of rare-earth (RE) elements in ubiquitous modern technologies ranging from permanent magnets to light-emitting diodes and phosphors have renewed interest in one of the grand challenges of separation science—efficient separation of lanthanides 1 . The separation of these 15 elements is complicated by the similar physicochemical properties of their predominating +III ions, with ionic radii decreasing only 0.19 Å between La(III) and Lu(III) (ref. 7 ), which also leads to these metals co-occurring in RE-bearing minerals. Conventional hydrometallurgical liquid–liquid extraction methods for RE production utilize organic solvents such as kerosene and toxic phosphonate extractants and require dozens or even hundreds of stages to achieve high-purity individual RE oxides 3 , 8 . The inefficiency and large environmental impact of RE separations 9 have stimulated research efforts into alternative ligands with larger separation factors between adjacent REs 10 , 11 , 12 , 13 , 14 , and greener process designs to achieve RE separation in fewer stages 15 and using all-aqueous chemistry 6 , 16 , 17 , 18 , 19 , 20 . The discovery of the founding member of the LanM family of lanthanide-binding proteins demonstrated that nature has evolved macromolecules surpassing the selectivity of synthetic f-element chelators 4 . The prototypal LanM, from M. extorquens AM1 (Mex-LanM), is a small (12-kDa), monomeric protein that undergoes a selective conformational response to picomolar concentrations of lanthanides 4 , 18 and actinides 21 , 22 , 23 , 24 , has facilitated understanding of lanthanide uptake in methylotrophs 25 , and has served as a technology platform for f-element detection 26 , recovery 18 , 27 and separation 6 . Unusually among RE chelators, Mex-LanM favours the larger and more abundant light REs (LREs), especially La(III)–Sm(III), over heavy REs (HREs) 4 . 
Our recent demonstration that even single substitutions to the metal-binding motifs of Mex -LanM can improve actinide/lanthanide separations 23 spurred us to investigate whether orthologues of Mex -LanM might possess distinct, and potentially useful, metal selectivity trends. Herein, we report that the LanM from Hansschlegelia quercus ( Hans -LanM), a methylotrophic bacterium isolated from English oak buds 28 , exhibits enhanced RE separation capacity relative to Mex -LanM. Whereas Mex -LanM is always monomeric, Hans -LanM exists in a monomer/dimer equilibrium, the position of which depends on the specific RE bound. Three X-ray crystal structures of LanMs and structure-guided mutagenesis explain Hans -LanM's RE-dependent oligomeric state and its greater separation capacity relative to Mex -LanM. Finally, we leverage these findings to achieve single-stage Hans -LanM-based separation of the critical neodymium/dysprosium pair. These results illustrate how intermolecular interactions—common in proteins but rare in small molecules—may be exploited to improve RE separations. Hans -LanM's distinct selectivity profile We have proposed 4 several hallmarks of a LanM. First, LanMs possess four EF-hand motifs. EF hands comprise 12-residue, carboxylate-rich metal-binding loops flanked by α-helices, which traditionally respond to Ca II binding 29 ; in Mex -LanM, however, EF hands 1–3 bind lanthanide(III) ions with low-picomolar affinity and 10 8 -fold selectivity over Ca II , resulting in a large, lanthanide-selective disorder-to-order conformational transition 4 . EF4 binds with only micromolar affinity. Second, adjacent EF hands in LanMs are separated by 12–13 residues—rather than the typical ≈25 residues in Ca II -responsive EF-hand proteins—resulting in an unusual three-helix bundle architecture with the metal-binding sites on the periphery 5 . Third, at least one EF hand contains proline at the second position (in Mex -LanM, all four EF hands feature P 2 residues). We searched sequence databases using the first two criteria and a sequence length of <200 residues, identifying 696 putative LanMs. These sequences were visualized using a sequence similarity network 30 to identify LanM sequences that cluster separately from Mex -LanM. Notably, at a 65% identity threshold, a small cluster of sequences forms that is remote from the main cluster of 642 sequences (Fig. 1a ). This exclusive cluster (the Hans cluster) includes bacteria from several genera, including Hansschlegelia and Xanthobacter (Extended Data Fig. 1 ), all of which are facultative methylotrophs 31 . Fig. 1: Hans -LanM diverges from Mex -LanM in sequence and RE versus RE selectivity. a , Sequence similarity network of core LanM sequences indicates that Hans -LanM forms a distinct cluster. The sequence similarity network includes 696 LanM sequences connected with 48,647 edges, thresholded at a BLAST E value of 1 × 10 −5 and 65% sequence identity. The black box encloses nodes clustered with Hans -LanM. The node for the Mex -LanM sequence (down triangle) and the four Hansschlegelia nodes (up triangles) are enlarged compared to other nodes (circles). Colours of the nodes represent the family from which the sequences originate. b , Comparison of the sequences of the four EF hands of Mex - and Hans -LanMs. Residues canonically involved in metal binding in EF hands are in blue; Pro residues are in purple.
c , Circular dichroism spectra from a representative titration of Hans -LanM with La III , showing the metal-associated conformational response increasing helicity; apoprotein is bold black, La III -saturated protein is bold red. d , Circular dichroism titration of Hans -LanM with La III , Nd III and Dy III (pH 5.0). Each point represents the mean ± s.d. from three independent experiments. e , Comparison of K d,app values (pH 5.0) for Mex -LanM (black 18 ) and Hans -LanM (red), plotted versus ionic radius 7 . Mean ± s.e.m. from three independent experiments. Hans -LanM features low (33%) sequence identity with Mex -LanM (Supplementary Fig. 1 ) and divergent EF-hand motifs, particularly at the first, second and ninth positions (Fig. 1b ), which are important positions in Mex -LanM 23 , 26 and other EF-hand proteins 29 . Therefore, Hans -LanM presented an opportunity to determine features essential for selective lanthanide recognition in LanMs. Hans -LanM was expressed in Escherichia coli as a 110 amino acid protein (Supplementary Fig. 1 ). La III and Nd III were selected as representative LREs and Dy III was selected as a representative HRE for complexation studies. The protein binds about three equivalents of La III and Nd III , and slightly less Dy III , by inductively coupled plasma mass spectrometry (Supplementary Table 1 ), as does Mex -LanM 4 . Also like Mex -LanM 4 , Hans -LanM exhibits little helical content in the absence of metal, as judged by the circular dichroism signal at 222 nm (Fig. 1c ). Unexpectedly, only two equivalents of La III or Dy III were sufficient to cause Hans -LanM's complete conformational change (Supplementary Fig. 2 ), indicating that the third binding equivalent is weak and does not increase helicity. The apparent dissociation constants ( K d,app ) determined by circular dichroism spectroscopy 4 reflect the RE versus RE, and RE versus non-RE, selectivities of Mex -LanM under competitive RE recovery conditions 6 , 18 . Therefore, similar determinations of K d,app with free metal concentrations controlled by a competitive chelator 4 , 32 were applied to Hans -LanM; the results (Fig. 1d and Supplementary Table 2 ) diverged from those for Mex -LanM. Binding of La III and Nd III to Hans -LanM increases molar ellipticity at 222 nm by 2.3-fold, the full conformational change evident in stoichiometric titrations. The conformational change is cooperative (Hill coefficients, n , of 2; Supplementary Table 2 ), and the K d,app values are similar, 68 and 91 pM, respectively. By contrast, even though Dy III induces the same overall response as La III in stoichiometric titrations (Supplementary Fig. 2 ), in the chelator-buffered Dy III titrations Hans -LanM exhibits a lesser conformational response (1.8-fold increase). This difference indicates that at least one of the Dy III -binding sites is very weakly responsive ( K d,app > 0.3 µM, the highest concentration accessible in the chelator-buffered titrations). The main response to Dy III occurs at 2.6 nM, >30-fold higher than with the LREs, and with little or no cooperativity ( n = 1.3). By contrast, Mex -LanM shows only a modest preference for LREs (about fivefold; Fig. 1e ; ref. 4 ), and all lanthanides and Y III induce similar conformational changes and cooperativity 18 . Hans -LanM responds to calcium(II) weakly ( K d,app = 60 µM), with the same lack of cooperativity ( n = 1.0) and partial conformational change evident with Dy III (Extended Data Fig. 2 ).
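As an illustrative aside, apparent affinities and Hill coefficients of this kind are typically extracted by fitting a Hill model to the normalized titration response. The sketch below is hypothetical: the data are synthetic, regenerated from the reported La III parameters (K d,app ≈ 68 pM, n ≈ 2), and this is not the study's analysis code, which followed its cited protocols 4 , 32 .

```python
# Minimal sketch: fitting a Hill model to a chelator-buffered titration.
# Data are synthetic, generated from the reported La(III) parameters.
import numpy as np
from scipy.optimize import curve_fit

def hill(m_free, kd, n):
    """Normalized conformational response versus free-metal concentration
    (both in pM), for apparent dissociation constant kd and Hill coefficient n."""
    return m_free**n / (kd**n + m_free**n)

m_free = np.array([1, 3, 10, 30, 100, 300, 1000, 3000])  # free La(III), pM
rng = np.random.default_rng(0)
signal = hill(m_free, 68, 2) + rng.normal(0, 0.01, m_free.size)

(kd_fit, n_fit), _ = curve_fit(hill, m_free, signal, p0=[50, 1.5])
print(f"K_d,app ~ {kd_fit:.0f} pM, Hill n ~ {n_fit:.1f}")  # ~68 pM, ~2
```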
Therefore, Hans -LanM discriminates more strongly between LREs and HREs than does Mex -LanM, with the HRE complexes exhibiting lower affinity, lesser cooperativity and a lesser primary conformational change. LRE-selective dimerization The distinct behaviours of the LRE– and HRE– Hans -LanM complexes suggested mechanism(s) of LRE versus HRE selectivity not present in Mex -LanM. As Mex -LanM is a monomer in complex with LREs and HREs alike 4 , 5 , we considered that LREs and HREs might induce different oligomeric states in Hans -LanM. In the presence of three equivalents of La III , Hans -LanM elutes from a size-exclusion chromatography (SEC) column not at the expected molecular weight (MW) of 11.9 kDa but instead at 27.8 kDa, suggestive of a dimer (Supplementary Figs. 3 and 4a ). Starting gradually after Nd III but sharply at Gd III , the apparent MW decreases towards that expected for a monomer (Fig. 2a , Supplementary Fig. 4 and Supplementary Table 3 ). Notably, lanthanides heavier than Gd III do not seem to support growth of RE-utilizing bacteria 33 , 34 , 35 . Fig. 2: A dimerization equilibrium sensitive to LRE versus HRE or non-RE coordination. a , Apparent molecular weight of Hans -LanM complexes with REs as determined by analytical SEC (red lines) or SEC–MALS (black dashed line). See Supplementary Table 1 for conditions. Each individual data point is the result of a single experiment. b , The La III -bound Hans -LanM dimer as determined by X-ray crystallography. La III ions are shown as green spheres and Na I ions are shown as grey spheres. c , Detailed view of the dimer interface near EF3 of chain A (blue cartoon). Arg100 from chain C (light blue cartoon) anchors a hydrogen-bonding network involving Asp93 of chain A and two EF3 La III ligands (Glu91 and Asp85). These interactions constitute the sole polar contacts at the dimer interface, providing a means to control the radius of the lanthanide-binding site at EF3. d , Schematic of the interactions at the dimer interface. Red dashed lines indicate hydrogen-bonding interactions and grey dashed lines indicate hydrophobic contacts. e , DENSS projections of electron density from small-angle X-ray scattering datasets for La III -bound (left) and Dy III -bound (right) Hans -LanM, overlaid with a PyMOL-generated ribbon diagram of the dimeric La III – Hans -LanM crystal structure. To provide further support for preferential dimerization in the presence of physiologically relevant LREs, RE complexes of Hans -LanM were analysed using multi-angle light scattering (MALS; Fig. 2a and Supplementary Fig. 5 ). The La III , Nd III and Gd III complexes have MWs of 22–25 kDa, indicative of dimers, but MWs decrease starting with Tb III and continue to Dy III and Ho III , at about 15 kDa (Extended Data Table 1 ), in agreement with the SEC data. Ca II -bound Hans -LanM also indicated a MW of 14.7 kDa. The HRE–, Ca II – and apo Hans -LanM complexes are still one-third larger than expected for a monomer, however, suggesting that these forms exist in a rapid equilibrium with ≈2:1 monomer/dimer ratio under these conditions. We next determined the K d for dimerization ( K dimer ) of apo, La III -bound and Dy III -bound Hans -LanM by isothermal titration calorimetry (Extended Data Table 2 and Supplementary Figs. 6 – 8 ). The apoprotein and Dy III -bound protein weakly dimerize, with K dimer values of 117 µM and 60 µM, respectively, consistent with the ratios of monomer and dimer reflected in the SEC and MALS traces.
In the presence of La III , however, the dimer was too tight to be able to observe monomerization by isothermal titration calorimetry, which indicates that K dimer <0.4 µM (Supplementary Fig. 8 ). Thus, La III favours Hans -LanM’s dimerization by >100-fold over Dy III . A 1.8-Å-resolution X-ray crystal structure of Hans -LanM in complex with La III confirms LRE-induced dimerization (Extended Data Fig. 3 and Supplementary Table 4 ). Two LanM monomers interact head-to-tail (Fig. 2b ), burying about 600 Å 2 of surface area through hydrophobic and polar contacts (Fig. 2c,d ). These interactions occur largely between side chains contributed by the core helices α1 (between EF1 and EF2) and α2 (between EF3 and EF4; Supplementary Fig. 9 ). Residues at the dimer interface make direct contact with only one of the four metal-binding sites, EF3; three residues of EF3 in each monomer form a hydrogen-bonding network with Arg100 of the other monomer (Fig. 2c ), suggesting that occupancy and coordination geometry at this site may control oligomeric state. Hans -LanM and its complexes with three equivalents of La III , Nd III and Dy III were also analysed by small-angle X-ray scattering (Supplementary Figs. 10 and 11 ). The calculated solvent envelopes 36 from the small-angle X-ray scattering data fit well to the crystallographic Hans -LanM dimer for La III – Hans -LanM, adequately for Nd III – Hans -LanM, but poorly for Dy III – Hans -LanM (Fig. 2e and Supplementary Figs. 12 – 14 ). The weaker dimerization of Dy III – Hans -LanM is also supported by quantitative metrics, such as the Porod volume (Supplementary Figs. 15 and 16 and Supplementary Tables 5 and 6 ). Together, the biochemical and structural results indicate that Hans -LanM’s dimerization equilibrium depends strongly on the particular RE bound. Structural basis for dimerization The structure of La III – Hans -LanM also provides one of the first detailed views of the coordination environments in a LanM, and indeed any natural biomolecule tasked with reversible lanthanide recognition. The previous NMR structure of Mex -LanM 5 revealed the protein’s unusual fold, but it could not provide molecular-level details about the metal-binding sites. To understand the basis for LRE versus HRE discrimination, we also determined a 1.4-Å-resolution structure of Dy III – Hans -LanM. Finally, we report a 1.01-Å-resolution structure of Nd III – Mex -LanM, which rationalizes Mex -LanM’s shallower RE selectivity trend 4 . In La III – Hans -LanM, EF1–3 are occupied by La III ions (Extended Data Fig. 3b–e ). EF4 is structurally distinct, does not exhibit anomalous difference density consistent with La III and was modelled with Na I (Supplementary Fig. 17a ). Each La III -binding site is ten-coordinate, as observed in structures of lanthanide-dependent methanol dehydrogenases 33 , 37 (Supplementary Fig. 18 ). A monodentate Asn (N 1 position), four bidentate Asp or Glu residues (D 3 , D 5 , E 9 and E 12 ) and a backbone carbonyl (T 7 or S 7 ) complete the first coordination sphere in EF1–3 (Fig. 3a ). Exogenous solvent ligands are not observed (Supplementary Fig. 17b ); luminescence studies of Eu III – Hans -LanM to determine the number of coordinated solvent molecules ( q ) yielded q = 0.11, consistent with the absence of solvent ligands in the X-ray structure (Supplementary Fig. 19 ). Fig. 3: Hans -LanM uses an extended hydrogen-bonding network to control lanthanide selectivity. a , Zoomed-in views of EF2 (left) and EF3 (right) in La III – Hans -LanM. 
La III ions are shown as green spheres. Coordination bonds and hydrogen bonds are shown as dashed lines. Residues contributed by chain A are shown in blue and those contributed by chain C (in the case of EF3) are shown in light blue. Inset: overlay of La III – Hans -LanM (blue and light blue) with Dy III – Hans -LanM (grey), showing the carboxylate shift of Glu91 from bidentate (La) to monodentate (Dy). Coordination and hydrogen bonds (dashed lines) are shown only for the Dy case. b , Representative metal-binding site (EF3) in Nd III – Mex -LanM. Nd III ion is shown as an aqua sphere. Solvent molecules are shown as red spheres. The lanthanide-binding sites in Hans -LanM additionally share extensive second-sphere interactions that may further constrain the positions of the ligands and the size of the metal-binding pore (Supplementary Fig. 20 ). This phenomenon is most obvious in EF3, at which the dimer interface mediates an extended hydrogen-bonding network involving several ligands. Arg100, contributed by the adjacent monomer, projects into the solvent-exposed side of EF3 to contact two carboxylate ligands, Asp85 (D 3 ) and Glu91 (E 9 ), enforcing their bidentate binding modes. Arg100 is also buttressed by Asp93 (EF3 D 11 ), unique to EF3 within Hans -LanM and not observed in Mex -LanM. We tested the importance of this network in Hans -LanM dimerization by making the minimal substitution, R100K. Hans -LanM(R100K) exhibited K d,app values and responses to Nd III and Dy III nearly identical to those of wild-type Hans -LanM, but the K d,app for La III was twofold weaker (Supplementary Fig. 21 and Supplementary Table 7 ). SEC–MALS analysis indicated MWs of 10–13 kDa for apo, La III – and Dy III – Hans -LanM(R100K) (Supplementary Fig. 22 and Supplementary Table 8 ), indicative of increased monomerization, especially for the La III complex, and suggesting that weaker dimerization may be responsible for the lower La III affinity. All four residues comprising the Arg100–EF3 network are completely conserved in the Hans cluster (Supplementary Fig. 23 ), suggesting that these interactions may contribute to dimerization in these LanMs. The structure of Dy III – Hans -LanM confirms the importance of second-sphere control of ligand positioning (Extended Data Fig. 4 , Supplementary Figs. 24 – 26 and Supplementary Tables 9 and 10 ). The overall structure of Dy III – Hans -LanM is largely superimposable with that of La III – Hans -LanM, and the coordination spheres of the Dy III ions in EF1–3 are similar to those in La III – Hans -LanM (Fig. 3a , inset), with the notable exception of E 9 (for example, Glu91 in EF3). This residue shifts from bidentate with La III to monodentate with the smaller Dy III ions, yielding a nine-coordinate distorted capped square antiprismatic geometry; the lower coordination number with a HRE ion is consistent with other RE complexes 38 , 39 . In EF3, this carboxylate shift lengthens the distance between Arg100 and the proximal Oε of Glu91 from 2.9 Å (in La III – Hans -LanM) to 3.2 Å (Supplementary Fig. 27 ). The rearrangement of this second-sphere hydrogen-bonding network suggests a structural basis for RE-dependent differences in K dimer values. The metal-binding sites of Mex -LanM differ substantially from those of Hans -LanM. In Mex -LanM, all four EF hands are occupied by nine-coordinate (EF1–3) or ten-coordinate (EF4) Nd III ions, each including two solvent ligands, not present in Hans -LanM (Fig. 3b and Supplementary Fig. 28 ).
The observation of the two solvent molecules per metal site and the hydrogen bond to the D 9 residue validates recent spectroscopic studies 21 , 23 , 26 . The difference in coordination number between EF1–3 and EF4 is due to the D 3 carboxylate being monodentate in EF1–3 but bidentate in EF4. Although the Nd III sites of Mex -LanM share the nine- and ten-coordination observed in Dy III – and La III – Hans -LanM, they structurally resemble the seven-coordinate Ca II -binding sites of calmodulin (Supplementary Fig. 18 ). The increased coordination numbers in Mex -LanM relative to calmodulin result from bidentate coordination of D 5 and an additional solvent ligand. These similarities suggest that much of LanM's unique 10 8 -fold selectivity for REs over Ca II results from subtle differences in second-coordination-sphere and other more distal interactions. Finally, the exclusively protein-derived first coordination sphere in Hans -LanM, particularly due to coordination by E 9 , yields more extended hydrogen-bonding networks (Supplementary Figs. 20 and 29 ) and probably enhances control over the radius of the binding site. Thus, the structures rationalize the extraordinary RE versus non-RE selectivity of Mex -LanM and Hans -LanM while also accounting for their differences in LRE versus HRE selectivity. Single-stage Nd III /Dy III separation The differences in stability and structure between Hans -LanM's LRE and HRE complexes suggested that Hans -LanM (wild type and/or R100K) would outperform Mex -LanM in RE/RE separations. We focused on separating the RE pair of Nd III and Dy III used in permanent magnets. We first assayed the stabilities of the wild-type Hans -LanM and Hans -LanM(R100K) RE complexes against citrate, previously used as a desorbent with Mex -LanM 6 . RE– Hans -LanM complexes are generally less stable against citrate than those of Mex -LanM, as expected on the basis of lower affinity (Fig. 1e ), but the difference in stability between the Nd III – Hans -LanM and Dy III – Hans -LanM complexes—expressed as the ratio of citrate concentration required for 50% desorption of each metal ([citrate] 1/2 ), as reported by the fluorescence of Hans -LanM's two Trp residues (Supplementary Fig. 30 )—is twofold greater than for Mex -LanM complexes (Fig. 4a , Supplementary Table 11 and Extended Data Fig. 5 ). Furthermore, the R100K substitution significantly destabilizes Hans -LanM's La III complex against citrate, whereas it only slightly affects the Nd III complex and does not affect the Dy III complex. This result confirms that dimerization selectively stabilizes Hans -LanM's LRE complexes (and especially the La III complex), a factor abrogated by the R100K substitution. Using malonate, a weaker chelator than citrate, Dy III can be readily desorbed from both Hans -LanM and R100K with 10–100 mM chelator without significant Nd III desorption, suggesting conditions for Nd III /Dy III separation (Fig. 4b ). Fig. 4: Leveraging Hans -LanM to separate Nd/Dy in a single-stage process. a , Hans -LanM and the R100K variant exhibit greater differences in Nd versus Dy complex stability than Mex -LanM against desorption by citrate. Mean ± s.e.m. for three independent trials. **Significant difference in [citrate] 1/2 for La III between Hans -LanM and Hans -LanM(R100K) (20 µM protein) shows the impact of dimerization on La III complex stability ( P < 0.01, analysis of variance with Bonferroni post-test). Mex -LanM Nd and Dy data from ref. 6 .
b , Spectrofluorometric titration of Hans -LanM and R100K variant ( λ ex = 280 nm, λ em = 333 nm) at pH 5.0, depicting the malonate-induced desorption of a 2:1 metal–protein complex. Mean ± s.e.m. for three independent trials, except those with R100K, which were single trials of each condition. c , Comparison of distribution factors (pH 5.0, about 0.33 mM each RE, La III –Dy III ) for immobilized Hans -LanM, Hans -LanM(R100K) and Mex -LanM. Each point represents mean ± s.d. for three independent trials. d , Separation of a 95:5 mixture of Nd III /Dy III using immobilized Hans -LanM(R100K) and a desorption scheme of three stepped concentrations of malonate followed by pH 1.5 HCl. One bed volume was 0.7 ml. Although a twofold modulation of RE versus RE selectivity by dimerization may seem small, such differences provide an opportunity to decrease the number of separation stages, increasing the efficiency of a separation process 3 , 12 . Therefore, Hans -LanM and the R100K variant were immobilized through a carboxy-terminal Cys residue on maleimide-functionalized agarose beads, as described previously 6 , and tested for Nd III /Dy III separation. Immobilized Hans -LanM bound only about one equivalent of RE, in contrast to its behaviour in solution and to the two equivalents bound by Mex -LanM 6 and Hans -LanM(R100K) (Supplementary Fig. 31 ). Hans -LanM and R100K exhibited similar separation ability in the La–Gd range—although R100K exhibits greater separation ability in the Gd–Dy range—as determined by the on-column distribution ratios ( D ) of a mixed RE solution at equilibrium (Fig. 4c , Extended Data Table 3 and Supplementary Tables 12 – 14 ). These Nd/Dy separation factors are nearly double ( Hans -LanM) and triple ( Hans -LanM(R100K)) that of Mex -LanM (Extended Data Table 3 ). Immobilized Hans -LanM was loaded to 90% of breakthrough capacity with a model electronic waste mixture of 5% dysprosium and 95% neodymium and, guided by Fig. 4b , eluted with a short, stepwise malonate gradient, followed by complete desorption using pH 1.5 HCl. In a single purification stage, Dy was upgraded from 5% to 83% purity and Nd was recovered at 99.8% purity (both >98% yield; Extended Data Fig. 6 ). This significantly outperformed the comparable Mex -LanM-based process, which achieved only 50% purity in a first separation stage and required a second stage to obtain >98% purity 6 . The immobilized R100K variant performed even better, achieving baseline separation of Dy III and Nd III to >98% purity and >99% yield in a single stage (Fig. 4d ). The R100K variant's better performance was unexpected and may point to the unlikelihood of functional dimers on the column at this immobilization density (see the caption of Extended Data Fig. 6 for a discussion). Thus, despite substantially improved performance versus Mex -LanM enabled by characterization of Hans -LanM's mechanism of dimerization, fully exploiting the dimerization phenomenon on-column may involve, for example, tethering of two monomers on a single polypeptide chain, which is under investigation. Conclusion Biochemical and structural characterization of Hans -LanM's mechanism of metal-sensitive dimerization provides a new, allosteric mechanism for LRE versus HRE selectivity in biology, extending concepts in dimer-dependent metal recognition recently emerging from synthetic lanthanide complexes 11 and engineered transition metal-binding proteins 40 and showing that these principles are hard-wired into nature.
Our work also shows that dimerization strength, and thus metal selectivity, can be rationally modulated. Hans -LanM evolved LRE-selective dimerization at physiological protein concentrations closer to those in our biochemical assays (10–20 µM) than to those on the column (about 3 mM). Therefore, leveraging dimerization in a separation process would be assisted by shifting dimerization sensitivity to the higher concentration regime, such as by tuning hydrophobic interactions at the dimerization interface. Furthermore, our studies establish that LanMs with as little as 33% sequence identity can still be readily identified by sequence searches yet possess useful differences in metal selectivity; further mining of this diversity may reveal yet additional mechanisms for tuning RE separations. Finally, the solvent-excluded coordination spheres of Hans -LanM suggest that it should outperform Mex -LanM in RE/actinide separation 23 , luminescence-based sensing 21 , 26 and stabilization of hydrolysis-prone ions. Continued characterization of the coordination and supramolecular principles of biological f-element recognition will inspire design of ligands with higher RE versus RE selectivities and their implementation in new RE separation processes. Methods General considerations See the Supplementary Methods for details. Bioinformatics methods Protein and genome sequence data The sequence of LanM from M. extorquens AM1 was used as a query to conduct PSI-BLAST searches against the National Center for Biotechnology Information non-redundant protein sequence (nr) and metagenomic protein (env_nr) databases until convergence 41 . The resulting 3,047 protein sequences were then manually curated for those that are less than 200 residues long, have at least one pair of EF hands separated by less than 14 residues, and have 4 EF hands. Signal peptides of LanM sequences were predicted using SignalP (v6.0) 42 , and then removed before further analysis of the sequences. Construction of sequence similarity networks The Enzyme Function Initiative-Enzyme Similarity Tool was used to calculate the similarities between all peptide sequence pairs with an E -value threshold of 1 × 10 −5 (ref. 30 ). The resulting sequence similarity network of 696 nodes and 241,853 edges was then constructed and explored using the organic layout through Cytoscape (v3.9.1) 43 and visualized in R (v4.1.0) 44 . The edge percentage identity threshold was gradually increased from 40% to 90% to yield distinct clusters. Multiple sequence alignment and phylogenetic analysis LanM sequences were aligned using MUSCLE (v5.1) 45 with default parameters. The model used for phylogeny construction was selected using ModelFinder in IQ-TREE (v2.2.0.3) 46 , 47 with --mset set to beast2. Bayesian phylogeny was generated on the basis of these results using BEAST (v2.6.7) 48 . The resulting phylogeny was evaluated using 10 7 generations and discarding a burn-in of 25%, and then visualized using ggtree (v3.2.1) 49 . Expression and purification of Hans -LanM and its R100K variant The gene encoding Hans -LanM, codon optimized for expression in E. coli without its native 23-residue signal peptide (see Supplementary Table 15 ), was obtained from Twist Bioscience and inserted into pET-29b(+) using the restriction sites NdeI/XhoI.
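As a concrete illustration of the curation criteria described under "Protein and genome sequence data" above (length <200 residues, four EF hands, at least one adjacent pair separated by fewer than 14 residues), a filter along the following lines could be applied. This sketch is hypothetical: EF-hand loop positions are assumed to be annotated upstream, whereas the study's curation was performed manually.

```python
# Hypothetical sketch of the LanM curation filter; not the study's code.
# EF-hand loop start positions (0-based) are assumed to be found upstream.
EF_LOOP_LEN = 12  # canonical EF-hand metal-binding loop length

def passes_lanm_filter(seq: str, ef_starts: list[int]) -> bool:
    """Keep sequences <200 residues with four EF hands, of which at least
    one adjacent pair is separated by fewer than 14 intervening residues."""
    if len(seq) >= 200 or len(ef_starts) != 4:
        return False
    # Residues between the end of one 12-residue loop and the start of the next.
    gaps = [b - (a + EF_LOOP_LEN) for a, b in zip(ef_starts, ef_starts[1:])]
    return any(g < 14 for g in gaps)

# Made-up example: a 130-residue sequence with loops at 30, 55, 80 and 105
# (13-residue spacers) passes; calmodulin-like ~25-residue spacers would not.
print(passes_lanm_filter("A" * 130, [30, 55, 80, 105]))  # True
```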
Hans -LanM was overexpressed on a 2-l scale and purified using the established protocol for Mex -LanM 50 , with one modification: after the final SEC step, the protein was concentrated to 5 ml and dialysed against 5 g Chelex 100 in 500 ml of 30 mM HEPES, 100 mM KCl, 5% glycerol, pH 8.4, to remove Ca II and trace metal contaminants. This procedure resulted in approximately 15 ml of 550 μM protein, which was not concentrated further. The final yield was 45 mg of protein per litre of culture. Protein concentrations were calculated using an extinction coefficient of 11,000 M −1 cm −1 , based on the ExPASy ProtParam tool 51 . Hans- LanM(R100K) was purified using the same procedure, yielding 30 mg of protein per litre of culture. Circular dichroism spectroscopy Circular dichroism spectra of Hans -LanM were collected as described previously 32 , at 15 µM (monomer concentration) in Chelex 100-treated buffer A (20 mM acetate, 100 mM KCl, pH 5.0), unless otherwise indicated. Buffered metal solutions were prepared as described previously 4 , 23 , 25 , 32 . Additional details are available in the Supplementary Information . Preparation of protein samples for SEC–MALS and small-angle X-ray scattering (SAXS) Samples of wild-type Hans -LanM were prepared by adding 3.0 equivalents of metal slowly (0.5 equivalent at a time followed by mixing) to 1.0 ml of concentrated stock of Hans -LanM (550 μM). At these protein concentrations, slight precipitation was observed for LRE samples (for example, La III ) whereas significant precipitation was observed for HRE samples (for example, Dy III ). Samples were centrifuged at 12,000 g for 2 min to remove precipitate and then purified using gel filtration chromatography (HiLoad 10/300 Superdex 75 pg, 1-ml loop, 0.8 ml min −1 ) in buffer B (30 mM MOPS, 100 mM KCl, 5% glycerol, pH 7.0). Hans -LanM-containing peaks (ranging from 12.0 to 15.0 ml elution volume) were collected to avoid the high-MW aggregate peaks, yielding 2.0 ml of metalated Hans- LanM ranging between 114 μM and 128 μM (1.37–1.53 mg ml −1 ). Samples of Hans- LanM(R100K) do not form high-MW species or precipitate on metal addition. To prepare samples of this protein, a 500 μM protein solution was diluted to 250 μM (3 mg ml −1 ) in buffer B containing 0.75 mM of a specific RECl 3 , yielding a final solution of 3 mg ml −1 protein, with a 1:3 metal ratio, which was analysed directly by SEC–MALS. For calcium conditions, proteins were diluted to 250 μM (3 mg ml −1 ), 5 mM CaCl 2 was added, and the samples were incubated at room temperature for 1 h. The buffer used for SEC–MALS was the same as above, except that it also contained 5 mM CaCl 2 . In-line SEC and MALS SEC–MALS experiments were conducted using an Agilent 1260 Infinity II HPLC system equipped with an autosampler and fraction collector, and the Wyatt SEC hydrophilic column had 5-µm silica beads, a pore size of 100 Å and dimensions of 7.8 × 300 mm. Wyatt Technology DAWN MALS and Wyatt Optilab refractive index detectors were used for analysing the molar mass of peaks that eluted from the column. The SEC–MALS system was equilibrated for 5 h with buffer B. The system was calibrated with bovine serum albumin (monomer MW: 66 kDa) in the same buffer and normalization and alignment of the MALS and refractive index detectors were carried out. A volume of 15 µl of each sample was injected at a flow rate of 0.8 ml min −1 with a chromatogram run time of 25 min. Data were analysed using the ASTRA software (Wyatt). 
When small-angle X-ray scattering (SAXS) analysis was desired, a second run was carried out with 150 µl protein (about 4 mg ml −1 ) injected, and 200-µl fractions of the main peak were collected. BioSAXS data were subsequently collected in triplicate. Isothermal titration calorimetry The dissociation constants for the dimers of apo, La III -bound and Dy III -bound Hans -LanM were determined by dilutive additions of a concentrated protein stock, followed using isothermal titration calorimetry on a TA Instruments Low-volume Auto Affinity isothermal titration calorimeter. The syringe contained 300 μM protein (apo or 2 equivalents of Dy III bound) or 150 µM or 540 µM (2 equivalents of La III bound), and the cell contained 185 μl of a matched buffer (30 mM MOPS, 100 mM KCl, pH 7.0). Titrations were carried out at 30 °C. Titrations consisted of a first 0.2-μl injection followed by 17 × 2-μl injections, unless otherwise noted, with stirring at 125 r.p.m. and 180 s equilibration time between injections. The data were fitted using NanoAnalyze using the Dimer Dissociation model, yielding the dimer dissociation constant ( K dimer ), enthalpy of dissociation (Δ H ) and entropy of dissociation (Δ S ). All parameters are shown in Extended Data Table 2 . K dimer is defined as the dissociation constant for the equilibrium D \(\rightleftharpoons \) 2 M , such that K dimer = [ M ] 2 /[ D ], in which [ D ] is the concentration of the dimer and [ M ] is the concentration of the monomer, and the total protein concentration [ P ] (as measured using the extinction coefficient for the monomer) is given by [ P ] = [ M ] + 2[ D ]. Therefore, K dimer = 2[ M ] 2 /([ P ] − [ M ]) or $$2{[M]}^{2}+{K}_{{\rm{dimer}}}[M]-{K}_{{\rm{dimer}}}[P]=0$$ (1) This equation can be used to estimate monomer and dimer concentrations during SEC–MALS experiments, using K dimer values calculated from isothermal titration calorimetry experiments and [ P ] from the SEC–MALS trace. This equation can also be used to estimate the maximum possible K dimer for La III -bound protein, given the SEC–MALS data. SAXS SAXS data were collected on RE-complexed Hans -LanM, at protein concentrations given in Supplementary Table 5 using equipment and under conditions described in the Supplementary Methods . The forward scattering I (0) and the radius of gyration ( R g ) are listed in Supplementary Table 5 and were calculated using the Guinier approximation, which assumes that at very small angles ( q < 1.3/ R g ) the intensity is approximated as I ( q ) = I (0)exp[−1/3( qR g ) 2 ]. In the La III -, Nd III - and Dy III -bound conditions, this agrees with the calculated size of 17.9 Å for the crystallographic dimer. The molecular mass was estimated using a comparison with SAXS data of a bovine serum albumin standard. The data files were analysed for Guinier R g , maximum particle dimension ( D max ), Guinier fits, Kratky plots and pair-distance distribution function using the ATSAS software 52 . GNOM, within ATSAS, was used to calculate the pair-distance distribution function P ( r ), from which R g and D max were determined. Solvent envelopes were computed using DENSS 36 . The theoretical scattering profiles of the constructed models were calculated and fitted to experimental scattering data using CRYSOL 53 . OLIGOMER 54 was used to estimate the monomer and dimer fractions. 
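To make equation (1) concrete: solving it for the monomer concentration gives the positive root of a quadratic, from which monomer and dimer fractions can be estimated at any total protein concentration. The sketch below uses the ITC-derived K dimer values quoted above at an illustrative protein concentration; it is a worked example, not the analysis pipeline.

```python
# Worked example of equation (1): 2[M]^2 + K_dimer*[M] - K_dimer*[P] = 0,
# solved for the monomer concentration [M] (positive root of the quadratic).
import math

def monomer_fraction(k_dimer: float, p_total: float) -> float:
    """Fraction of protein chains that are monomeric, for dimer dissociation
    constant k_dimer and total (monomer-equivalent) concentration p_total."""
    m = (-k_dimer + math.sqrt(k_dimer**2 + 8 * k_dimer * p_total)) / 4
    return m / p_total

# ITC-derived K_dimer values (apo: 117 uM; Dy(III)-bound: 60 uM) at an
# illustrative 100 uM monomer-equivalent protein concentration.
for label, kd in [("apo", 117e-6), ("Dy(III)-bound", 60e-6)]:
    print(f"{label}: {monomer_fraction(kd, 100e-6):.0%} monomer")
```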
Preparation of protein samples for crystallography To Hans- LanM (2 ml, 1.16 mM, buffer B), 3.0 equivalents of LaCl 3 or DyCl 3 were added slowly, 0.5 equivalents at a time with mixing, to minimize precipitation. Precipitate was removed by centrifugation at 12,000 g for 2 min. Any soluble aggregates were removed and the protein was exchanged into buffer lacking glycerol (buffer C: 30 mM MOPS, 50 mM KCl, pH 7.0) by gel filtration chromatography (HiLoad 16/600 Superdex 75 pg, 1-ml loop, 0.75 ml min −1 ). The peak in the 70–85 ml range was pooled, and the fractions were concentrated to about 500 μl with a final concentration of about 1.3 mM. Mex -LanM was purified as described previously 50 and was exchanged into buffer C before crystallization. The protein was loaded with 3.5 equivalents of Nd III (NdCl 3 ). General crystallographic methods Diffraction datasets were collected at the Life Sciences Collaborative Access Team ID-G beamline and processed with the HKL2000 package 55 . In all structures, phase information was obtained with phenix.autosol 56 , 57 through the single-wavelength anomalous diffraction method, in which lanthanide ions identified with HySS 58 were used as the anomalous scatterers. Initial models were generated with phenix.autobuild 59 with subsequent rounds of manual modification and refinement in Coot 60 and phenix.refine 61 . In the final stages of model refinement, anisotropic displacement parameters and occupancies were refined for all lanthanide sites 62 . Model validation was carried out with the Molprobity server 63 . Figures were prepared using the PyMOL molecular graphics software package (Schrödinger, LLC). La-bound Hans -LanM structure determination Crystals were obtained by using the sitting drop vapour diffusion method, in which 1 μl of protein solution (15 mg ml −1 ) was mixed with 1 μl 10 mM tri-sodium citrate, pH 7.0, and 27% (w/v) PEG 6000 in a 24-well plate from Hampton Research (catalogue number HR1-002) at room temperature. Thin plate-shaped crystals appeared in 3 days. Crystals suitable for data collection were mounted on rayon loops, soaked briefly in a cryoprotectant solution consisting of the well solution supplemented with 10% ethylene glycol, and flash-frozen in liquid N 2 . La III -loaded Hans- LanM crystallized in the P 2 1 space group ( β = 90.024°) with four monomers in the asymmetric unit. The initial figure of merit and Bayesian correlation coefficient were 0.563 and 0.56, respectively 64 . The final model consists of residues 24–133 in each chain, 12 La III ions (3 per chain in the first, second and third EF hands), 4 Na I ions 65 (1 per chain in the fourth EF hand), 273 water molecules and 2 molecules of citrate. Of the residues modelled, 100% are in allowed or preferred regions as indicated by Ramachandran statistical analysis. Dy-bound Hans- LanM structure determination Crystals were obtained by using the sitting drop vapour diffusion method, in which 1 μl of protein solution (15 mg ml −1 ) was mixed with 1 μl of 250 μM tri-sodium citrate, pH 7.0, and 27% (w/v) PEG 6000 in a 24-well plate from Hampton Research at room temperature. Thin plate-shaped crystals appeared within 1 month. Crystals suitable for data collection were mounted on rayon loops, soaked briefly in a cryoprotectant solution consisting of the well solution supplemented with perfluoropolyether cryo oil from Hampton Research (catalogue number HR2-814) and flash-frozen in liquid N 2 . 
Dy III -loaded Hans- LanM crystallized in the P 2 1 space group ( β = 93.567°) with four monomers in the asymmetric unit. The initial figure of merit and Bayesian correlation coefficient were 0.748 and 0.58, respectively 64 . The final model consists of residues 24–133 in each chain (except for chain D, for which residues 34–38 cannot be modelled), 14 Dy III ions (4 in chains A and D, 3 in the second, third and fourth EF hands of chains B and C) and 656 water molecules. Of the residues modelled, 100% are in allowed or preferred regions as indicated by Ramachandran statistical analysis. Collection of anomalous datasets is described in the Supplementary Methods . Nd-bound Mex -LanM structure determination Crystals were obtained by using the sitting drop vapor diffusion method, in which 1 μl of protein solution (35 mg ml −1 ) was mixed with 1 μl of 0.1 M ammonium sulfate, 0.1 M Tris pH 7.5, and 20% (w/v) PEG 1500 in a 24-well plate from Hampton Research at room temperature. Thin plate-shaped crystals appeared within 6 months. Crystals suitable for data collection were mounted on rayon loops, soaked briefly in a cryoprotectant solution consisting of the well solution supplemented with perfluoropolyether cryo oil from Hampton Research and flash-frozen in liquid N 2 . Nd III -loaded Mex- LanM crystallized in the P 2 1 2 1 2 1 space group with one monomer in the asymmetric unit. The initial figure of merit and Bayesian correlation coefficient were 0.799 and 0.56, respectively 64 . The final model consists of residues 29–133, 4 Nd III ions and 171 water molecules. Of the residues modelled, 100% are in allowed or preferred regions as indicated by Ramachandran statistical analysis. Fluorescence spectroscopy All fluorescence data were collected using a Fluorolog-QM fluorometer in configuration 75-21-C (Horiba Scientific) equipped with a double monochromator on the excitation arm and single monochromator on the emission arm. A 75-W xenon lamp was used as the light source for steady-state measurements and a pulsed xenon lamp was used for time-resolved measurements. Ten-millimetre quartz spectrofluorometry cuvettes (Starna Cells, 18F-Q-10-GL14-S) were used to collect data at 90° relative to the excitation path. Fluorescence lifetime measurements were carried out using established methods 26 , 66 . In short, a solution of Hans- LanM with 2 equivalents of Eu III added, totalling 4.5 ml, was prepared in 100% H 2 O matrix (buffer: 25 mM HEPES, 75 mM KCl, pH 7.0). Half of this initial protein mixture (2.25 ml) was retained for future use and the remainder was exchanged to D 2 O through lyophilization to remove H 2 O and resuspension in 99.9% D 2 O two times. The resulting protein solutions (in 100% H 2 O and about 99% D 2 O) were mixed in varying ratios to produce D 2 O contents of 0%, 25%, 50% and 75%. The protein concentration was 20 µM. For each sample, the luminescence decay time constant ( τ ) was measured ( λ ex = 394 nm, λ em = 615 nm) with 5,000 shots over a time span of 2,500 μs. τ was determined using the FelixFL Powerfit-10 software (Horiba Scientific) using a single exponential fit. 1/ τ was plotted against percentage composition of D 2 O, and the slope of the resulting line ( m ) was determined. The q value was determined using the following equation from ref. 
67 : $$q=1.11[{\tau }_{{\rm{H}}2{\rm{O}}}^{-1}-{\tau }_{{\rm{D}}2{\rm{O}}}^{-1}-0.31+0.45{n}_{{\rm{OH}}}+0.99{n}_{{\rm{NH}}}+0.075{n}_{{\rm{O}}\mbox{--}{\rm{CNH}}}]$$ (2) in which τ −1 H2O and τ −1 D2O are the inverses of the time constants in 100% H 2 O and D 2 O, respectively (the latter extrapolated using the equation of the fitted line), in ms –1 ; and n OH = 0, n NH = 0, and n O–CNH = 1 (resulting from the metal-coordinated Asn residues), on the basis of the Hans -LanM crystal structures. This equation simplifies to: $$q=1.11[-m-0.31+0.075]$$ (3) For fluorescence competition experiments, a solution of 20 μM Hans -LanM or the R100K variant was prepared in buffer A (pH 5.0) with two equivalents of metal (40 μM). Fluorescence emission spectra were collected with settings: λ ex = 278 nm, λ em 300–420 nm, integration time = 0.5 s, step size = 1 nm. Titrations were carried out through addition of at least 0.6 μl of titrant (from concentrated stock solutions of 10 mM–1 M citrate or malonate, pH 5.0). Spectra were corrected for dilution. Each experiment was carried out in triplicate. Purification of Cys-containing variants Hans -LanM(R100K)-Cys was expressed and purified as described for Mex- LanM-Cys (ref. 6 ), with a final yield of 50 mg of protein per litre of culture. For Hans -LanM-Cys, the protein was purified by incorporating the same modifications from above, minus the dialysis step, to our previously described Mex -LanM-Cys purification, except that the SEC step was run using a reducing buffer (30 mM MOPS, 100 mM KCl, 5 mM TCEP, pH 7.0) with 5 mM EDTA, and frozen under liquid N 2 before immobilization. Maleimide functionalization of agarose beads The maleimide functionalization of amine-functionalized agarose beads was described previously 6 . See the Supplementary Information for complete details. Immobilization of Hans -LanM and the R100K variant Hans -LanM(R100K) immobilization was carried out using a thiol-maleimide conjugation reaction as described previously 6 . In the case of Hans -LanM, a final protein concentration of about 0.4 mM (8 ml) was combined with 1 ml of maleimide–microbeads and the conjugation reaction was carried out for 16 h at room temperature. Unconjugated Hans -LanM was removed by washing with coupling buffer, and the Hans- LanM microbeads were stored in coupling buffer for subsequent tests. To quantify Hans -LanM immobilization yield, Pierce BCA Protein Assay (ThermoFisher Scientific) was used to determine the LanM concentration in the reaction solution before and after the conjugation reaction as previously described. Batch experiment to determine separation factors LanM-immobilized microbeads were washed with deionized water. Feed solution (5 ml, equimolar REs La–Dy, 3 mM total, pH 5.0) was added to 1 ml microbeads and incubated for 2 h. The liquid at equilibrium was collected and RE concentrations were determined by inductively coupled plasma mass spectrometry as [ M ] ad . Then 4 ml of 0.1 M HCl was used to desorb REs from the microbeads and concentrations were measured by inductively coupled plasma mass spectrometry as [ M ] de . The RE distribution factor ( D ) between the LanM phase and the solution phase was calculated as: $$D=\frac{{[M]}_{{\rm{LanM}}}}{{[M]}_{{\rm{Liquid}}}}$$ (4) in which [ M ] LanM and [ M ] Liquid are the molar concentrations of each metal ion in the LanM phase and the solution phase at equilibrium, respectively. 
To account for the free liquid absorbed on the agarose microbeads, the following correction was applied: [ M ] Liquid = [ M ] ad ; [ M ] LanM = (4 × [ M ] de – [ M ] ad )/4. The separation factor is defined as: $${\mathrm{SF}}=\frac{D_{\mathrm{RE}1}}{D_{\mathrm{RE}2}}$$ (5) in which D RE1 and D RE2 are the distribution factors of RE1 and RE2, respectively. Breakthrough column experiments Columns were filled and run, and metal concentrations analysed, as described in our previous work 6 ; details are available in the Supplementary Methods . For the RE pair separation experiments, the metal ion purity and yield are defined as: $${\mathrm{Purity}}_{\mathrm{RE}1}=\frac{C_{\mathrm{RE}1}}{C_{\mathrm{RE}1}+C_{\mathrm{RE}2}}$$ (6) $${\mathrm{Yield}}_{\mathrm{RE}1}=\frac{\mathrm{RE}1\ \mathrm{recovered}}{\mathrm{Total}\ \mathrm{RE}1\ \mathrm{loaded}}$$ (7) in which C RE1 and C RE2 are the molar concentrations of RE1 and RE2, respectively. Reporting summary Further information on research design is available in the Nature Portfolio Reporting Summary linked to this article. Data availability All data are available in the main text or the Supplementary Information . Coordinates have been deposited in the Protein Data Bank with accession codes 8DQ2 (La III – Hans -LanM), 8FNR (Dy III – Hans -LanM) and 8FNS (Nd III – Mex -LanM). Source data are provided with this paper.
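The batch and column metrics defined in equations (4)–(7) reduce to simple arithmetic. A hypothetical worked example follows; all concentrations are invented for illustration, with the free-liquid correction taken from the batch protocol above.

```python
# Hypothetical worked example of equations (4)-(7); all values invented.
def distribution_factor(m_ad: float, m_de: float) -> float:
    """Equation (4) with the free-liquid correction: [M]_Liquid = [M]_ad and
    [M]_LanM = (4*[M]_de - [M]_ad)/4, for the 4-ml desorption volume."""
    return ((4 * m_de - m_ad) / 4) / m_ad

# Invented equilibrium ([M]_ad) and desorption ([M]_de) concentrations (mM),
# with Nd retained more strongly than Dy, as expected for a LRE-selective LanM.
d_nd = distribution_factor(m_ad=0.10, m_de=0.50)  # ~4.75
d_dy = distribution_factor(m_ad=0.30, m_de=0.15)  # ~0.25
sf_nd_dy = d_nd / d_dy                            # equation (5), ~19

# Equations (6)-(7) for an invented collected Nd fraction.
c_nd, c_dy = 0.998, 0.002          # mM in the collected fraction
purity_nd = c_nd / (c_nd + c_dy)   # equation (6)
yield_nd = 0.99                    # Nd recovered / total Nd loaded, eq. (7)
print(f"D_Nd={d_nd:.2f}, D_Dy={d_dy:.2f}, SF={sf_nd_dy:.1f}, "
      f"Nd purity={purity_nd:.1%}, Nd yield={yield_nd:.0%}")
```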
Rare earth elements, like neodymium and dysprosium, are a critical component of almost all modern technologies, from smartphones to hard drives, but they are notoriously hard to separate from the Earth's crust and from one another. Penn State scientists have discovered a new mechanism by which bacteria can select between different rare earth elements, using the ability of a bacterial protein to bind to another unit of itself, or "dimerize," when it is bound to certain rare earths, but to remain a single unit, or "monomer," when bound to others. By figuring out how this molecular handshake works at the atomic level, the researchers have found a way to separate these similar metals from one another quickly, efficiently, and under normal room temperature conditions. This strategy could lead to more efficient, greener mining and recycling practices for the entire tech sector, the researchers state. "Biology manages to differentiate rare earths from all the other metals out there—and now, we can see how it even differentiates between the rare earths it finds useful and the ones it doesn't," said Joseph Cotruvo Jr., associate professor of chemistry at Penn State and lead author on a paper about the discovery published today in the journal Nature. "We're showing how we can adapt these approaches for rare earth recovery and separation." Rare earth elements, which include the lanthanide metals, are in fact relatively abundant, Cotruvo explained, but they are what mineralogists call "dispersed," meaning they're mostly scattered throughout the planet in low concentrations. "If you can harvest rare earths from devices that we already have, then we may not be so reliant on mining it in the first place," Cotruvo said. However, he added that regardless of source, the challenge of separating one rare earth from another to get a pure substance remains. "Whether you are mining the metals from rock or from devices, you are still going to need to perform the separation. Our method, in theory, is applicable for any way in which rare earths are harvested," he said. All the same—and completely different In simple terms, rare earths are 15 elements on the periodic table—the lanthanides, with atomic numbers 57 to 71—and two other elements with similar properties that are often grouped with them. The metals behave similarly chemically, have similar sizes, and, for those reasons, they often are found together in the Earth's crust. However, each one has distinct applications in technologies. Conventional rare earth separation practices require using large amounts of toxic chemicals like kerosene and phosphonates, similar to chemicals that are commonly used in insecticides, herbicides and flame retardants, Cotruvo explained. The separation process requires dozens or even hundreds of steps, using these highly toxic chemicals, to achieve high-purity individual rare earth oxides. "There is getting them out of the rock, which is one part of the problem, but one for which many solutions exist," Cotruvo said.
"But you run into a second problem once they are out, because you need to separate multiple rare earths from one another. This is the biggest and most interesting challenge, discriminating between the individual rare earths, because they are so alike. We've taken a natural protein, which we call lanmodulin or LanM, and engineered it to do just that." Learning from nature Cotruvo and his lab turned to nature to find an alternative to the conventional solvent-based separation process, because biology has already been harvesting and harnessing the power of rare earths for millennia, especially in a class of bacteria called "methylotrophs" that often are found on plant leaves and in soil and water and play an important role in how carbon moves through the environment. Six years ago, the lab isolated lanmodulin from one of these bacteria, and showed that it was unmatched—over 100 million times better—in its ability to bind lanthanides over common metals like calcium. Through subsequent work they showed that it was able to purify rare earths as a group from dozens of other metals in mixtures that were too complex for traditional rare earth extraction methods. However, the protein was less good at discriminating between the individual rare earths. Cotruvo explained that for the new study detailed in Nature, the team identified hundreds of other natural proteins that looked roughly like the first lanmodulin but homed in on one that was different enough—70% different—that they suspected it would have some distinct properties. This protein is found naturally in a bacterium (Hansschlegelia quercus) isolated from English oak buds. The researchers found that the lanmodulin from this bacterium exhibited strong capabilities to differentiate between rare earths. Their studies indicated that this differentiation came from an ability of the protein to dimerize and perform a kind of handshake. When the protein binds one of the lighter lanthanides, like neodymium, the handshake (dimer) is strong. By contrast, when the protein binds to a heavier lanthanide, like dysprosium, the handshake is much weaker, such that the protein favors the monomer form. "This was surprising because these metals are very similar in size," Cotruvo said. "This protein has the ability to differentiate at a scale that is unimaginable to most of us—a few trillionths of a meter, a difference that is less than a tenth of the diameter of an atom." Joseph Cotruvo Jr., associate professor of chemistry at Penn State, is lead author on a paper about the discovery of a new mechanism by which bacteria can select between different rare earth elements, using the ability of a bacterial protein to bind to another unit of itself, or “dimerize,” when it is bound to certain rare earths, but prefer to remain a single unit, or “monomer,” when bound to others. Credit: Patrick Mansell/Penn State Fine-tuning rare earth separations To visualize the process at such a small scale, the researchers teamed up with Amie Boal, Penn State professor of chemistry, biochemistry and molecular biology, who is a co-author on the paper. Boal's lab specializes in a technique called X-ray crystallography, which allows for high-resolution molecular imaging. The researchers determined that the protein's ability to dimerize dependent on the lanthanide to which it was bound came down to a single amino acid—1% of the whole protein—that occupied a different position with lanthanum (which, like neodymium, is a light lanthanide) than with dysprosium. 
Because this amino acid is part of a network of interconnected amino acids at the interface with the other monomer, this shift altered how the two protein units interacted. When an amino acid that is a key player in this network was replaced, the protein was much less sensitive to rare earth identity and size. The findings revealed a new, natural principle for fine-tuning rare earth separations, based on propagation of minuscule differences at the rare earth binding site to the dimer interface. Using this knowledge, their collaborators at Lawrence Livermore National Laboratory showed that the protein could be tethered to small beads in a column, and that it could separate the most important components of permanent magnets, neodymium and dysprosium, in a single step, at room temperature and without any organic solvents. "While we are by no means the first scientists to recognize that metal-sensitive dimerization could be a way of separating very similar metals, mostly with synthetic molecules," Cotruvo said, "this is the first time that this phenomenon has been observed in nature with the lanthanides. This is basic science with applied outcomes. We're revealing what nature is doing and it's teaching us what we can do better as chemists." Cotruvo believes that the concept of binding rare earths at a molecular interface, such that dimerization is dependent on the exact size of the metal ion, can be a powerful approach for accomplishing challenging separations. "This is the tip of the iceberg," he said. "With further optimization of this phenomenon, the toughest problem of all—efficient separation of rare earths that are right next to each other on the periodic table—may be within reach." A patent application was filed by Penn State based on this work and the team is currently scaling up operations, fine-tuning and streamlining the protein with the goal of commercializing the process. Other Penn State co-authors are Joseph Mattocks, Jonathan Jung, Chi-Yun Lin, Neela Yennawar, Emily Featherston and Timothy Hamilton. Ziye Dong, Christina Kang-Yun and Dan Park of the Lawrence Livermore National Laboratory also co-authored the paper.
10.1038/s41586-023-05945-5
Earth
Are marine organisms evolving to protect their young in response to ocean acidification?
To brood or not to brood: Are marine invertebrates that protect their offspring more resilient to ocean acidification? Scientific Reports 5, Article number: 12009 DOI: 10.1038/srep12009 Journal information: Scientific Reports
http://dx.doi.org/10.1038/srep12009
https://phys.org/news/2015-08-marine-evolving-young-response-ocean.html
Abstract Anthropogenic atmospheric carbon dioxide (CO 2 ) is being absorbed by seawater, resulting in increasingly acidic oceans, a process known as ocean acidification (OA). OA is thought to have largely deleterious effects on marine invertebrates, primarily impacting early life stages and, consequently, their recruitment and species' survival. Most research in this field has been limited to short-term, single-species and single-life-stage studies, making it difficult to determine which taxa will be evolutionarily successful under OA conditions. We circumvent these limitations by relating the dominance and distribution of the known polychaete worm species living in a naturally acidic seawater vent system to their life history strategies. These data are coupled with breeding experiments, showing that all dominant species in this natural system exhibit parental care. Our results provide evidence supporting the idea that long-term survival of marine species in acidic conditions is related to life history strategies where eggs are kept in protected maternal environments (brooders) or where larvae have no free swimming phases (direct developers). Our findings are the first to formally validate the hypothesis that species with life history strategies linked to parental care are more protected in an acidifying ocean than their relatives employing broadcast spawning and pelagic larval development. Introduction We focused on the unique coastal vent ecosystem of Ischia island (Italy), where underwater CO 2 volcanic emissions interact with a seagrass and rocky reef habitat 1 . CO 2 bubbling from the seafloor drives the seawater pH down to values equal to or lower than business-as-usual IPCC projections for 2100 (pH 6.5–7.8 1 , 2 ), effectively creating a "chemical island" approximately 2,000 years old 3 . Our biological focus is on polychaete worms, as they are an abundant taxonomic group in the vents 1 . Their consistent vent-dominance and the trends seen in their seasonal abundances indicate the possibility of multi- and/or transgenerational exposure 4 , 5 , 6 , 7 . Furthermore, the group exhibits highly diverse reproductive and developmental modes 8 . We related the type of early life history strategies employed by species living in the vents with their known distribution and abundances 1 , 5 , 6 . We found twelve of the thirteen species with known reproductive characteristics colonizing high CO 2 vent areas to be brooding or direct developers (eggs kept in protected maternal environment/no free-swimming larval phases). Ten had higher abundances in the venting areas than in nearby ambient CO 2 areas ( Table 1 ). The exception was one species, morphologically appearing to be Platynereis dumerilii (Audouin & Milne-Edwards, 1834), the only broadcast spawning pelagic developer with higher abundances in the vents 5 , 7 , 9 . Table 1 Early life-history strategies of all polychaete species present in the lowest pH vent site. The observation that brooding polychaete species dominate the CO 2 vent areas, along with evidence for physiological and genetic adaptation in vent-inhabiting Platynereis dumerilii 6 , prompted further examination of this particular species. To determine whether these adaptations have led to reproductive isolation, we attempted to crossbreed Platynereis individuals collected from within the vent sites with those collected from control sites outside the vent sites, in the laboratory.
A male from the control population in the initial stages of transforming into a pelagic, swimming reproductive P. dumerilii was introduced into a container with an immature adult Platynereis sp. from the vent population. Within two hours, the male prompted this vent-originating worm to develop large yellow eggs, likely a pheromone-induced response between the two sexes 10 . These eggs filled the female body cavity and were five times larger than the average P. dumerilii eggs. The female proceeded to build a complex tube structure consisting of inner microtubes where she deposited large, fertilized eggs that immediately stopped developing ( Fig. 1 ). Figure 1 a . Initial cross-breeding activity with (top) Platynereis dumerilii male transforming into a pelagic, swimming epitoke full of sperm and (below) the Platynereis massiliensis female developing large yellow yolky eggs (250 μm in diameter); b . Female inside tube laying and moving 74 eggs into inner brood tubes after 12 h of pairing with the male; c . Close-up of inner-parental mucus tubes holding large yellow eggs. Scale: 0.5 mm. We matched the reproductive description of the female’s brooding behaviour to the parents’ genetic identities using a COI barcoding approach (Supplementary Methods). While the COI sequence of the pelagic form was only 0.7% different from the published sequence of P. dumerilii , the brooding form’s sequence was 26% different, indicating that it represents a separate species. Together with these behavioural observations, the genetic results confirm that the female found in the vents is actually Platynereis massiliensis (Moquin-Tandon, 1869), a sibling species of P. dumerilii 11 . These two sibling species are morphologically indistinguishable as immature adults but are easily discernible upon maturation, having evolved opposing reproduction modes with morphologically different gametes 11 , 12 . Platynereis massiliensis are protandric sequential hermaphrodites that first mature as males and fertilize a female partner’s eggs laid inside a brood tube. The female then dies and the male continues ventilating and protecting the developing embryos inside the tube as they develop into young worms 11 , after which the father changes sex and the process is repeated in the next reproductive event. Platynereis dumerilii have separate sexes, and maturation invokes morphological changes allowing the benthic forms to leave their tubes and swarm in a single spawning event in the surface water. Adults swim to the surface, in synchronization with the full moon, in a pheromone-induced search for the opposite sex 11 , 13 . They then release their gametes and die. Fertilization occurs in the sea water and the larvae go through a subsequent six-week pelagic phase 10 . Our COI analysis provides the first genetic record for P. massiliensis , as well as a genetic template to match previously sequenced individuals from both inside and outside the venting areas to their correct species identity. We did this using published sequence data from Calosi et al. (2013) for P. dumerilii . Results suggest that the vent site is dominated by brooding P. massiliensis (10:1 with P. dumerilii ) and the control site is dominated by broadcasting P. dumerilii (15:1 with P. massiliensis ), these differences being significant (χ 2 = 9.808, p < 0.005). Additionally, we observed several mating pairs successfully producing juveniles inside their maternal tubes from P. massiliensis parents collected exclusively from the vent site. 
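To make the dominance comparison concrete, the following minimal sketch runs the same kind of chi-square test of independence with scipy. The raw per-site counts are not reported in the text above, so the table below uses purely illustrative numbers chosen only to match the 10:1 and 15:1 ratios; it will not reproduce the exact χ 2 value reported.

```python
from scipy.stats import chi2_contingency

# Hypothetical counts consistent with the reported dominance ratios:
# rows = species (P. massiliensis, P. dumerilii); columns = sites (vent, control)
counts = [[20, 1],   # P. massiliensis dominates the vent site (~10:1)
          [2, 15]]   # P. dumerilii dominates the control site (~15:1)

# For 2x2 tables, scipy applies Yates' continuity correction by default.
chi2, p, dof, expected = chi2_contingency(counts)
print(f"chi2 = {chi2:.3f}, p = {p:.4f}, dof = {dof}")
```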
It is not known what prompted speciation in these two species 11 . Existing ecological knowledge suggests that they have comparable sizes, habitats and functions and as such are overcoming similar mechanical, chemical and physical constraints 11 . Additionally, the known species ranges appear to overlap on a large spatial scale: ripe females and adult males of P. massiliensis have been found in the Gulf of Naples (Italy) 12 , Banyuls-Sur-Mer (France) 11 , on the Isle of Man coast (Irish Sea) 14 , in a Denmark fjord 15 and in Norfolk (UK) 16 . Platynereis dumerilii is also found in these localities; however, we are cautious about comparing the species’ global distributions from current records, as observations are limited and not confirmed on a molecular basis 17 . Speciation may have been sympatric in the past (occurring in the same habitat), but the distribution of the brooding P. massiliensis in the localized venting area of this study clearly shows how this species favours this high CO 2 habitat, whereas the sibling broadcasting P. dumerilii species does not. This pattern can be interpreted as a clear example of pH-driven brooding preference 18 . Using the local distribution information of these congeners, we revisit the synthesis of life history strategies for the complete vent polychaete community and affirm that each dominant species exhibits parental care by a form of brooding or direct development ( Table 1 ). The most parsimonious mechanism driving this trend appears to be the direct physical protection of early life stages from the water conditions 19 , 20 , 21 . Alternatively, or in part, this trend may be attributed to (1) an evolutionarily based selection for phenotypes tolerant to low pH among brooding species, (2) selection of traits associated with brooding, or (3) selection through some other vent characteristics besides low pH conditions. The possibility that these CO 2 -dominating brooding species have selected phenotypes tolerant to low pH is supported by the general ability of polychaetes to rapidly adapt to chronically disturbed habitats 8 , 22 . Furthermore, the traits commonly associated with brooders, such as short larval dispersal, continuous reproduction (in part through hermaphroditism) and small adult sizes with smaller broods per reproductive event, support the respective populations’ survival by continuously selecting for fitness to a specific habitat 8 , 23 , 24 . Low pH habitat-based changes may be indirect factors influencing brooding preference as well 4 , 9 , 25 . For instance, habitat complexity and increased algal growth may cause a loss of brooder predators or competitors that are less phenotypically plastic to CO 2 stress, or microbial shifts may deter pelagic larval recruitment 26 . Alternatively, the algae may offer brooding interstitial species a greater availability of sheltered refugia and/or better food resources 27 , 28 , 29 , 30 . The thirteen polychaete species in this study live in the low pH vent habitat and have many of these traits ( Table 1 ), but further investigation of OA-mediated biological and ecological effects on species’ long-term OA tolerance is needed to distinguish the exact mechanisms responsible for low pH brooding dominance 31 , 32 . 
These possibilities show that brooding and/or direct development may not be solely contingent on water chemistry; however, the dominant species in this open ‘chemical island’ CO 2 vent habitat do appear to be adapted to OA conditions in their reproductive and developmental modes. To broaden and further corroborate our evidence on a relationship between species life history strategy and tolerance to an important global change driver such as OA, we found examples in the literature from other polychaete worms, starfish, cowries and oysters, all following parallel adaptive pathways under climate and environmental-related stressors ( Table 2 ). These species have been found inhabiting areas undergoing rapid environmental alterations and appear to have evolved direct development from broadcasting ancestors to enable them to counteract the detrimental effects of continuous disturbances. Many of these examples show species complexes in which broadcast spawning ancestors retain sensitivity to high CO 2 /low pH and other environmental extremes, marked by their absence in disturbed sites, while species showing forms of parental care persist in the disturbed area 33 , 34 . Table 2 Review of marine taxa exhibiting climate-related tolerance and greater parental care compared to their congeneric counterparts. This multispecies comparative method substantiates the idea that today’s organisms exhibiting brooding or direct development may be more successful in responding to future OA than their pelagic broadcast spawning counterparts. One important consideration in this proposed response hinges on the dispersal capacity and extinction risk of brooders in the future ocean. Brooding dispersal capacity is theoretically limited by the low mobility of the early developmental phases, but existing evidence counter-intuitively indicates high dispersal ability in many brooder species 35 , 36 . The “Rockall paradox” describes such situations, where isolated islands are devoid of any pelagic broadcast spawning invertebrates. In these cases, it is noted that pelagic spawning parents assume a risk that their offspring will find suitable habitats for survival and reproduction. This strategy potentially presents difficulties, as pelagic larvae may not be able to find, settle and reproduce in distant places 35 . The possible link of these isolated islands to the “chemical island” of Ischia’s vents may be that pelagic larval settlement and recruitment success in acidified oceans is highly reduced 4 , 5 , 7 , 26 , supporting the hypothesis of direct developer pH tolerance. On the global scale of OA, pelagic larvae may be searching in vain for a ‘less acidified’ habitat that can retain a viable population base. Current research on evolution and adaptation to OA is primarily focused on quantifying genetic variability of OA-tolerant traits as an indicator of adaptive capacity under the expected future oceanic conditions 37 , 38 , 39 , 40 . Within this context, brooders may reach extinction far before their pelagic counterparts, as they typically hold lower genetic variability 24 . However, our evidence points to the opposite pattern. It would be worthwhile to investigate extinction risks of brooding and pelagic-developing species in the context of global OA at different spatial and temporal scales, in an attempt to constrain the effects of both exposure to ongoing global OA and local extreme events. 
In fact, while brooding-associated traits may be less advantageous under local extreme events, due to dispersal limitation on a short time scale (within a generation), they may actually prove to be more adaptive in a globally disturbed ocean on a longer time scale (across multiple generations). Our polychaete-based analysis, supported by a selection of other invertebrate taxa, provides compelling comparative evolutionary evidence that direct developers/brooders may do better in the globally acidifying ocean than their relatives employing broadcast spawning and pelagic larval development. The general principle we present here will help identify which marine taxa are likely to be more tolerant to ocean acidification, advancing our ability to predict the fate of marine biodiversity based simply on an aspect of species’ life history strategies. Methods for the sequencing procedure DNA was extracted from two partial specimens of confirmed reproductive modes using the DNEasy Blood and Tissue Kit (Qiagen), following the manufacturer’s protocol. A ~600 base pair segment of the mitochondrial cytochrome c oxidase subunit I (COI) was amplified using universal primers 41 for Platynereis massiliensis and polychaete-specific PolyLCO/Poly-HCO primers for P. dumerilii 42 . PCR products were cleaned with Exo-SapIT (Affymetrix). Cycle sequencing was performed using BigDye Terminator v 3.1 (Life Technologies). Sequences were cleaned using the Zymo Research DNA Sequencing Clean-up Kit™. Sequences were analyzed in an ABI3130 Genetic Analyzer (Life Technologies) and edited in Sequencher v. 4.8 (Genecodes). Sequence alignment and calculation of Kimura 2-parameter genetic distances were conducted in MEGA 6 43 . The sequences have been deposited in GenBank under accession numbers KP127953 ( P. massiliensis ) and KP127954 ( P. dumerilii ). Additional Information How to cite this article : Lucey, N. M. et al. To brood or not to brood: Are marine invertebrates that protect their offspring more resilient to ocean acidification?. Sci. Rep. 5 , 12009; doi: 10.1038/srep12009 (2015).
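The Methods above report Kimura 2-parameter (K2P) distances computed in MEGA 6. For orientation, here is a minimal Python sketch of the same estimator; it assumes two pre-aligned, equal-length sequences, skips gapped or ambiguous sites, and is our own illustration rather than part of the study's pipeline (MEGA's handling of missing data may differ).

```python
import math

PURINES, PYRIMIDINES = {"A", "G"}, {"C", "T"}

def k2p_distance(seq1, seq2):
    """Kimura 2-parameter distance between two aligned DNA sequences."""
    pairs = [(a, b) for a, b in zip(seq1.upper(), seq2.upper())
             if a in "ACGT" and b in "ACGT"]        # skip gaps/ambiguities
    n = len(pairs)
    ts = sum(1 for a, b in pairs
             if a != b and ({a, b} <= PURINES or {a, b} <= PYRIMIDINES))
    tv = sum(1 for a, b in pairs if a != b) - ts
    P, Q = ts / n, tv / n                           # transition/transversion proportions
    return -0.5 * math.log((1 - 2 * P - Q) * math.sqrt(1 - 2 * Q))

# Toy 10 bp "alignment" (hypothetical, not the real COI data):
print(k2p_distance("ACGTACGTAC", "ACGTGCGTAA"))     # one transition, one transversion
```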
Marine organisms living in acidified waters exhibit a tendency to nurture their offspring to a greater extent than those in more regular conditions. Researchers at Plymouth University have found that polychaete worms located around volcanic vents in the Mediterranean grow and develop their eggs within the protection of the family unit - in contrast to closely-related species that release them into the water column to fend for themselves. The scientists say the findings could provide an important insight into how organisms might adjust to increasing levels of carbon dioxide in the sea - and the ramifications that might have for future biodiversity. Their report - published in Scientific Reports - was based on field research off the island of Ischia in Italy and lab-work in which the breeding patterns of the worms were observed at closer quarters. Noelle Lucey, a researcher within Plymouth University's Marine Institute, and of the University of Pavia, said: "One of the most interesting annelid worms here typically grows to around 3cm in length and is found on the seafloor. It was previously thought that their breeding is triggered by a full moon, when they swim up to the surface and release - or 'broadcast' - their eggs. But our studies at the CO2 vents off Ischia have found something very different: those species living near the volcanic vents, in waters rich in carbon dioxide, seem to have adapted to the harsher conditions by brooding their offspring." The team found that 12 of the 13 species that had colonized the vent area exhibited brooding characteristics, most notably producing fewer and larger eggs that were usually retained within some form of protective sac. Ten of those species were in higher abundance around the vents than in the ambient areas surrounding them - some by a ratio as high as nine-to-one. The observation that brooding worms dominated the CO2 vent areas, and existing evidence of physiological and genetic adaptation in vent-inhabiting species, prompted the researchers to take immature adult Platynereis dumerilii specimens and attempt to cross breed them in the laboratory. A male - taken from the ambient control area - and a female - from the vent zone - almost immediately began to breed. But instead of the typical broadcast pattern, the eggs produced were five times larger than the average and were laid in a complex tube structure or brooding pouch. When genetic analysis was conducted, it became clear that worms from inside the CO2 vents were in fact Platynereis massiliensis, a sibling species that diverged from Platynereis dumerilii in the past - confirming that all of the dominant species in the vents are brooders of some sort. Dr Piero Calosi, from the University of Quebec in Rimouski, Canada, said: "Our study confirms the idea that marine organisms have evolved brooding characteristics in response to environmental stresses, such as ocean acidification." On the breadth and importance of their study Dr Chiara Lombardi, from ENEA, Italy, said: "Studies like ours can help substantially advance our predictive ability on the fate of marine biodiversity simply based on species characteristic, such as their reproductive strategy." Ms Lucey added: "This study brings us one step closer to understanding which marine species will be more resilient to climate changes. In fact, our work helps in establishing a fundamental principle to be used to guide decisions on the conservation of marine ecosystems and to help better manage the fisheries and aquaculture industries."
10.1038/srep12009
Biology
Turtle tumors linked to excessive nitrogen from land-based pollution
PeerJ 2:e602. DOI: 10.7717/peerj.602 Journal information: PeerJ
http://dx.doi.org/10.7717/peerj.602
https://phys.org/news/2014-10-turtle-tumors-linked-excessive-nitrogen.html
Abstract The tumor-forming disease fibropapillomatosis (FP) has afflicted sea turtle populations for decades with no clear cause. A lineage of α -herpesviruses associated with these tumors has existed for millennia, suggesting environmental factors are responsible for its recent epidemiology. In previous work, we described how herpesviruses could cause FP tumors through a metabolic influx of arginine. We demonstrated the disease prevails in chronically eutrophied coastal waters, and that turtles foraging in these sites might consume arginine-enriched macroalgae. Here, we test the idea using High-Performance Liquid Chromatography (HPLC) to describe the amino acid profiles of green turtle ( Chelonia mydas ) tumors and five common forage species of macroalgae from a range of eutrophic states. Tumors were notably elevated in glycine, proline, alanine, arginine, and serine and depleted in lysine when compared to baseline samples. All macroalgae from eutrophic locations had elevated arginine, and all species preferentially stored environmental nitrogen as arginine even at oligotrophic sites. From these results, we estimate adult turtles foraging at eutrophied sites increase their arginine intake 17–26 g daily, up to 14 times the background level. Arginine nitrogen increased with total macroalgae nitrogen and watershed nitrogen, and the invasive rhodophyte Hypnea musciformis significantly outperformed all other species in this respect. Our results confirm that eutrophication substantially increases the arginine content of macroalgae, which may metabolically promote latent herpesviruses and cause FP tumors in green turtles. Cite this as Van Houtan KS, Smith CM, Dailer ML, Kawachi M. 2014 . Eutrophication and the dietary promotion of sea turtle tumors . PeerJ 2 : e602 Main article text Introduction Fibropapillomatosis (FP) is a chronic and often lethal tumor-forming disease in sea turtles ( Fig. 1A ). It became a panzootic in green turtles in the 1980s, prompting concern that it was a serious threat to their global conservation ( Chaloupka et al., 2008 ; Herbst, 1994 ). Though most green turtle population indices have increased steadily since ( Seminoff et al., 2014 ), the disease remains prevalent and in several locations its incidence is still increasing ( Van Houtan, Hargrove & Balazs, 2010 ). Advances in understanding the cause of FP have recently centered on environmental factors, with diverse lines of evidence from genomics to epidemiology supporting this hypothesis ( Aguirre & Lutz, 2004 ; dos Santos et al., 2010 ; Ene et al., 2005 ; Herbst et al., 2004 ; Van Houtan, Hargrove & Balazs, 2010 ). The ecological promotion of the disease is made further interesting as FP tumors have a proposed viral origin. Figure 1: (A) Juvenile green turtle ( Chelonia mydas ) severely afflicted with fibropapillomatosis, a tumor-forming disease associated with α -herpesviruses. Photo: August 2012 Makena, Maui (credit: Chris Stankis, Flickr/Bluewavechris). (B) Amino acid profiles from turtle tissues show fibropapilloma tumors are notably enriched in glycine, proline and arginine, and depleted in lysine. Glycine is a known tumor biomarker; proline aids herpesvirus infections; and arginine and lysine promote and inhibit herpesviruses, respectively. Bars represent the average difference between tumor and baseline tissue for 12 individual turtles, percent changes from baseline percent total protein listed in parentheses, error bars are SEM. Bar color indicates P values from two-tailed paired t -tests. 
(C) Underlying histograms for arginine content in baseline and tumor tissue samples; bars are raw values, curves are smoothed trends. DOI: 10.7717/peerj.602/fig-1 Early studies discovered DNA from α -herpesviruses in FP tumors, but found adjacent tissues from diseased turtles, as well as samples from clinically healthy turtles, to be free of herpes DNA ( Lackovich et al., 1999 ; Lu et al., 2000 ; Quackenbush et al., 1998 ). Though further progress has been limited by an inability to develop viral cultures, recent work with next generation genomic techniques has made important contributions. These studies ( Alfaro-Núñez & Gilbert, 2014 ; Page-Karjian et al., 2012 ) found herpesvirus DNA to be rather ubiquitous—occurring in all hard-shelled sea turtles, in all populations tested, and even prevalent in clinically healthy turtles. If α -herpesviruses are the origin of FP, this represents a classic herpesvirus scenario where infections are pervasive, but latent or subclinical, in the host population ( Stevens & Cook, 1971 ; Umbach et al., 2008 ), consistent with the word’s etymology from the Greek ἕρπης , meaning “to creep”. With this in mind, we recently described the epidemiological link between this disease and coastal eutrophication, detailing how green turtles could literally be eating themselves sick ( Hall et al., 2007 ), activating latent herpes infections and promoting tumors by foraging on arginine-enriched macroalgae. A model built on this hypothesis ( Van Houtan, Hargrove & Balazs, 2010 ) explained 72% of the spatial variability of the disease across the Hawaiian Islands while offering a detailed explanation of the disease that connects turtle ecology, plant physiology (e.g., Raven & Taylor, 2003 ), and herpes biology to known management problems of nutrient pollution and invasive species. At the forefront, this proposed pathway focuses on the role arginine might have in promoting FP tumors. A significant body of evidence supports this. In many chronic diseases, arginine is implicated in cell inflammation and immune dysfunction ( Peranzoni et al., 2008 ) and in promoting viral tumors ( Mannick et al., 1994 ). But arginine is specifically important for herpesviruses. Laboratory studies demonstrate that herpes infections require arginine, being stunted in its absence ( Inglis, 1968 ; Mikami, Onuma & Hayashi, 1974 ; Olshevsky & Becker, 1970 ) and diminished when it is deprived ( Mistry et al., 2001 ). Subsequent research revealed that arginine is a principal component of glycoproteins in the outer viral envelope of herpesviruses. These glycoproteins are conserved across a wide variety of herpesviruses ( Alfaro-Núñez, 2014 ) and are critical to the herpes life cycle as they facilitate localization, fusion, and entrance to host cell nuclei ( Hibbard & Sandri-Goldin, 1995 ; Klyachkin & Geraghty, 2008 ). Beyond its significance for herpesviruses, arginine is an emerging focus of human cancer treatments as well. Cancer tumors lacking enzymes that synthesize arginine must obtain arginine metabolically, and can therefore be regulated by arginine deprivation ( Bowles et al., 2008 ; Feun et al., 2008 ; Kim et al., 2009 ). Perhaps coincidentally, arginine also plays an important role in how plants sequester environmental nitrogen. Nitrogen is a limiting factor for both plant and macroalgal growth ( Raven & Taylor, 2003 ). As a result, in times of environmental availability plants acquire excess nitrogen through what is known as luxury consumption ( Chapin III, 1980 ). 
Terrestrial plants, however, do not rely on a host of amino acids for luxury consumption; they preferentially store ambient nitrogen in arginine ( Chapin III, 1980 ; Chapin III, Schulze & Mooney, 1990 ; Llàcer, Fita & Rubio, 2008 ). Little is known about how this functions in macroalgae, however. Previous studies in Hawaii suggest it might be relevant. Macroalgae from these limited surveys demonstrated that amino acids and stable isotope values for δ 15 N varied by species and by location ( Dailer et al., 2010 ; McDermid, Stuercke & Balazs, 2007 ). Though the data were limited, arginine was specifically elevated at eutrophic sites for two invasive species of Ulva and Hypnea ( McDermid, Stuercke & Balazs, 2007 ), prompting more systematic study. Eutrophication of coastal waters in Hawaii has spurred chronic nuisance algal blooms and dramatically altered the composition of reef ecosystems ( Cox et al., 2013 ; Dailer et al., 2010 ; Lapointe & Bedford, 2011 ; Smith, Hunter & Smith, 2010 ). Non-native macroalgae introduced across the Main Hawaiian Islands after 1950 have been particularly influential ( Abbot, 1999 ; Smith, Hunter & Smith, 2002 ), having displaced native algae and become the dominant forage for Hawaiian green turtles ( Russell & Balazs, 2009 ). Despite the emergence of FP, and historical overharvesting ( Kittinger et al., 2013 ; Van Houtan & Kittinger, 2014 ), numbers of nesting green turtles have grown steadily in Hawaii since their protection under state and federal regulations in the 1970s ( Seminoff et al., 2014 ). Nonetheless, FP remains the greatest known mortality to Hawaiian green turtles ( Chaloupka et al., 2008 ) and in some regions the incidence of FP tumors is still on the rise ( Van Houtan, Hargrove & Balazs, 2010 ). Beyond its influence on green turtle populations, eutrophication is also associated with coral reef declines ( Vega Thurber et al., 2014 ). Growth anomalies in Porites corals, for example, occur in the same eutrophied Hawaii reefs as diseased green turtles ( Friedlander et al., 2008 ), and these coral tumors have Herpesviridae gene signatures ( Vega Thurber et al., 2008 ). Understanding the promotion of FP tumors may therefore be broadly relevant for the conservation of coral reef ecosystems. Here we analyze tissues from tumored green turtles and dominant forage species of macroalgae from across Hawaii. We determine amino acid content using High-Performance Liquid Chromatography (HPLC) to establish tumor biomarkers ( Jain et al., 2012 ) and examine nutrient changes in macroalgae and luxury consumption. We run a series of Generalized Linear Models (GLMs) to test for arginine levels, arginine storage, and to examine the role of eutrophication. Collectively, these analyses are an interdisciplinary test of a hypothesis we posed earlier ( Van Houtan, Hargrove & Balazs, 2010 ; Van Houtan & Schwaab, 2013 ) that arginine would be elevated in invasive algae in eutrophic locations and this would in turn promote FP tumors in green turtles. Materials and Methods Tissue collection and preparation We sampled turtle tissues during necropsies of stranded green turtles at the NOAA Pacific Islands Fisheries Science Center in Honolulu (US FWS permit # TE-72088A-0). For turtles with heavy tumor burdens, we collected both tumor and baseline tissue. Using a scalpel to make radial cross-sections, we obtained at minimum 0.5 cm 3 for each tissue type, selecting subsurface material to avoid contamination. 
Tumors were sampled from the flipper or eye, and baseline tissue was subcutaneous muscle from the flipper or pectoral that appeared grossly subclinical. We collected samples from 12 turtles, representing males and females from a variety of life stages ( Table S1 provides full details). At collection, we rinsed samples in water and stored in 90% alcohol in 1.5 mL cryovials (Thermo Scientific™ Nalgene™). After 24–48 h, we pressed the samples dry with forceps, and transferred to clean cryovials packed with SiO 2 indicating gel desiccant (Fisher™, grade 48, 4–10 mesh). We replaced the spent silica beads every 24 h, repeating the process as needed until the samples were completely dried. We homogenized the resulting tissues with a porcelain mortar and pestle or by shaving samples with a #22 scalpel blade. We collected five species of macroalgae from coastal watersheds spanning a range of nutrient profiles ( Van Houtan, Hargrove & Balazs, 2010 ) on Oahu, Maui, and Hawaii island. We focused on three invasive species— Hypnea musciformis (Wulfen) JV Lamouroux, Acanthophora spicifera (M Vahl) Borgesen, and Ulva lactuca Linnaeus—and two non-invasive native species— Pterocladiella capillacea (SG Gmelin) Santelices & Hommersand and Amansia glomerata C Agardh—that are representative turtle forage items ( Russell & Balazs, 2009 ) and reasonably widespread. P. capillacea and A. glomerata were inconsistently found and combined into a single category. U. lactuca , the only chlorophyte, was recently reclassified ( O’Kelly et al., 2010 ) from U. fasciata . For each species and location we collected three replicates at 0–5 m depth, where nearshore green turtles commonly forage ( Van Houtan & Kittinger, 2014 ). We rinsed samples with deionized water and later dried in a 60 °C oven ( Dailer et al., 2010 ). We homogenized dried samples with a mortar and pestle and stored in 5 mL cryovials (Thermo Scientific™ Nalgene™). When samples were difficult to obtain, we supplemented our samples with results from a published study in Hawaii ( McDermid, Stuercke & Balazs, 2007 ). Online Supplemental Information provides full sample metadata. Amino acid and statistical analysis We sent prepared samples to the Protein Chemistry Laboratory at Texas A&M University for amino acid determination. Samples were separated into three aliquots, weighed, and then placed in 200 µL of 6N HCl along with the Internal Standard and hydrolyzed at 110 °C for 22 h. The resulting amino acids were separated and quantified using an HPLC (Agilent™ 1260) with pre-column derivitization ( Blankenship et al., 1989 ) by ortho-phthalaldehyde (OPA) and fluorenylmethyl chloroformate (FMOC). As both tryptophan and cysteine are destroyed during hydrolysis, the total protein measured is slightly underreported. As a result, this analysis reports the dry mass for 16 amino acids ( Fig. 1B ), which we calculate as the mass divided by the total sample mass, averaged between sample aliquot replicates. For turtle tissues, we calculate the change of amino acids from the baseline to tumor samples, expressed as the difference of the average percent total protein for each tissue type. For macroalgae samples, we first calculate the change in arginine levels in samples from eutrophic and oligotrophic sites. Collection sites were considered eutrophic if they had a nitrogen footprint statistic ( Van Houtan, Hargrove & Balazs, 2010 ) above 0.50 and oligotrophic if not. (There was a clear separation here as no sites fell between 0.37 and 0.57.) 
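A minimal sketch of the two bookkeeping steps just described (each amino acid's share of total sample dry mass, averaged over aliquot replicates, and the 0.50 nitrogen-footprint cutoff for site treatment) might look as follows; the function names and example numbers are ours, not the study's analysis code.

```python
import numpy as np

def pct_total(amino_mass_mg, sample_mass_mg):
    """Amino acid dry mass as % of total sample mass, averaged over aliquots."""
    return 100 * np.mean(np.asarray(amino_mass_mg) / np.asarray(sample_mass_mg))

def site_treatment(nitrogen_footprint):
    """Classify a collection site; no observed footprint fell between 0.37 and 0.57."""
    return "eutrophic" if nitrogen_footprint > 0.50 else "oligotrophic"

# Hypothetical arginine masses from three aliquots of one macroalgae sample:
print(pct_total([0.019, 0.021, 0.020], [1.02, 1.10, 1.05]))  # ~1.9% dry mass
print(site_treatment(0.62))                                  # 'eutrophic'
```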
Sites straddling two watersheds were also located on remote geographic peninsulas with little human impact ( Van Houtan, Hargrove & Balazs, 2010 ) and therefore given the lower of the two watershed footprint statistics. To describe how macroalgae store environmental nitrogen, we multiplied the amino acid dry mass (percent of total) for each sample by the proportional molecular weight of nitrogen. We formally test for statistical sample differences through a variety of generalized linear models (GLM). To assess amino acid changes between tumors and turtle baseline tissue, we run a paired t -test for sample means and plot the results to identify potential biomarkers ( Jain et al., 2012 ). To examine arginine variability of individual algae across site types we use a one-tailed t -test. We follow this with a GLM that has site treatment and species as factors to predict arginine content. As a frame of reference, we combine these results with known energetic requirements of green turtles ( Jones et al., 2005 ) and the published energy content of the algae in our study ( McDermid, Stuercke & Balazs, 2007 ) to estimate the daily arginine intake. We calculate this for subadult turtles, the highest-diseased demographic in Hawaii ( Van Houtan, Hargrove & Balazs, 2010 ), as well as for large adults, for different site-species comparisons. To test for luxury consumption, we fit normal distributions to the observed nitrogen amino acid dry mass values for each sample, and determine if arginine falls outside the distribution’s expected 95% interval. We then examine the cause of arginine nitrogen variability. We first build a GLM with total plant nitrogen and species as factors, and then a second with nitrogen footprint and species as factors. Results Figure 1B plots the amino acid profiles of tumor-baseline tissue sets for 12 green turtles. We observed significant differences in all 16 amino acids tested, highlighting the divergent metabolism of tumors. Methionine (tumor depleted) and arginine (tumor enriched) had the most statistically significant changes. Glycine, however, had the most dramatic shift. Tumor glycine increased on average 260% (range 93–382%, t = 11.4, P < 0.0001), meaning tumors had 2–5 times more glycine than baseline tissues. This is perhaps unsurprising as glycine is a building block for nucleic acids and is required in large amounts by rapidly proliferating cancer cells ( Jain et al., 2012 ; Tomita & Kami, 2012 ). Proline also increased markedly in tumors (average 144%, range 40–269%, t = 10.1, P < 0.0001), the second largest change we observed. This may reflect the importance of proline for herpesviruses in counteracting host cell defenses. The herpesvirus protein Us11 has an arginine- and proline-rich binding domain that specifically inhibits PKR (protein kinase R), critical for cellular viral defense ( Khoo, Perez & Mohr, 2002 ; Poppers et al., 2000 ). Proline synthesis was also important in recent analyses of cancer tumors ( De Ingeniis et al., 2012 ; Nilsson et al., 2014 ). Arginine increased (average 25%, range 9–38%, t = 12.9, P < 0.0001) and lysine decreased (average 47%, range 23–67%, t = −12.1, P < 0.0001) in tumors, consistent with their respective demonstrated roles in herpes infections ( Fatahzadeh & Schwartz, 2007 ; Griffith et al., 1987 ; Hibbard & Sandri-Goldin, 1995 ; Inglis, 1968 ; Mikami, Onuma & Hayashi, 1974 ; Olshevsky & Becker, 1970 ). 
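The luxury-consumption test described in the Methods above can be sketched in a few lines: fit a normal distribution to a sample's 16 per-amino-acid nitrogen dry-mass values and flag any amino acid outside the central 95% interval. The example values below are illustrative, not the study's data; the nitrogen-fraction constant simply shows how amino acid dry mass would be converted to nitrogen dry mass.

```python
import numpy as np
from scipy import stats

# Nitrogen share of an amino acid's molecular weight, e.g. arginine
# (C6H14N4O2, 174.20 g/mol) carries 4 N atoms: 4 * 14.007 / 174.20 ~= 0.322,
# so arginine N dry mass = arginine dry mass * ARG_N_FRACTION.
ARG_N_FRACTION = 4 * 14.007 / 174.20

def luxury_flags(n_dry_mass, alpha=0.05):
    """Flag amino acids whose N dry mass falls outside the central
    (1 - alpha) interval of a normal fitted to the sample's values."""
    values = np.array(list(n_dry_mass.values()))
    mu, sigma = stats.norm.fit(values)                  # ML fit (mean, sd)
    lo, hi = stats.norm.ppf([alpha / 2, 1 - alpha / 2], mu, sigma)
    return {aa: bool(v < lo or v > hi) for aa, v in n_dry_mass.items()}

# Illustrative sample: arginine anomalously high relative to the other residues
sample = {"R": 0.61, "D": 0.21, "E": 0.20, "A": 0.18, "K": 0.17, "G": 0.15,
          "L": 0.12, "S": 0.11, "V": 0.10, "H": 0.10, "T": 0.09, "P": 0.08,
          "I": 0.07, "F": 0.06, "Y": 0.05, "M": 0.03}
print(luxury_flags(sample)["R"])                        # True: outside 95% interval
```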
Figure 1C plots the underlying raw histograms for arginine percent total protein in baseline samples and tumors, with smoothed distributions in the background. Aside from the above results, tumors were depleted in glutamine (53%), leucine (37%), isoleucine (44%), asparagine (18%), tyrosine (44%), valine (31%), threonine (24%), phenylalanine (22%), and histidine (35%)—listed in order of percent total protein change. Tumors were also enriched in alanine (37%) and serine (16%), the latter being essential in breast cancers ( De Ingeniis et al., 2012 ; Possemato et al., 2011 ). These amino acid profiles serve as a first template for establishing FP biomarkers that may aid understanding this disease, and for herpesviruses and tumor formation more generally. Analyzing forage, Fig. 2 plots the arginine enrichment in common forage species of wild macroalgae between locations of low and high nutrient inputs. Arginine levels increased at eutrophic sites in all species (average 160%, range: 70–230%). Though A. spicifera had the highest increase (230%, t = 3.5, P = 0.01), H. musciformis had the highest arginine content at eutrophic (1.94% dry mass) and oligotrophic (0.79% dry mass) sites and the most statistically significant change ( t = 4.1, P = 0.007). Of note, the H. musciformis arginine content at oligotrophic sites was higher than that of native rhodophytes sampled at eutrophic sites (0.73% dry mass). This may highlight the role of aggressively invasive macroalgae ( Smith, Hunter & Smith, 2002 ) such as H. musciformis in this disease. A more complete GLM with site treatment and species as factors predicts arginine content ( F (7,24) = 11.8, P < 0.0001, R = 0.91). Figure 2: Common forage species for green turtles are arginine enriched at eutrophic coastal areas. Arginine content is 2–3 times higher at eutrophic sites, compared to the same alga sampled at oligotrophic sites. Increases are more pronounced, and arginine levels are higher, in the three nonnative invasive macroalgae species than for the two native Rhodophyta, Amansia glomerata and Pterocladiella capillacea . Error bars indicate SEM. Asterisks are one-tailed t -test results: ∗ P < 0.05; ∗∗ P = 0.01; ∗∗∗ P < 0.01. A GLM with site treatment and species as factors to predict arginine content is statistically significant ( F (7,24) = 11.8, P < 0.0001, R = 0.91). Given energetic estimates, green turtles foraging on non-native algae at eutrophied sites can increase their daily arginine intake by 17–26 g. DOI: 10.7717/peerj.602/fig-2 But this does not quite capture the nutrient intake of turtles foraging at different site treatments. From energetics we know a 45 kg subadult green turtle requires 2,435 kJ day -1 of dietary energy ( Jones et al., 2005 ). If this turtle only foraged on H. musciformis —with an energy content of 4.3 kJ g -1 total dry mass ( McDermid, Stuercke & Balazs, 2007 )—it would require 567 g dry mass of H. musciformis daily to meet energetic demands. Based on our amino acid analysis ( Fig. 2 ) this turtle would consume 11.1 g of arginine daily at eutrophic sites. If this same turtle only consumed the native species we tested ( P. capillacea and A. glomerata , average energy 8.9 kJ g -1 total dry mass ( McDermid, Stuercke & Balazs, 2007 )) it would require 274 g dry mass of daily forage. At oligotrophic sites this turtle would consume 1.2 g arginine per day. In other words, foraging on the invasive alga, H. 
musciformis , at eutrophic sites increases the average arginine intake by 9.9 g day -1 (range 7.8–11.8 g) by comparison to consuming native species at oligotrophic sites, which is 5–14 times the baseline arginine consumption. If we consider this for a 100 kg adult turtle requiring 5,364 kJ daily ( Jones et al., 2005 ), the arginine boost is 21.8 g day -1 (range 17.2–26.0 g). Clearly, there is a substantial dietary influx of arginine for green turtles foraging in eutrophied watersheds. Figure 3 plots the nitrogen dry mass for each amino acid to examine how macroalgae sequester environmental nitrogen. In 13/13 samples from eutrophic sites and 9/12 (75%) samples from oligotrophic sites, nitrogen levels were anomalously high for arginine. That is, nitrogen dry mass was outside the expected 95% interval set by the fitted normal distribution parameters for that sample. Two samples from oligotrophic sites (Punaluu P. capillacea and Kaena A. spicifera ) demonstrated no preferential nitrogen storage. One sample (Olowalu U. lactuca ) had a positive anomaly for alanine, but its total nitrogen levels were the lowest measured, minimizing its significance. We generated Fig. 3 before receiving the results for a seventh Ulva sample (eutrophic Kanaha). This panel appears in the online supplement. Figure 3: Like terrestrial plants, macroalgae preferentially sequester available environmental nitrogen in arginine. We quantify this luxury consumption by calculating the nitrogen dry mass in each amino acid across species and ecosystem treatments. Horizontal lines are the mean nitrogen dry mass for each sample, ! indicates value lies outside the expected 99% interval, and ∗ outside the expected 95% interval. All (13/13) of the eutrophic and 75% (9/12) of the oligotrophic site samples show preferential sequestration of nitrogen in arginine. Amino acids are arranged from left to right in average descending order of prevalence: arginine, R; aspartic acid, D; glutamic acid, E; alanine, A; lysine, K; glycine, G; leucine, L; serine, S; valine, V; histidine, H; threonine, T; proline, P; isoleucine, I; phenylalanine, F; tyrosine, Y; methionine, M. DOI: 10.7717/peerj.602/fig-3 Though arginine was the clear preference for nitrogen storage (22/25 total samples, 88%), this was often extreme. In 15 samples (denoted by “!” in Fig. 3 ) arginine nitrogen storage is outside the 99% interval (above the expected 99.5% cumulative probability distribution) for that sample. Such extreme arginine preference occurred in all H. musciformis samples, followed by A. spicifera (4/6 samples, 67%), U. lactuca (3/7 samples, 43%), and the native rhodophytes (2/6 samples, 33%)—and was observed in 9/13 (69%) samples from eutrophic sites considering all species. Thus, the marine macroalgae we sampled demonstrated a clear tendency to sequester environmental nitrogen as arginine, which is an interesting convergence with terrestrial plants ( Chapin III, 1980 ; Chapin III, Schulze & Mooney, 1990 ). Having documented elevated arginine at eutrophic sites and arginine luxury consumption, Fig. 4 assesses the relationship between arginine nitrogen storage, total tissue nitrogen, and watershed-level eutrophication metrics. Arginine nitrogen increased with total tissue nitrogen for all species ( Fig. 4A ), and a GLM with total tissue nitrogen and species as factors is highly significant ( F (7,24) = 35.3, P < 0.0001, R = 0.97). 
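A model of the kind just reported ("a GLM with total tissue nitrogen and species as factors") can be specified in a few lines with statsmodels. The sketch below uses a Gaussian family and made-up data; the column names are ours, not the study's.

```python
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Made-up data standing in for the study's samples:
# arg_n = arginine nitrogen (% dry mass), total_n = total tissue N, species label
df = pd.DataFrame({
    "arg_n":   [0.08, 0.12, 0.35, 0.61, 0.10, 0.22, 0.15, 0.40],
    "total_n": [1.0, 1.4, 2.6, 3.4, 1.2, 2.0, 1.5, 2.8],
    "species": ["native", "native", "Hypnea", "Hypnea",
                "Ulva", "Ulva", "Acanthophora", "Acanthophora"],
})

# Gaussian GLM with a continuous covariate and species as a categorical factor
fit = smf.glm("arg_n ~ total_n + C(species)", data=df,
              family=sm.families.Gaussian()).fit()
print(fit.summary())
```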
Figure 4B shows that arginine nitrogen also increased with each site’s nitrogen footprint ( Van Houtan, Hargrove & Balazs, 2010 ), an index of natural and anthropogenic factors that generate, deliver, and retain nitrogen in coastal watersheds, and a proxy for local nitrogen loading. Similar to the previous model, a GLM with nitrogen footprint and species as factors is significant ( F (7,24) = 13.0, P < 0.0001, R = 0.92). Figure 4: Arginine sequestration of environmental nitrogen increases with (A) total plant N and (B) in proportion with watershed eutrophication. A GLM with total plant nitrogen and species as factors is statistically significant ( F (7,24) = 35.3, P < 0.0001, R = 0.97), as is a model with nitrogen footprint ( Van Houtan, Hargrove & Balazs, 2010 ) and species as factors ( F (7,24) = 13.0, P < 0.0001, R = 0.92). The relationships are similar between species, except for H. musciformis , which has a steeper slope in both panels, indicating H. musciformis is more efficient at sequestering environmental nitrogen and in storing it in arginine. DOI: 10.7717/peerj.602/fig-4 The relationships for these models are similar between species, except for H. musciformis , which has a steeper slope in both panels. In Fig. 4A , this indicates that given environmental levels, H. musciformis allocates proportionally more nitrogen to arginine than other amino acids, for the algae tested. Figure 4B suggests H. musciformis is more proficient at sequestering environmental nitrogen in arginine than the other species. Similar to Fig. 2 , this perhaps underscores the importance of H. musciformis in promoting FP tumors. Though the arginine levels ( Fig. 2 ) and the arginine nitrogen storage ( Fig. 3 ) are not as high in the native macroalgae as for U. lactuca and A. spicifera , the statistical relationships between arginine nitrogen, tissue nitrogen, and ecosystem nitrogen are similar ( Fig. 4 ). Discussion In this study we demonstrated that eutrophication increases the arginine content of invasive marine macroalgae, that this significantly boosts the arginine intake of foraging green turtles, and that arginine is elevated in tumors from diseased green turtles. Based on energetic needs, we calculated adult turtles foraging in eutrophic habitats on invasive algae might boost their arginine intake 5–14 times, consuming a total of 22–28 g of arginine daily. We provide a first baseline set of amino acid biomarkers for FP tumors, and we documented arginine luxury consumption across a variety of macroalgae. We discuss the results and their implications for understanding the disease and its environmental promotion below. Though FP tumors were elevated in several amino acids, dietary shifts in arginine may be significant. Glycine, proline, alanine, arginine and serine all were elevated in tumors ( Figs. 1B – 1C ). Of these amino acids, glycine ( Jain et al., 2012 ), proline ( De Ingeniis et al., 2012 ; Nilsson et al., 2014 ), and serine ( Possemato et al., 2011 ) are known tumor biomarkers, and arginine and proline have added significance for herpesviruses. Of the elevated amino acids, however, only arginine increased in macroalgae at eutrophied sites (where disease rates are elevated) and has a functional role in nitrogen luxury consumption ( Fig. 3 ). Arginine nitrogen content, for example, was above the 95% expected interval in 88% of our macroalgae samples, and was extreme (above the 99% interval) in 60% of our samples ( Fig. 3 ). 
This suggests arginine may be the critical ingredient linking nearshore eutrophication, luxury consumption, turtle diet, and FP tumors. The metabolic pathways here are uncertain, however. Metabolic reprogramming is a hallmark of rapidly proliferating cancer cells, and a growing body of literature has focused on carbon metabolism in tumors ( De Ingeniis et al., 2012 ; Jain et al., 2012 ; Nilsson et al., 2014 ). Perhaps it is unsurprising that we detected significant tumor-baseline differences for all 16 amino acids. However, instead of focusing on carbon metabolism common to cancer studies, we profiled nitrogen due to its role in limiting macroalgae growth. Future progress in understanding FP may therefore come from a systematic characterization of the metabolic pathways in FP tumors, and in particular the recycling, salvaging, and biosynthesis of arginine. A dietary role for tumor promotion in human cancers may also benefit from a more comprehensive understanding of nitrogen metabolism. Our results help explain the epidemiology of this disease, and highlight the role of environmental factors in Hawaii and perhaps beyond. Though DNA from herpesviruses linked to FP tumors is found in all sea turtle species, this disease has only been widespread and a conservation concern for green turtles ( Alfaro-Núñez, 2014 ). This is consistent with our proposed mechanism involving eutrophication and arginine intake. Green turtles are the only strictly herbivorous sea turtle and therefore would consume the most arginine-enriched algae in nearshore habitats (other omnivorous sea turtle species consume algae, though at lower rates). The spatial and demographic structure of green turtles may also be relevant. Juvenile green turtles have a pelagic phase until they recruit to nearshore habitats as young juveniles ( Seminoff et al., 2014 ). Far away from human population centers, green turtles are disease-free during this pelagic phase ( Ene et al., 2005 ; Van Houtan, Hargrove & Balazs, 2010 ). In the Main Hawaiian Islands (MHI), the incidence of FP tumors increases steadily as turtles mature, and then decreases when they begin migrating to the relatively pristine Northwestern Hawaiian Islands to breed. In other words, disease rates increase directly in proportion to their residency time in the Main Hawaiian Islands ( Van Houtan, Hargrove & Balazs, 2010 ). In addition to this chronic exposure to eutrophic habitats, older turtles have greater energetic demands and therefore may additionally have higher disease rates due to increases in their consumption of MHI macroalgae and subsequent arginine intake. Though we are investigating environmental influences, it is possible that immunocompetence could factor in these patterns, but its influence is unknown. Aside from demographic patterns in turtles, invasive species of macroalgae also seem to be influential. Certain regions—such as the Kona coast of Hawaii island—have curiously low disease rates. While we previously demonstrated that this region has few nutrient inputs and invasive algae are uncommon ( Van Houtan, Hargrove & Balazs, 2010 ), our results here help explain this pattern. In this study we showed foraging green turtles could be more easily satiated by native macroalgae, as they can have relatively higher energy contents. Combined with our amino acid results, the energy and arginine content of macroalgae may therefore act as a sort of one-two punch for promoting this disease. 
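The intake arithmetic behind this one-two punch is easy to verify from the numbers reported above (2,435 kJ day -1 for a 45 kg subadult; 4.3 kJ g -1 and 1.94% arginine for H. musciformis at eutrophic sites; 8.9 kJ g -1 for the native algae, with an arginine fraction of roughly 0.44% back-calculated from the reported 1.2 g day -1 ). The helper below is our own sketch, not the authors' code.

```python
def daily_arginine_g(energy_need_kj, alga_energy_kj_per_g, arginine_dry_frac):
    """Arginine intake (g/day) if the daily energy need is met from one alga."""
    forage_dry_mass_g = energy_need_kj / alga_energy_kj_per_g
    return forage_dry_mass_g * arginine_dry_frac

# 45 kg subadult on H. musciformis at a eutrophic site: ~11 g arginine/day
print(daily_arginine_g(2435, 4.3, 0.0194))
# Same turtle on native algae at an oligotrophic site: ~1.2 g arginine/day
print(daily_arginine_g(2435, 8.9, 0.0044))
```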
Native macroalgae have a fraction of the arginine content of invasive species ( Fig. 2 ), but offer more calories per unit mass (see online Supplemental Information ). Turtles foraging on invasive macroalgae in eutrophic areas would need roughly twice the dry mass they would require of native algae, thereby multiplying the arginine enrichment effect. For so-called superweeds like H. musciformis , this low energy–high arginine combination is the most extreme we observed. Considering that H. musciformis energy content can vary inversely with growth rate ( Guist Jr, Dawes & Castle, 1982 ), this may be a general result, and a topic for future research. Our estimates for turtle arginine consumption were often substantial, but the numbers could be even higher. For subadults we documented an average increase of 9.9 g arginine day -1 when shifting from native forage at oligotrophic sites to invasive forage at eutrophic sites. This number jumped to an average 21.8 g day -1 for adults. These numbers are based on our observed amino acid values and energetic demands. The metabolic rates we reference are baseline averages ( Jones et al., 2005 ), which would underestimate the dietary needs of rapidly growing turtles, migrating animals, or adult females amassing resources for vitellogenesis. Our calculated dietary intakes of arginine are therefore likely underestimates, making the already substantial increase even larger. There are no daily nutritional guides for wild green turtles ( Bjorndal, 1997 ). However, to put these numbers in context, they are well above the recommended dietary allowance for humans. Human adults (19–50 years) should consume 4.7 g of arginine and 51 g of total protein daily ( Institute of Medicine, 2005 ). Our estimated arginine intake for adult turtles could reach 28 g day -1 (online Supplemental Information ), which is half the recommended total daily protein for humans and 5 times the suggested arginine intake. Future studies can use our arginine intake estimates to guide treatments of turtles with FP tumors. Across green turtle populations, it is widely observed that FP occurs most frequently in eutrophied and otherwise impaired waterways ( Herbst, 1994 ; Van Houtan, Hargrove & Balazs, 2010 ). Our efforts here have largely been to demonstrate why this might occur, and to detail the ecological mechanisms. A logical next step is to repeat this study comparing tumors and forage items for other green turtle populations and other species. Additional next steps could be developing a monitoring plan to assess ecosystem risk for the disease in Hawaii and other ecological regions. Stable isotope analyses of tissues from U. lactuca or H. musciformis are an effective method for monitoring water quality in an integrative manner. For example, for macroalgae that uptake nitrogen from the water column, δ 15 N values above 6.0 point to a significant wastewater presence ( Dailer et al., 2010 ; Dailer, Smith & Smith, 2012 ). While these tests can reveal plant nitrogen sources, they do not comment on ecosystem nitrogen flux. Our test for nitrogen arginine sequestration ( Fig. 3 ) may be informative about nitrogen flux at a finer level than total tissue nitrogen; however, more research here is necessary. Combined stable isotope and amino acid analysis of macroalgae could therefore be a powerful and reasonably inexpensive tool ($120 for both tests) to monitor and understand eutrophication in coastal ecosystems. 
The relevance of this tool extends beyond turtle diseases to the ecosystem-based management of coral reefs, estuaries, and seagrass systems. Supplemental Information Online supplemental material This table provides the full metadata and amino acid results for the algae samples considered in this study. Table 2: This table provides full sample metadata for the 12 turtles from which 24 tissue samples were taken. DOI: 10.7717/peerj.602/supp-1 Additional Information and Declarations Competing Interests The authors have no competing interests or ethical conflicts. Author Contributions Kyle S. Van Houtan conceived and designed the experiments, performed the experiments, analyzed the data, contributed reagents/materials/analysis tools, wrote the paper, prepared figures and/or tables, reviewed drafts of the paper. Celia M. Smith conceived and designed the experiments, performed the experiments, reviewed drafts of the paper. Meghan L. Dailer and Migiwa Kawachi performed the experiments, reviewed drafts of the paper. Animal Ethics The following information was supplied relating to ethical approvals (i.e., approving body and any reference numbers): We have an IACUC permit for our program but we do not need it for this study because all the samples involved dead, stranded sea turtles. For this, however, we require a USFWS ESA permit (#TE-72088A-0). Funding This study was supported by a grant from the Disney Worldwide Conservation Fund to CMS and KSVH, and a Presidential Early Career Award in Science and Engineering to KSVH. The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript. Acknowledgements J Johnson conducted the amino acid analyses. E Cox, D Francke, T Jones, and N Sarto aided with tissue preparation; T Jones advised on the energetics calculations. C Henson and J Browning assisted with permits. C Stankis provided tumored green turtle photos. A Alfaro-Núñez, D Guo, A Page-Karjian, J Parr, S Pimm, J Polovina, A Rivero, and two anonymous reviewers made improvements to an earlier version of this manuscript.
Hawai'i's sea turtles are afflicted with chronic and often lethal tumors caused by consuming non-native algae "superweeds" along coastlines where nutrient pollution is unchecked. The disease that causes these tumors is considered the leading cause of death in endangered green sea turtles. The new research was just published in the scientific journal PeerJ. Turtles that graze on blooms of invasive seaweeds end up with a diet that is rich in a particular amino acid, arginine, which promotes the virus that creates the tumors. Scientists at the University of Hawai'i at Mānoa and their NOAA colleague estimate that adult turtles foraging at high-nutrient grazing sites increase their arginine intake 17–26 g daily, up to 14 times the background level. "For years, local ocean lovers have known that our green turtles have had awful tumors on their heads, eyes and front flippers," said UH Mānoa Marine Biology Professor Celia Smith, who worked with Kyle S. Van Houtan of NOAA's Turtle Research Program on this study. "Many hypotheses were offered to explain the tumors, but we kept coming back to the observation that urban reefs—those near dense populations—are the sites with greater numbers of sick turtles. We had no mechanism for this disease." More than 60 percent of turtles in Kāne'ohe Bay have been observed to bear tumors. Kihei, Maui, has been called a "ground zero" for fibropapillomatosis, the disease that is caused by a herpes virus and manifests as tumors in turtles. Humans appear unaffected by the disease. Van Houtan and colleagues previously described an epidemiological link between tumors and coastal eutrophication, that is, the enrichment of coastal waters with nutrients from land-based sources of pollution such as wastewater or agricultural fertilizers. This new study analyzed the actual tissues from tumored green turtles and the amounts of arginine in the dominant algae forage species from across Hawai'i. The analysis revealed remarkably high levels of arginine in tissues of invasive seaweeds harvested under nutrient-rich conditions, such as those affected by nitrogen from land-based pollution. These are the same conditions that promote algal blooms. The non-native algae "superweeds" grow so quickly when fertilized that some can double their weight in a period of two days. When found on typical healthy reefs with low nutrient inputs, the same invasive algae species had lower levels of amino acids, even though arginine levels were still elevated. A native algal species and favorite food of the green sea turtle called Pterocladiella capillacea did not synthesize anomalously high levels of arginine under low nitrogen conditions. "I've never had a research project where so many different tools have been used to evaluate a hypothesis and, in every case, the same complex answer is returned: excess nutrients in coastal waters drive blooms of specific invasive algae," Smith said. "These weeds then grow rapidly, dominate shallow water ecosystems, and store high levels of arginine in their tissues that trigger tumor growth for the grazing turtle population. Few could have imagined that an algal bloom could have such consequences up the food chain." Smith and her colleagues, including UH Mānoa's Meghan L. Dailer and Migiwa S. Kawachi, note that eutrophication of coastal waters goes beyond its influence on green turtle populations. Eutrophication is also associated with coral reef declines. 
"The native species we have as our limu, fish and corals evolved for millions of years in low nutrient environments," Smith said. "Any added nitrogen that enters our tropical coasts begins to alter the fundamental competition among species. With too much nutrient input, as we have seen on Maui, new dynamics of fast growth by non-native superweeds occurs. These weeds take over our reefs, and we tend to lose our native species." The biggest losers might be the gentle sea turtles, who are not looking beyond their next algae snack. "The honu, or green turtles, have a special status in Hawai'i—culturally and federally," Smith said. "But to prevent them from consuming superweeds that promote these tumors—their greatest known source of mortality, we need to manage aggressively all land-based sources nutrient pollution and to restore the turtle's native diet."
10.7717/peerj.602
Biology
Genes offer new insights into the distribution of giraffes
Bock, F. et al. (2014): "Mitochondrial sequences reveal a clear separation between Angolan and South African giraffe along a cryptic rift valley" - BMC Evolutionary Biology, DOI: 10.1186/s12862-014-0219-7 Journal information: BMC Evolutionary Biology
http://dx.doi.org/10.1186/s12862-014-0219-7
https://phys.org/news/2014-11-genes-insights-giraffes.html
Abstract Background The current taxonomy of the African giraffe ( Giraffa camelopardalis ) is primarily based on pelage pattern and geographic distribution, and nine subspecies are currently recognized. Although genetic studies have been conducted, their resolution is low, mainly due to limited sampling. Detailed knowledge about the genetic variation and phylogeography of the South African giraffe ( G. c. giraffa ) and the Angolan giraffe ( G. c. angolensis ) is lacking. We investigate genetic variation among giraffe matrilines by increased sampling, with a focus on giraffe key areas in southern Africa. Results The 1,562-nucleotide-long mitochondrial DNA dataset (cytochrome b and partial control region) comprises 138 parsimony informative sites among 161 giraffe individuals from eight populations. We additionally included two okapis as an outgroup. The analyses of the maternally inherited sequences reveal a deep divergence between northern and southern giraffe populations in Africa, and a general pattern of distinct matrilineal clades corresponding to their geographic distribution. Divergence time estimates among giraffe populations place the deepest splits at several hundred thousand years ago. Conclusions Our increased sampling in southern Africa suggests that the distribution ranges of the Angolan and South African giraffe need to be redefined. Knowledge about the phylogeography and genetic variation of these two maternal lineages is crucial for the development of appropriate management strategies. Background For more than 250 years, giraffe ( Giraffa camelopardalis ) taxonomy has attracted interest among scientists [ 1 ]-[ 3 ]. The descriptions of the nine giraffe subspecies are primarily based on pelage patterns, characteristics of ossicones and their geographic distribution across the African continent [ 4 ],[ 5 ]. However, pelage patterns are highly variable, and their inconsistent recognition has confused taxonomic assignments [ 6 ]-[ 8 ]. Recent efforts using molecular genetic techniques are beginning to clarify giraffe taxonomy [ 9 ]-[ 11 ]. In contrast to studies on elephant [ 12 ],[ 13 ], and other African wildlife [ 14 ],[ 15 ], a range-wide genetic analysis of giraffe is lacking [ 9 ]-[ 11 ]. A phylogenetic study using data of six subspecies (Angolan giraffe ( G. c. angolensis ), South African giraffe ( G. c. giraffa ), West African giraffe ( G. c. peralta ), reticulated giraffe ( G. c. reticulata ), Rothschild’s giraffe ( G. c. rothschildi ) and Masai giraffe ( G. c. tippelskirchi )) based on nuclear microsatellites and mitochondrial (mt) DNA sequences suggested that some of the subspecies may actually represent distinct species [ 9 ]. Another study of the giraffe subspecies historically classified as Thornicroft’s giraffe ( G. c. thornicrofti ), which is restricted to Zambia’s South Luangwa valley, showed that this population has a distinct mtDNA haplotype that is nested within the clade of Masai giraffe [ 11 ]. Genetic analysis suggested that the Kordofan giraffe ( G. c. antiquorum ) in Central Africa is closely related to the West African giraffe [ 10 ], while the relationship of the Nubian giraffe ( G. c. camelopardalis ) is unclear due to a lack of any genetic analyses. In southern Africa, two subspecies of giraffe live in close proximity. South African giraffe have been reported to occur naturally throughout southern Botswana, southern Zimbabwe, southwestern Mozambique, northern South Africa and southeastern Namibia [ 7 ].
Giraffe of northwestern and north-central Namibia have been categorized as Angolan giraffe [ 1 ],[ 16 ] but the taxonomic classification of giraffe from northern Botswana and northeastern Namibia remains uncertain. Angolan giraffe is thought to occur also in southern Zambia, western Zimbabwe and central Botswana [ 16 ]. Both giraffe populations have historically been classified as either G. c. giraffa or G. c. angolensis , or most recently as a hybrid of G. giraffa / G. angolensis , depending on the taxonomic reference [ 6 ],[ 8 ]. The uncertainty of giraffe taxonomy in southern Africa affects conservation efforts, as individuals are being translocated both within and between different populations and countries across Africa without knowledge of their taxonomic status. Frequently, these translocations are driven by economic motives, such as improving regional tourism, rather than biodiversity conservation [ 17 ]. Conservation policies depend on reliable information about the taxonomic status and about genetic variability of locally adapted populations. Clarifying the relationship and distribution of the Angolan and South African giraffe is therefore particularly relevant for conservation efforts of the newly established Kavango-Zambezi Transfrontier Conservation Area (KAZA) that includes northeastern Namibia and northern Botswana. Although no targeted census of giraffe has been conducted, the size of Botswana's northern giraffe population is estimated to have dropped over the last decade from >10,000 to <4,000 individuals [ 18 ]. The number of giraffe in Bwabwata National Park in Namibia was decimated in the 1970s and 1980s due to illegal hunting but has recovered since then to >150 individuals [ 19 ]. We here present a population genetic analysis of mitochondrial cytochrome b (cytb) and partial control region (CR) sequences for eight of the nine currently described giraffe subspecies. Our sampling focuses on geographic regions that have not been analyzed before, particularly in southern Africa: Namibia (Bwabwata National Park – BNP, Etosha National Park – ENP) and Botswana (Chobe National Park – CNP, Moremi Game Reserve – MGR, Nxai Pans – NXP, Vumbura Concession – V, Central Kalahari Game Reserve – CKGR), but also central Africa's Democratic Republic of Congo (Garamba National Park – GNP) (Table 1 , Additional file 1 : Table S1). Our dense sampling includes many key areas of the giraffe distribution range in southern Africa and therefore allows for a high-resolution analysis of the phylogeography of South African and Angolan giraffe. Furthermore, it allows assessing the impact of a “cryptic” rift valley, which runs northeast to southwest across Botswana from Zambia [ 20 ],[ 21 ], potentially having acted as a barrier to giraffe dispersal. Table 1: Origin, abbreviation, number of individuals (N) and subspecies designation of analyzed giraffe sequences. Results Mitochondrial DNA sequences from the cytochrome b (cytb) gene and partial control region (CR) were successfully amplified from all samples. The cytb alignment was 1,140 nucleotides (nt) long and showed no gaps or ambiguous sites. We also sequenced the L-strand of the CR for a length of 786/787 nt, excluding the highly repetitive poly-cytosine region. In order to match our newly obtained sequences with published data, the length of the CR alignment was limited to 422 nt. The stringent 422 nt CR alignment did not contain gaps.
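The alignment bookkeeping described here (a 1,140 nt cytb block concatenated with a 422 nt CR block, giving the combined 1,562 nt matrix with its 138 parsimony-informative sites reported below) is straightforward to reproduce. The following is a minimal sketch assuming Biopython and hypothetical file names, not the authors' actual pipeline:

```python
# Minimal sketch: concatenate per-marker alignments and count
# parsimony-informative sites, as for the 1,562 nt cytb+CR dataset.
# File names are hypothetical; assumes Biopython is installed and both
# FASTA alignments list the same individuals in the same order.
from Bio import AlignIO

cytb = AlignIO.read("cytb_aligned.fasta", "fasta")   # 1,140 nt block
cr = AlignIO.read("cr_aligned.fasta", "fasta")       # 422 nt block
combined = cytb + cr                                 # 1,562 columns

def parsimony_informative(alignment):
    """A site is parsimony informative if at least two character
    states each occur in at least two sequences (gaps/Ns ignored)."""
    count = 0
    for i in range(alignment.get_alignment_length()):
        column = [c.upper() for c in alignment[:, i] if c.upper() not in "-N?"]
        tallies = {state: column.count(state) for state in set(column)}
        if sum(1 for n in tallies.values() if n >= 2) >= 2:
            count += 1
    return count

print(parsimony_informative(combined))  # the paper reports 138 such sites
```

The same column iteration can be extended to flag ambiguous columns, which, as noted in the Methods, were removed before the haplotype network analysis.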
The CR was relatively conserved outside this 422 nt region and until the poly-cytosine sequence, yielding only three variable sites among 20 giraffe individuals that represented all clades. All sequences conformed to the reading frame, length, stop codon and other properties of a functional protein coding gene or the control region that are observed in an established mitochondrial genome [EMBL: NC012100]. Sequences with the same properties were also obtained using the alternative primer pair for amplification and sequencing. Thus, it is reasonable to assume that no nuclear mitochondrial insertions (numts) were sequenced. The inclusion of two okapi ( Okapia johnstoni ) sequences introduced unambiguously placed gaps in the CR alignment, which were ignored in all subsequent analyses. The combined (cytb plus CR) alignment was 1,562 nt long and contained 138 parsimony informative sites. The alignment included 161 giraffe and two okapi individuals, of which 102 giraffe were newly sampled (Table 1 , Additional file 1 : Table S1). The Bayesian analysis of mitochondrial sequence data recovered the matrilines of all giraffe subspecies to be monophyletic with respect to each other, although not all nodes received posterior support values above 0.95 (Figure 1 ). The most obvious pattern is a well-supported north-south split, with the southern subspecies Angolan giraffe, South African giraffe, and Masai/Thornicroft's giraffe being separated from the northern subspecies Kordofan giraffe, reticulated giraffe, Rothschild's giraffe and West African giraffe. Figure 1 Phylogenetic tree based on mitochondrial DNA encompassing 161 giraffe individuals. The topology corresponds to a maximum clade credibility tree obtained from BEAST, but branch lengths were calculated by maximum likelihood in Treefinder. Each dot represents one individual giraffe, colors code for the respective subspecies/population. "z" denotes captive (zoo) individuals, asterisks at branches indicate Bayesian posterior support >0.95. Abbreviations for the samples are explained in the text and in Table 1 . Using a molecular clock, BEAST estimates the deepest divergence time among giraffe matrilines between the northern and southern clade at ca. 2.0 million years ago (Ma) (Figure 2 ). This is followed by the divergence of a mtDNA clade containing Angolan giraffe, South African giraffe and Masai/Thornicroft's giraffe at ca. 1.4 Ma (Table 2 , Figure 2 ). A northern giraffe clade, which includes the Kordofan giraffe, reticulated giraffe, Rothschild's giraffe, and West African giraffe, diverged at about 0.8 Ma (Table 2 ). Divergences within each subspecies are estimated to have occurred between 100 and 400 thousand years (ka) ago. Note that the Bayesian posterior support values for some of the nodes at the subspecies level were below 0.95 (Figure 2 ). Figure 2 Maximum clade credibility tree of the major giraffe populations as reconstructed by Bayesian analysis conducted in BEAST. Blue bars indicate 95% highest posterior density intervals for node ages, asterisks denote posterior probability >0.95. Scale on the bottom represents divergence time (million years ago).
Table 2: Divergence time estimates (median heights and 95% highest posterior density intervals) obtained from BEAST based on 1,565 nt mtDNA. Giraffe from Luangwa Valley National Park, Zambia, which are formally classified as Thornicroft's giraffe, form a uniform but not separate matrilineal group within the variation of Masai giraffe. Note that the divergence between the southern and northern clade occurs between populations south and north of the equator that are in close geographic proximity to each other (Masai giraffe, reticulated giraffe, Rothschild's giraffe). The relative clustering of the northern mtDNA clades (West African giraffe, Rothschild's giraffe, Kordofan giraffe and reticulated giraffe) remains uncertain due to low posterior support values for some of the nodes (Figure 1 , Figure 2 ). Nine database individuals that were assigned to a particular subspecies previously [ 9 ] grouped at unexpected positions in our phylogenetic analysis (numbered individuals in Figure 1 ). Two individuals of South African giraffe (# 1 and 2) are placed within Angolan giraffe but not with other South African giraffe individuals. Likewise, two individuals (# 3 and 4) of Masai giraffe are placed within South African giraffe, two Rothschild's giraffe individuals (# 5 and 6) grouped with Masai giraffe, one Masai giraffe (# 7) fell basal to reticulated giraffe, and two reticulated giraffe (# 8 and 9) grouped with Rothschild's giraffe. Additional information on the geographic origin of each individual sequence is given in Additional file 1 : Table S1. Currently, there are four giraffe subspecies recognized south of the equator in Africa: Masai/Thornicroft's giraffe, South African giraffe, and Angolan giraffe, the latter two occurring in close proximity in Botswana. In our data, Angolan giraffe individuals from the Central Kalahari Game Reserve in central Botswana grouped with Angolan giraffe from the Etosha National Park in Namibia, which was expected from their geographic origin and previously assumed classification. One individual from the Etosha National Park fell into the Central Kalahari Game Reserve mtDNA clade. Unexpectedly, 46 individuals sampled as Angolan giraffe from Chobe National Park, Nxai Pans, Vumbura Concession and Moremi Game Reserve in northern Botswana, and Bwabwata National Park in northeastern Namibia grouped with South African giraffe from the Khamab Kalahari Reserve in South Africa. These hitherto unsampled regions thus harbor mtDNA lineages of the South African giraffe subspecies and not of Angolan giraffe. Populations carrying the mitochondrial haplotype of South African giraffe thus geographically enclose the Angolan giraffe of the Central Kalahari Game Reserve from the north and south (Figure 3 ). Figure 3 Map of sub-Saharan Africa. A : Distribution range of giraffe (yellow patches) and sampling locations (abbreviations are explained in Table 1 ). Colors show genetically identified subspecies (coding as in Figure 1 ). B : Depiction of southern African giraffe populations and location of geographic boundaries. O-K-Z: Owambo-Kalahari-Zimbabwe epeirogenic axis, O-B: Okavango-Bangweulu axis. Individuals from Bwabwata National Park formed a separate group with its own mtDNA haplotype (Figure 1 , Figure 4 ). Figure 4 Statistical parsimony haplotype network of the giraffe and okapi sequences. The sub-networks of different giraffe subspecies do not connect at the 95% connection probability limit.
Different populations having identical haplotypes are indicated by pie-sections. Black rectangles indicate unsampled haplotypes. Abbreviations as in Table 1 . To assess differentiation between populations, pairwise F ST values were calculated (Table 3 ). The overall population differentiation of mtDNA was high, with F ST values ranging from 0.672 (Masai giraffe and Thornicroft's giraffe) to 0.998 (Rothschild's giraffe and Thornicroft's giraffe). The pairwise F ST value between South African and Angolan giraffe was 0.929, showing a clear differentiation between those two populations, despite their close geographic proximity. Table 3: Genetic differentiation (pairwise F ST values) among the eight subspecies as defined by mtDNA clades. A haplotype network analysis supports the strong divergences among most giraffe mtDNA clades (Figure 4 ), as sub-networks representing the different subspecies are not connected to each other at the 95% connection probability limit. Corresponding to our phylogenetic analysis (Figure 1 ), Thornicroft's giraffe are an exception, as individuals from the Luangwa valley share a distinct haplotype that falls within the variation of Masai giraffe. The networks also demonstrate the considerable amount of variation within most subspecies: Masai/Thornicroft's and Angolan giraffe have the highest numbers of haplotypes (14 and 13, respectively; Table 4 ). Kordofan and reticulated giraffe show the highest haplotype diversities, 0.964 ± 0.077 and 0.972 ± 0.064, respectively – almost every individual has its own mitochondrial haplotype. In contrast, Thornicroft's, West African and Rothschild's giraffe have the lowest number of haplotypes, the lowest haplotype diversity, and the lowest nucleotide diversity (Table 4 ), corresponding to the short branch lengths of these three mtDNA clades (Figure 1 ). Although the overall mitochondrial variation in South African giraffe was comparable to that of other clades (13 haplotypes, H d = 0.769 ± 0.050; Table 4 ), it is noteworthy that one haplotype was common and shared among individuals from different reserves or parks (Vumbura Concession, Chobe National Park, Moremi Game Reserve, Nxai Pans, all in Botswana) (Figure 4 ). Table 4: Diversity indices per subspecies for the mtDNA. Rothschild's giraffe, which is currently considered "endangered" on the IUCN Red List [ 22 ], has two haplotypes among 11 individuals and low nucleotide and haplotype diversity (0.00012 ± 0.00009 and 0.182 ± 0.144, respectively; Table 4 ). Discussion The analyses of 1,562 nt of concatenated mitochondrial sequence data identified seven well-separated and reciprocally monophyletic giraffe clades. The deepest divergence, as estimated by a Bayesian BEAST analysis, was found between a northern clade, comprising West African, Kordofan, reticulated, and Rothschild's giraffe, and a southern clade, comprising Angolan, South African, and Masai/Thornicroft's giraffe, despite the close geographic proximity of populations of both clades in East Africa. Notably, Masai giraffe are geographically much closer to northern populations than to the southern African Angolan and South African giraffe. The matrilineal clades identified are largely congruent with previously named subspecies and reflect the geographic structure seen among giraffe. The Thornicroft's giraffe has been described to only occur in the Luangwa Valley National Park.
Divergences between Thornicroft's and Masai giraffe are shallow, which is why the former was proposed to be subsumed into the Masai giraffe's clade [ 11 ]. However, these lineages are on discrete evolutionary trajectories due to their geographic isolation. The shallow divergence might thus reflect retention of ancestral polymorphisms, rendering mtDNA a marker with limited diagnostic resolution [ 23 ],[ 24 ]. However, the giraffe from Luangwa Valley National Park have a unique mitochondrial haplotype (Figure 4 ). This should be taken into account in giraffe conservation and management, in particular for ecological, spatial and behavioral aspects. A previously suggested placing of the South African giraffe within the variation of the Masai giraffe [ 9 ] could not be confirmed. Our mtDNA tree shows the same topology as found by Hassanin and colleagues [ 10 ]. Assignment of individual giraffe to the wrong subspecies is not unusual and could be explained by natural migration or human-induced translocation. It is noteworthy, however, that every single one of the newly sampled 102 individuals was associated with the expected subspecies. Therefore, our data do not indicate large-scale migration of females from one subspecies to another or confusion of populations by human-induced translocation of females. Our new sampling effort of 102 individuals from well-defined areas and populations, together with the data analyses, indicates that individuals previously assigned to a clade different from the individual's designation [ 9 ] might be a result of mtDNA introgression or of inadequate subspecies identification. This highlights the importance of accurate sample collection and identification. From previous studies [ 2 ] and historical assumptions [ 6 ], it was expected that Botswana and Namibia contain Angolan giraffe, and that the South African giraffe occurs further south and east in South Africa and Zimbabwe [ 2 ],[ 6 ],[ 25 ]. However, our data suggest a narrow zone separating Central Kalahari Game Reserve in Botswana, which is inhabited only by Angolan giraffe, from Chobe National Park, Moremi Game Reserve, Nxai Pans Park, and Vumbura Concession in northern Botswana, which are inhabited by South African giraffe. The central and northwestern giraffe populations in Namibia have formerly been assigned to Angolan giraffe [ 1 ],[ 16 ]. Based on our results, the Bwabwata National Park population in northeastern Namibia unambiguously represents South African giraffe. The Bwabwata National Park population is geographically close (<100 km) to Chobe National Park and Vumbura Concession (also inhabited by South African giraffe), whereas the nearest natural Angolan giraffe population is >500 km to the west (Etosha National Park) or >350 km to the south (Central Kalahari Game Reserve). Pairwise F ST values of mtDNA sequences are expected to exceed those from nuclear markers in cases of strong female philopatry and male-biased gene flow or temporal nonequilibrium after (recent) habitat fragmentation. In that case, mtDNA gene trees would show reciprocal monophyly and geographic structuring (as seen here), but nuclear loci would not support this [ 26 ]. The oldest fossils show that the giraffe species complex already existed about one Ma ago [ 27 ]. According to our divergence time estimates (Table 2 , Figure 2 ), giraffe diverged into distinct populations that are designated as subspecies during the Pleistocene (2.6 Ma to 12 ka).
This is considerably older than divergence times between closely related species of Ursus (~600 ka) estimated by independently inherited nuclear introns [ 28 ], of Pan (~420 ka) using multilocus analysis including mitochondrial, nuclear, X- and Y-chromosomal loci [ 29 ], or of Canis (~900 ka) based on mitochondrial genes and nuclear loci [ 30 ]. Due to the lack of sequence data from giraffe fossils and closely related and dated outgroup fossils, our calibration points (5 and 9 Ma, respectively) might lead to an overestimation of divergence times within giraffe. However, the clear intraspecific structuring into region-specific maternal clades supports an early divergence within giraffe. However, the mitochondrial gene tree might differ from the species tree [ 31 ], and a multilocus approach will be necessary to estimate divergence times representative of the species as a whole. Support for the early divergence time estimates comes from haplotype networks showing that numerous substitutions accumulated between matrilineal clades preventing connection at the 95% probability limit (Figure 4 ). Furthermore, there is considerable variation within most giraffe subspecies that can only have developed over long time periods. Finally, signs of haplotype sharing between subspecies are rare (Figure 4 ), suggesting that maternal clades have been separated from each other for a considerable amount of time and that female gene flow among those clades is limited. However, it is not clear if the nine deviating individuals are misidentified samples, or if they result from human translocation or introgression of mtDNA among different giraffe populations. From 26 Masai/Thornicroft's giraffe individuals, two share mtDNA haplotypes with South African giraffe, and one has a unique haplotype similar to reticulated giraffe (Figure 1 , Figure 4 ). Evidence from autosomal microsatellites supports the clear structuring into subspecific groups, although limited signs of allele sharing were found among some populations [ 9 ]. Today, the majority of giraffe populations analyzed are widely separated and geographically isolated. This is a consequence of increasing agricultural practices causing habitat loss and fragmentation, of human population and settlement growth, and of illegal hunting. Historically, and during the Pleistocene, the distribution ranges may have been more contiguous. Yet, during the Pleistocene, some barriers must have limited female gene flow among different giraffe populations. The distribution of many African ungulates is correlated closely with the distribution of savannah habitat, which in turn is strongly influenced by climatic conditions. The African climate experienced wide changes during the Pleistocene, resulting in recurrent expansions and contractions of savannah habitat and tropical forest. An increase of tropical forest across Central Africa during warm and wet periods (pluvials) around the equator might explain the north-south split seen today in giraffe and other ungulates [ 32 ],[ 33 ]. In the northern parts of the distribution range, the expansion of Lake Mega-Chad about 8,000 to 3,000 years ago [ 34 ] might have affected recent giraffe dispersal [ 10 ]. We dated the divergence between the Angolan and South African giraffe matrilines in Botswana to 1.4 Ma. This deep, early Pleistocene divergence exists despite their close geographic proximity: distances up to 300 km can be travelled by giraffe [ 35 ].
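To make the dating logic concrete: under a strict molecular clock, a fossil-calibrated split fixes the substitution rate, and any observed pairwise distance then converts directly into a divergence time. The study itself used a relaxed lognormal clock in BEAST (see Methods), so the toy calculation below only illustrates the principle; both distances are invented placeholders, not values from the paper.

```python
# Toy illustration of clock-based dating logic, not the authors' analysis.
# The 5 Ma Pudu/Rangifer calibration is from the Methods; the two genetic
# distances below are hypothetical placeholders.

calib_distance = 0.10   # subs/site between Pudu and Rangifer (assumed)
calib_age_ma = 5.0      # fossil-calibrated split used in the paper

# Under a strict clock, distance accumulates along both diverging lineages:
rate = calib_distance / (2 * calib_age_ma)   # subs/site/Ma per lineage

observed = 0.028        # hypothetical Angolan vs South African giraffe distance
divergence_ma = observed / (2 * rate)
print(f"estimated divergence: {divergence_ma:.2f} Ma")  # 1.40 Ma here
```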
Today, no obvious geographic barrier appears to separate these two subspecies. Thus, we propose a historical "cryptic" rift valley as explanation for the pattern seen in Botswana, as outlined below. A known geographic boundary follows the Okavango River (Figure 3 B) and Gumare Fault in the northwest of Botswana and extends east to the Thamalakhane Fault south of the Okavango pans and the Ntwetwe Pan. The Owambo-Kalahari-Zimbabwe epeirogenic axis (O-K-Z; Figure 3 B) also forms a subtle yet distinct geographic boundary [ 21 ],[ 20 ] between Angolan and South African giraffe populations. Today, this area only holds seasonal water and thus does not seem to be an obvious barrier to dispersal. However, it could have been a barrier during the Pleistocene [ 21 ],[ 36 ]. The Okavango-Bangweulu axis (O-B; Figure 3 B) is the southern extension of the East African Rift System and could have acted as further geographic separator when mountains were lowered and drainage systems formed, resulting in the north-east split of giraffe matrilines. The persistence of these conditions might have been reinforced if an early Pleistocene interglacial coincided with a maximum extent of Palaeo-Lake Makgadikgadi, which ended likely before the Middle Pleistocene (~970 to 500 ka) [ 21 ],[ 36 ]. It has been suggested that a "cryptic" rift valley runs northeast to southwest across Botswana from Zambia, with faulting ramifying southwest, best represented by the development of the Fish River canyon in southern Namibia [ 37 ]. There were massive lake systems in northeast Botswana, but these dwindled by 500 to 600 ka (Palaeo-Lake Thamalakhane) [ 21 ]. Cotterill [ 36 ] argues that the above-described phylogeographic anomaly is a result of an expansion of moist, evergreen forests in an interglacial, e.g., during warm and wet conditions. Such a "cryptic" rift valley can also explain distributions of other animals that are similar to the distribution of giraffe mtDNA haplotypes: African forest elephant ( Loxodonta cyclotis ) haplotypes are not within the variation of the African elephant ( L. africana ) from central Namibia (and southeast Botswana), but are confined only to the populations in northern Botswana and northwestern Zambia [ 12 ]. Phylogeographic divergences between southeast and northeast representatives of the Damara dikdik ( Madoqua damarensis ) and the impala ( Aepyceros petersi ) [ 38 ] both exhibit distributions congruent with that of the Angolan giraffe in Namibia, as a result of Pleistocene climatic conditions and/or major changes in the larger rivers on the south-central African plateau during the Pleistocene [ 39 ]. Finally, the estimated population expansion of the Okavango Red lechwes ( Kobus leche ), a floodplain specialist, is explained by expansion of floodplain habitats following contraction of the northeast Botswana mega-lakes in the Middle Pleistocene [ 36 ]. Thus, a vast mosaic of aquatic habitats and moist forest persisted in the shallow rift valley of northeast Botswana through much of the Pleistocene [ 21 ]. This scenario poses a conceivable explanation for the formation of the distribution of Angolan and South African giraffe maternal lineages as currently seen in Botswana. Today, no obvious geographic barrier appears to separate these two subspecies. Ecological or behavioral factors, such as a specific mate recognition system [ 40 ], possibly differentiated pelage pattern and female philopatry may maintain limited genetic admixture.
A major episode of aridity during a Pleistocene glacial period may explain the mtDNA lineage divergence within Angolan giraffe, with one population being restricted to Namibia (including Etosha National Park) and one being located in central Botswana (Central Kalahari Game Reserve). Few large mammals show such clear phylogeographic evidence of the strong influence of geological landforms, in the form of genetic depauperation or changes in the extant distributions across southern Angola, northeastern Botswana and southwestern Zambia [ 39 ]. Mitochondrial DNA is inherited maternally, from mother to offspring. It allows tracing the maternal lineage and reflects female movements, or the lack thereof, in a phylogeographic context. While we acknowledge the pitfalls of only investigating a small, uniparentally inherited part of the genome [ 26 ], mtDNA nevertheless enabled us to specifically analyze the maternal lineages of giraffe subspecies and also include database sequences of reticulated giraffe, for which samples are lacking. Reticulated giraffe are interesting due to their high variability and close proximity to subspecies of the southern clade. Moreover, it has been shown previously that phylogenetic trees based on mtDNA and nuclear microsatellites are congruent in giraffe [ 9 ], suggesting that the matrilineal structuring does not differ considerably from that of the species as a whole. The clear structure of the mtDNA clades might thus allow inferring that giraffe populations (and not only the matrilines) have been separated from each other for a considerable amount of time. Alternatively, mtDNA structure might reflect the tendency of females to stay at or return to their place of birth (philopatry or site fidelity). Although female philopatry and male-biased dispersal have not been systematically studied in wild giraffe, they are a general pattern in many mammals [ 41 ]. However, long-term field observations by one of the authors (JF) support fidelity of both sexes of giraffe to a particular region, because the populations of desert-dwelling Angolan giraffe in northwest Namibia remained without contact and genetic admixture for at least five years, despite close proximity to other giraffe in Etosha National Park approximately 150 to 200 km east. The effects of male-biased gene flow on phylogeographic structuring of a widely distributed species have recently been demonstrated in bears [ 42 ]. To further investigate if giraffe represent one species with matrilineal structuring or a multi-species complex, and to analyze the extent of mitochondrial and nuclear discordance [ 43 ], future research must incorporate multiple independently inherited autosomal loci. The differences in pelage pattern observed among giraffe from different regions might reflect nuclear variation, indicative of separation between subspecies also at biparentally inherited parts of the genome. Moreover, markers from the paternally inherited Y chromosome would be beneficial to specifically study male gene flow to recover a potentially contrasting structuring of the patriline. If giraffe exhibited male-biased dispersal and if several species were involved, female-specific mtDNA is predicted to be a marker with high introgression rates, showing insufficiently diagnostic resolution on species delimitation [ 23 ]. Conclusions Enhanced sampling from key regions of the giraffe distribution range shows a clear matrilineal structuring of giraffe into distinct clades.
The genetic analyses support a clear north-south split, separating two major matrilineal clades in giraffe (southern and northern clade). We also found a sharp east-west delineation between Angolan and South African giraffe, in an area in northern Botswana that has not been genetically investigated before. Our study shows for the first time that South African giraffe are distributed in different parks in Botswana, north of their previously known distribution range. Biparentally and/or paternally inherited sequence markers will be the next step to fully understand the subspecies/species structure in this widespread charismatic African mammal. Methods We collected giraffe tissue samples from seven of nine currently described subspecies (Table 1 ) ( G. c. angolensis , G. c. giraffa , G. c. tippelskirchi , G. c. antiquorum , G. c. rothschildi , G. c. peralta , G. c. thornicrofti ) and included published data for G. c. reticulata (Additional file 1 : Table S1) in our analyses. In August 2009, samples for seven subspecies were collected using remote delivery biopsy darting from free-ranging giraffe in major giraffe populations in northern and central Botswana: Moremi Game Reserve (MGR), Chobe National Park (CNP), Central Kalahari Game Reserve (CKGR) and Nxai Pans (NXP). In 2013, samples were collected from the Vumbura Concession (V) and northern Okavango Delta in Botswana, and from Bwabwata National Park (BNP) in northeastern Namibia (Figure 3 ). Additional samples were collected in collaboration with conservation partners in Chad, Democratic Republic of Congo, Niger, South Africa, Tanzania and Uganda (Table 1 , Additional file 1 : Table S1). Skin biopsies were stored at room temperature in a tissue preservative buffer [ 44 ] with glutaraldehyde prior to DNA isolation. Whole genomic DNA was extracted from tissue and blood using standard phenol/chloroform extraction [ 45 ]. The complete cytb gene and a partial CR were PCR amplified and sequenced with newly designed giraffe-specific PCR primers that were constructed from an existing mitochondrial genome of the giraffe [EMBL AP003424]. The 1,140 nt long cytb gene was amplified with the primer pair 5'-TGAAAAACCATCGTTGTCGT-3' and 5'-GTGGAAGGCGAAGAATCG-3' and the control region (422 nt) was amplified with the primer pair 5'-TGAAAAACCATCGTTGTCGT-3' and 5'-GTGGAAGGCGAAGAATCG-3'. In rare cases where amplification or sequencing produced unintelligible sequences or sequences with poor quality, mitochondrial-specific sequences were obtained with an alternative primer pair (5'-GACCCACCAAAATTTAACACAATC-3' and 5'-GTATGAAGTCTGTGTTGGTCGTTG-3'). PCR amplification of mtDNA sequences was performed with 10 ng genomic DNA using the VWR Mastermix containing Amplicon-Taq (VWR International GmbH, Darmstadt, Germany) according to the following protocol: 6 μL 2× mastermix incl. Taq, 0.25 μL 100× bovine serum albumin, 0.4 μL 10 pmol/μL each forward and reverse primer, 6.45 μL desalted water, plus template DNA. PCR conditions were as follows: initiation at 95°C for 5 min, 35 cycles of denaturation (at 95°C for 30 s), annealing (at 50°C for 30 s) and elongation (at 72°C for 1 min), and a final elongation step at 72°C for 5 min. The PCR products were diluted in water and cycle sequencing was done with the BigDye terminator sequencing kit 3.1 (Applied Biosystems, Foster City, California). Excess dye was removed with the BigDye XTerminator Purification Kit (Applied Biosystems). Purified products were analyzed on an Applied Biosystems ABI 3730 DNA Analyzer [EMBL: HG975087-HG975290].
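Per-reaction recipes like the 13.5 μL mix quoted above are usually scaled into a master mix when many field samples are processed at once. A small sketch using the per-reaction volumes from the text; the 10% pipetting surplus is a common lab convention and an assumption here, not part of the published protocol:

```python
# Scale the per-reaction PCR mix quoted in the Methods to N samples.
# Volumes (uL) are from the protocol text; template DNA is added per tube.
per_reaction_ul = {
    "2x mastermix (incl. Taq)": 6.0,
    "100x BSA": 0.25,
    "forward primer (10 pmol/uL)": 0.4,
    "reverse primer (10 pmol/uL)": 0.4,
    "desalted water": 6.45,
}

def master_mix(n_samples, surplus=0.10):
    """Total volume of each reagent for n_samples, with pipetting surplus."""
    scale = n_samples * (1 + surplus)
    return {reagent: round(vol * scale, 2) for reagent, vol in per_reaction_ul.items()}

for reagent, vol in master_mix(96).items():
    print(f"{reagent}: {vol} uL")
```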
Our data set was complemented with published sequences from databases (listed in Additional file 1 : Table S1), e.g. from [ 9 ],[ 10 ],[ 46 ]. Sequences were manually edited in Geneious version 5.6.4 (Biomatters, Auckland, New Zealand) and aligned with ClustalX [ 47 ]. The corresponding sequences from two okapis ( Okapia johnstoni ) database samples [EMBL: JN632674, HF571214, HF571175] were used as outgroup. TCS 1.21 [ 48 ] was used to infer statistical parsimony haplotype networks with the connection probability limit set to 95%. Columns containing ambiguous sites were removed from the alignment and gaps were treated as a fifth state. DnaSP 5.10 [ 49 ] was used for the calculation of nucleotide diversity, number of haplotypes and haplotype diversity, and Arlequin ver 3.5 [ 50 ] for pairwise F ST values. Inkscape 0.48 was used to improve tree and network graphics. For divergence time estimations, mtDNA sequences from suitable ruminants ( Pudu puda , Rangifer tarandus , Muntiacus muntjak and Cervus elaphus ) were obtained from EMBL/GenBank (Additional file 1 : Table S1). The split between Pudu puda and Rangifer tarandus was set to 5 Ma and between Muntiacus muntjak and Cervus elaphus to 9 Ma according to the fossil record [ 46 ]. A Bayesian phylogenetic tree including all 161 giraffe individuals and two okapis was estimated in BEAST v1.7.5 [ 51 ]. Branch lengths were calculated on the BEAST tree topology in TREEFINDER version of March 2008 using a maximum likelihood approach [ 52 ]. Coalescent-based divergence times were estimated in BEAST on a restricted subset of the giraffe individuals in order to avoid an imbalance between taxon sampling of giraffe and outgroups. The subset included one representative of each subspecies and major population. We used the HKY + I + G substitution model identified as best fitting by jModelTest [ 53 ], a lognormal relaxed clock with a uniform prior on the substitution rate, and ran the program for 2×10 8 generations. Convergence was confirmed in Tracer v1.5. Availability of supporting data DNA sequences are deposited at GenBank under the accession numbers [EMBL: HG975087-HG975290]. Authors' contributions FB and TB performed the molecular lab work. FB, TB, and AJ performed the data analysis. AT, AM, FD, and JF obtained the samples. AJ, FB, JF, and TB wrote the manuscript. All authors read and approved the final version of the manuscript. Additional file Abbreviations Cytb: Cytochrome b CR: Control region H d : Haplotype diversity ka: Thousand years KAZA: Kavango-Zambezi Ma: Million years mt: Mitochondrial nt: Nucleotides numts: Nuclear mitochondrial DNA O-K-Z epeirogenic axis: Owambo-Kalahari-Zimbabwe epeirogenic axis O-B axis: Okavango-Bangweulu axis PCR: Polymerase chain reaction
The Giraffe (Giraffa camelopardalis), a symbol of the African savanna and a fixture on every safari itinerary, is a fascinating animal. However, unlike many of the continent's other wild animals, these long-necked giants are still rather poorly studied. Based on their markings, distribution and genome, nine subspecies are recognized – including the two subspecies Angola Giraffe (Giraffa c. angolensis) and South African Giraffe (Giraffa c. giraffa). South African Giraffes occur farther north than previously assumed Like most other giraffes, these subspecies are now mainly found in nature reserves. Until recently, scientists assumed a clear demarcation of their ranges: Angola Giraffes occur in Namibia and northern Botswana, while South African Giraffes reside in southern Botswana and South Africa. "However, according to our studies, the distribution areas prove to be much more complex. South African Giraffes also occur in northeastern Namibia and northern Botswana, and Angola Giraffes can be found in northwestern Namibia and southern Botswana, as well," explains the study's author, Friederike Bock from the Biodiversity and Climate Research Center (BiK-F). A look at the new distribution map reveals the presence of a population of Angola Giraffes in the Central Kalahari Game Reserve, the world's second-largest national park, nestled between two populations of the South African Giraffe, with both subspecies living side by side. Subspecies were the result of early geographic separation According to the research team, the fact that two genetically distinct subspecies could develop within the same region may be explained by the local geographic conditions that prevailed approximately 500,000 to two million years ago. Back then, the mountain range along the East African Rift Valley was sinking, creating vast wetlands and lakes, such as the palaeo-lake Makgadikgadi. According to Professor Dr. Axel Janke from the BiK-F, "these large bodies of water may have separated the populations for long periods of time. Moreover, female giraffes likely do not migrate across long distances, thereby contributing to a clear separation of the maternal lines." Today, no barriers remain that would prevent the two subspecies from mingling; whether they actually do so is a question for further genetic analyses. Angola and South African Giraffes can be uniquely identified by their maternal gene profile For the study, the researchers created a profile of the subspecies' mitochondrial DNA, using tissue samples from about 160 giraffes from various populations across the entire African continent. On the basis of this genetic material, inherited from the maternal side, the often similarly marked subspecies can be uniquely identified genetically and the relationships between various populations can be clearly demonstrated. "Our focus was on giraffes in southern Africa, in particular in Botswana and South Africa. There, we sampled populations that had not been genetically analyzed before," says Bock. New insights enable improved protection measures for the giraffe According to estimates by the International Union for Conservation of Nature (IUCN), the world's giraffe population is about 100,000 individuals – showing a decreasing trend. In Botswana alone, the population has dwindled by more than half in recent years.
Effective protection measures that preserve the majority of the giraffe's subspecies require knowledge that allows their reliable identification, as well as detailed information on their distribution. The surprising results concerning the distribution of the two subspecies in Namibia and Botswana emphasize the importance of additional taxonomic research on all giraffe subspecies.
10.1186/s12862-014-0219-7
Biology
Innovative surveillance technique gives vital time needed to track a cereal killer
Guru V. Radhakrishnan et al. MARPLE, a point-of-care, strain-level disease diagnostics and surveillance tool for complex fungal pathogens, BMC Biology (2019). DOI: 10.1186/s12915-019-0684-y Journal information: BMC Biology
http://dx.doi.org/10.1186/s12915-019-0684-y
https://phys.org/news/2019-08-surveillance-technique-vital-track-cereal.html
Abstract Background Effective disease management depends on timely and accurate diagnosis to guide control measures. The capacity to distinguish between individuals in a pathogen population with specific properties such as fungicide resistance, toxin production and virulence profiles is often essential to inform disease management approaches. The genomics revolution has led to technologies that can rapidly produce high-resolution genotypic information to define individual variants of a pathogen species. However, their application to complex fungal pathogens has remained limited due to the frequent inability to culture these pathogens in the absence of their host and their large genome sizes. Results Here, we describe the development of Mobile And Real-time PLant disEase (MARPLE) diagnostics, a portable, genomics-based, point-of-care approach specifically tailored to identify individual strains of complex fungal plant pathogens. We used targeted sequencing to overcome limitations associated with the size of fungal genomes and their often obligately biotrophic nature. Focusing on the wheat yellow rust pathogen, Puccinia striiformis f.sp. tritici ( Pst ), we demonstrate that our approach can be used to rapidly define individual strains, assign strains to distinct genetic lineages that have been shown to correlate tightly with their virulence profiles and monitor genes of importance. Conclusions MARPLE diagnostics enables rapid identification of individual pathogen strains and has the potential to monitor those with specific properties such as fungicide resistance directly from field-collected infected plant tissue in situ. Generating results within 48 h of field sampling, this new strategy has far-reaching implications for tracking plant health threats. Background Rapid and accurate point-of-care (PoC) diagnostics facilitate early intervention during plant disease outbreaks and enable disease management decisions that limit the spread of plant health threats. PoC diagnostics involve portable equipment that can be used in-field to rapidly confirm disease outbreaks and provide actionable information [ 1 ]. At present, conventional plant disease diagnostics rely on visible inspections of disease symptoms followed by basic laboratory tests through culturing and pathogenicity assays [ 2 ]. Unfortunately, these conventional methods tend to be subjective, time-consuming, labour-intensive and reliant on specialised expertise and equipment, providing only limited phenotypic information [ 3 ]. These factors limit their utility in PoC diagnosis. Recent alternative approaches have focused on serological and nucleic acid assays [ 4 ]. Polyclonal and monoclonal antisera are frequently used to detect plant pathogens using techniques such as enzyme-linked immunosorbent assay (ELISA), immunostrip assays and immunoblotting [ 5 ]. In addition, following a flurry of PCR-based diagnostic tests in the 1980s, the advent of the loop-mediated isothermal amplification (LAMP) assay at the turn of the twenty-first century provided the first rapid nucleic acid amplification method to accurately diagnose pathogens in situ in real time [ 6 ]. Both serological and DNA-based methods typically require high initial financial investments and specialised expertise to develop new assays, are limited in sample capacity, frequently are not reliable at the asymptomatic stage, and provide limited information beyond the species level [ 1 ]. 
The capacity to distinguish between individuals in a pathogen population with specific properties such as fungicide resistance, toxin production and virulence profiles is often essential to inform disease management approaches. In the past two decades, the genomics revolution has led to technologies that can rapidly generate genome-scale genetic information to define individual variants of a pathogen species [ 4 ]. These emerging, data-driven, PoC diagnostic tools have the potential to rapidly track shifting pathogen populations in near real-time, providing copious genetic information at the strain level that can be used in early warning systems and disease forecasting. The value of portable genomic-based diagnostics and surveillance was first illustrated during emergent human health outbreaks. For instance, during the Ebola crisis in West Africa in 2015, genome sequencing of the virus was carried out in situ on the first portable genome sequencer, the Oxford Nanopore Technologies MinION sequencer [ 7 ] . The resulting real-time genomic information on evolutionary rates and epidemiological trends revealed frequent transmission across the Guinea border [ 7 ], which informed subsequent disease control measures. For plant diseases, a similar approach in the laboratory environment successfully identified Plum pox virus and ‘ Candidatus Liberibacter asiaticus’, which causes citrus greening in infected insect and plant tissues [ 8 ], exemplifying the potential for the development of portable genomic-based diagnostics for plant health threats. However, for higher-order fungal pathogens which constitute the largest and most widely dispersed group of plant pathogens [ 9 ], the utility of mobile genomic-based PoC diagnostics remains to be fully realised. The sheer size of fungal genomes, which can be tens or even hundreds of thousands of times larger than viral genomes, makes full-genome or whole-transcriptome sequencing on portable sequencing devices currently prohibitively expensive. In this study, we developed an approach for generating high-throughput sequencing data in situ from the complex obligately biotrophic fungal pathogen Puccinia striiformis f. sp. tritici ( Pst ). Pst is a basidiomycete and heterokaryotic fungus that causes wheat yellow rust disease, which is a constant and significant threat to wheat production worldwide [ 10 ]. We demonstrate herein that our approach can be used to rapidly define individual Pst strains, assign strains to distinct genetic lineages that have been shown to correlate tightly with their virulence profiles [ 11 ], and monitor mutations in genes of importance. As Pst is an obligate biotroph, the genetic material of the pathogen and plant have to be analysed together in field-collected infected samples. Furthermore, the pathogen’s genome is more than 10,000 times larger than that of, for instance, the Ebola virus. To address these complexities, we first utilised a comparative genomics approach to define genomic regions of high variability between pathogen strains that could then be amplified for sequencing directly from field-infected wheat samples on the mobile nanopore sequencer. This new approach thereby circumvents the need to carry out lengthy in-lab processes of purification and multiplication of isolates prior to high molecular weight DNA extraction that is a requirement for full genome sequencing. 
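Because the approach amplifies a fixed panel of genes, a natural field-side quality-control step is to check per-amplicon depth as reads come off the MinION (the Results below find that 20× depth is sufficient for accurate consensus calls). The sketch below assumes minimap2 and samtools are available on the PATH and uses hypothetical file names; it is an illustration, not the authors' published pipeline:

```python
# Per-amplicon depth check for a MinION run against the gene panel.
# Assumes minimap2 and samtools on the PATH; file names are hypothetical.
import subprocess
from collections import defaultdict

REF = "pst_gene_panel.fasta"   # hypothetical amplicon reference
READS = "minion_reads.fastq"   # hypothetical nanopore reads

subprocess.run(
    f"minimap2 -ax map-ont {REF} {READS} | "
    "samtools sort -o aln.bam - && samtools index aln.bam",
    shell=True, check=True,
)

# samtools depth emits: reference<TAB>position<TAB>depth
depth_out = subprocess.run(
    ["samtools", "depth", "-a", "aln.bam"],
    capture_output=True, text=True, check=True,
).stdout

totals, sites = defaultdict(int), defaultdict(int)
for line in depth_out.splitlines():
    gene, _pos, cov = line.split("\t")
    totals[gene] += int(cov)
    sites[gene] += 1

for gene in sorted(totals):
    mean = totals[gene] / sites[gene]
    flag = "" if mean >= 20 else "  <-- below 20x, consider re-sequencing"
    print(f"{gene}\tmean depth {mean:.1f}{flag}")
```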
This targeted sequencing approach also reduced the complexity and amount of data generated per sample, thereby accelerating the speed of processing and reducing the cost. Furthermore, we developed a mobile lab system to enable deployment of our diagnostic platform in resource-poor regions without the need for continuous electricity or access to additional laboratory equipment. This Mobile And Real-time PLant disEase (MARPLE) diagnostics system was designed with simplicity and mobility in mind to enable true PoC plant disease diagnostics. This new strategy has the potential to revolutionise plant disease diagnostics, changing how plant health threats are identified and tracked into the future. Results Capturing the global diversity of the Pst population To reduce the complexity of the genomic data generated from Pst -infected wheat samples, we aimed to define regions of the Pst genome showing high variability between pathogen strains that could be subsequently amplified for sequencing on the MinION platform from Oxford Nanopore Technologies. The first step was to capture the diversity of the Pst global population. To achieve this, we carried out transcriptome sequencing on 100 Pst -infected wheat samples collected between 2015 and 2017 from nine countries, including those in eastern and southern Africa, Europe, North America and Asia (Additional file 1 : Table S1). Total RNA was extracted from each sample and subjected to RNA sequencing (RNA-seq) analysis using the Illumina HiSeq platform and our previously described field pathogenomics strategy [ 11 ]. To maximise the geographical distribution of Pst isolates, we combined these 100 RNA-seq datasets with previously published genomic and transcriptomic datasets from a further 201 Pst strains spanning a total of 19 countries, including Chile, New Zealand, Pakistan and an array of European countries [ 11 , 12 ] (Additional file 1 : Table S1). Raw reads were filtered for quality, and data from each Pst sample were independently aligned to the Pst race PST-130 reference genome [ 13 ]. An average of 37.3% (± 18.2%, S.D.) of reads aligned to the reference genome for the combined RNA-seq datasets, and 82.7% (± 4.9%, S.D.) of reads aligned for the genomic datasets [ 11 ] (Additional file 1 : Tables S2 and S3). Overall, the data from this global collection of Pst isolates comprised 280 transcriptomic and 21 genomic datasets from Pst isolates spanning 24 countries that could then be used for subsequent population genetic analysis. To determine the genetic relationships between these 301 global Pst samples, we carried out phylogenetic analysis using the third codon position of 2034 PST-130 gene models (589,519 sites) using a maximum-likelihood model (Additional files 2 and 3 ). Pst isolates tended to cluster based on their geographical origin, with only four of the 14 divisions containing Pst isolates that spanned continental boundaries (Fig. 1 a). These four clades included (i) clade 2 with Pst isolates from China and the USA, (ii) clade 9 with Pst isolates from Europe and South Africa, (iii) clade 10 containing Pst isolates from Ethiopia and New Zealand and (iv) clade 14 containing isolates from Europe and New Zealand (Fig. 1 a). This represents relatively recent shared ancestry between the populations within these four clades, which could be indicative of long-distance transmission of Pst strains either between these regions or from a common independent source area.
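The third-codon-position matrix used for this phylogeny can be produced with a few lines of code, since third positions are simply every third column of an in-frame coding alignment. A minimal sketch assuming Biopython and a hypothetical input file; tree inference itself would then be handed to a maximum-likelihood tool such as RAxML or IQ-TREE:

```python
# Extract third codon positions from an in-frame concatenated CDS alignment.
# Input file name is hypothetical; assumes Biopython is installed.
from Bio import AlignIO
from Bio.Align import MultipleSeqAlignment
from Bio.Seq import Seq
from Bio.SeqRecord import SeqRecord

aln = AlignIO.read("pst_cds_concatenated.fasta", "fasta")  # in reading frame

# Positions 3, 6, 9, ... are indices 2, 5, 8, ... so slice with [2::3].
third_positions = MultipleSeqAlignment(
    SeqRecord(Seq(str(rec.seq)[2::3]), id=rec.id, description="")
    for rec in aln
)
print(third_positions.get_alignment_length())  # the paper reports 589,519 sites

AlignIO.write(third_positions, "pst_third_codon.fasta", "fasta")
```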
Fig. 1 The global Pst population is highly diverse and largely consists of geographically isolated groups of distinct homogenous individuals. a The global Pst population analysed herein consisted of 14 distinct groups of individuals. Phylogenetic analysis was performed on a total of 280 transcriptomic and 21 genomic datasets from Pst isolates spanning 24 countries, using a maximum-likelihood model and 100 bootstraps. Scale indicates the mean number of nucleotide substitutions per site. Bootstrap values are provided in Additional file 3 . b Multivariate discriminant analysis of principal components (DAPC) could further define subdivisions within the global Pst population. A list of 135,139 biallelic synonymous single nucleotide polymorphisms (SNPs) was used for DAPC analysis. Assessment of the Bayesian Information Criterion (BIC) supported initial division of the Pst isolates into five genetically related groups (left; C1–5). Due to the high level of diversity among the global Pst population, this initial analysis could not resolve Pst isolates with lower levels of within-group variation. Therefore, a second DAPC analysis was carried out on each of the five initial population groups (right). Bar charts represent DAPC analysis, with each bar representing estimated membership fractions for each individual. Roman numerals represent the successive K values for each DAPC analysis. Numbers in circles are reflective of those assigned to distinct groups in the phylogenetic analysis. We carried out multivariate discriminant analysis of principal components (DAPC) to further define subdivisions within the global Pst population. First, we generated a list of 135,372 synonymous single nucleotide polymorphisms (SNPs), of which 135,139 were biallelic in at least one Pst sample and were therefore used for DAPC analysis. Assessment of the Bayesian Information Criterion (BIC) supported division of the Pst isolates into five groups of genetically related Pst isolates (Additional file 4 ). However, due to the high level of diversity within the global Pst population, this initial DAPC analysis was able to separate only Pst populations with high levels of genetic differentiation and was unable to resolve lower levels of within-group variation [ 14 ] (Fig. 1 b). For instance, group 1 (C1) contained Pst isolates from Pakistan, Ethiopia, Europe and New Zealand, and group 2 (C2) contained Pst isolates from China and two European races that have been shown to be genetically distinct in previous population studies [ 12 , 15 ]. Therefore, we performed further DAPC analysis on each of the five population groups independently and, following analysis of the BIC, Pst isolates were separated into clear subsets of homogenous groups of individuals that better reflected the phylogenetic clustering (Fig. 1 b; Additional file 4 ). Overall, this analysis indicated that the global Pst population is highly diverse and, with only a few exceptions, consists of geographically isolated groups of distinct homogenous individuals. A subset of genes can be used to capture the global diversity of Pst isolates To identify specific Pst genes contributing to the separation of isolates into distinct groups in the population genetic analysis, we used comparative analysis to find the most variable genes among the 301 global Pst isolates that were conserved across all Pst isolates analysed.
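The next paragraphs detail how genes were ranked by variability. As a minimal sketch of the SNPs-per-kilobase normalisation and cut-off filtering described there (input dictionaries are hypothetical stand-ins for per-gene SNP tallies derived from the 301-isolate alignments):

```python
# Rank genes by SNPs per kilobase of coding sequence, then apply the
# paper's cut-offs. Gene IDs, counts and lengths below are invented
# placeholders, not values from the study.
snp_counts = {"PST130_001": 14, "PST130_002": 2, "PST130_003": 0}    # SNPs per gene
cds_lengths = {"PST130_001": 1350, "PST130_002": 900, "PST130_003": 1200}  # nt

snps_per_kb = {
    gene: snp_counts[gene] / (cds_lengths[gene] / 1000.0)
    for gene in snp_counts
}

# Polymorphic genes (SNPs/kb >= 0.001), ranked most variable first:
ranked = sorted(
    (g for g, v in snps_per_kb.items() if v >= 0.001),
    key=snps_per_kb.get, reverse=True,
)
# Progressively stricter cut-offs shrink the panel; in the paper,
# 0.051 SNPs/kb retained roughly the 100 most polymorphic genes.
panel = [g for g in ranked if snps_per_kb[g] >= 0.051]
print(ranked, panel)
```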
First, we calculated the number of SNPs per kilobase for each gene from alignments of sequences representing the 301 Pst isolates against the PST-130 reference genome [ 13 ]. SNPs per kilobase values were calculated by normalising the total number of SNPs found in the coding sequence of each gene across the 301 Pst isolates relative to the length of the coding sequence for each gene. A total of 1690 genes were identified as polymorphic (SNPs/kb ≥ 0.001) between Pst isolates and subsequently utilised for phylogenetic analysis with a maximum-likelihood model. Importantly, the sequences from these 1690 polymorphic genes were sufficient to reconstruct the topology of the global Pst phylogeny (Additional file 5 ). To determine the minimum number of gene sequences required to accurately reconstruct the global phylogeny, we ordered the 1690 genes based on the number of polymorphic sites across the 301 Pst isolates (Fig. 2 a). We then selected 1006, 748, 500, 402, 301, 204, 151 and 100 of the most polymorphic genes using progressively increasing cut-off values for SNPs per kilobase (0.006, 0.0105, 0.018, 0.023, 0.0285, 0.036, 0.042 and 0.051, respectively) and carried out phylogenetic analysis as described above with each of these subsets (Additional file 5 ). We noted that a single Pst isolate from clade 9 was mis-assigned to clade 4 in the phylogenies reconstructed from fewer than 500 genes (Additional file 5 ). This inconsistency was likely due to poor gene coverage for this Pst isolate when the data were aligned to the PST-130 reference genome; for instance, 96.5% of bases had less than 20× coverage when using 402 Pst genes to reconstruct the phylogeny. Therefore, this Pst isolate (14.0115) was excluded from the general evaluation. Overall, we concluded that whilst minor changes in clade ordering were observed when using sequence data from less than 500 genes, sequence data from as few as 100 genes were sufficient to generate a similar phylogeny topology (Additional file 5 ) and assign Pst isolates to the 14 previously defined groups. Fig. 2 The sequences of 242 highly polymorphic Pst genes are sufficient to reconstruct the topology of the global phylogeny generated from full transcriptome and genome sequencing . a Ordered distribution of average SNP content per gene across the 301 Pst global isolates. To determine the minimum number of gene sequences required to accurately reconstruct the global phylogeny, the 1690 genes identified as polymorphic (SNPs/kb ≥ 0.001) between Pst isolates were ordered according to number of polymorphic sites across the 301 global Pst isolates. b The 242 polymorphic genes selected were not biased in their selection by a high degree of divergence from the reference race PST-130 for any particular group of individuals. Box plots represent the total number of SNPs across these 242 genes for Pst isolates belonging to each of the five major genetic groups identified through DAPC analysis. Bar represents median value, box signifies the upper (Q3) and lower (Q1) quartiles, data falling outside the Q1–Q3 range are plotted as outliers. c The 242 genes selected could be used successfully to reconstruct the global phylogeny and assign Pst isolates to the 14 previously defined groups (numbers in circles). Phylogenetic analysis was performed using sequence data for the 242 genes from the 301 global Pst isolates using a maximum-likelihood model and 100 bootstraps. 
Bootstrap values are provided in Additional file 7 Full size image The next step was to use the minimal number of polymorphic genes required to represent Pst population diversity to define a subset of genes for PCR amplification in preparation for sequencing on the MinION platform. We reasoned that sequencing a small subset of highly variable genes would reduce the volume of data generated and associated cost per sample, whilst maintaining our ability to define individual strains. We selected the 500 most polymorphic genes between Pst isolates and within this subset randomly selected 250 of these genes; oligonucleotides were successfully designed for 242 genes (Additional file 1 : Table S4). Given that a minimum of 100 genes was sufficient to accurately assign Pst isolates, the additional 142 genes were included to ensure that Pst isolates could be correctly assigned even if a large proportion (up to 58%) of the genes failed to amplify under field conditions. To validate that the 242 polymorphic genes were not biased in their selection by a high degree of divergence from the reference isolate PST-130 for any particular group of individuals, we assessed the total number of SNPs across these 242 genes for Pst isolates belonging to each of the five major genetic groups identified through DAPC analysis (Fig. 1 b). The SNPs were distributed across all the major genetic groups, with the least number of SNPs identified in Pst isolates of genetic group 2 and the greatest number identified in Pst isolates from genetic group 4 (Fig. 2 b). The low differentiation of Pst isolates in genetic group 2 from the PST-130 reference isolate likely reflects a close genetic relationship. Finally, we confirmed that the 242 genes selected could be used successfully to reconstruct the global phylogeny and assign Pst isolates to the 14 previously defined groups (Fig. 2 c; Additional files 6 and 7 ). Overall, this analysis illustrated that using sequence data from a minimal set of 242 polymorphic Pst genes was sufficient to accurately genotype Pst isolates and re-construct a comparable phylogeny to that achieved from full-genome or transcriptome sequencing. Genes selected for amplicon sequencing are distributed across the Pst genome and the majority encode enzymes To characterise the 242 Pst genes selected for sequencing on the MinION platform, we carried out positional and functional annotation. To assess the distribution of the 242 polymorphic genes across the Pst genome, we identified their genomic locations in the highly contiguous Pst -104 reference genome [ 16 ]. For 241 of the 242 genes, near-identical (> 94% pairwise identity) hits in the genome were obtained when gene sequences were mapped to the genome using minimap2 [ 17 ]. These 241 genes were distributed across a total of 135 genome scaffolds, with the majority of genes (60%) located on scaffolds that contained only one of the 241 genes (Additional file 1 : Table S5). Only 10 scaffolds contained more than five of these genes, suggesting that the majority of the 241 genes were scattered across the genome and not grouped in gene clusters (Fig. 3 a). Using gene ontology (GO) term analysis, we found that the majority (64%) of the 242 genes encoded proteins with enzymatic functions (GO: 0003824—catalytic activity; GO: 0005488—binding) and were involved in different metabolic and cellular processes (Fig. 3 b; Additional file 1 : Table S5). 
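The positional annotation described above can be sketched with minimap2's Python binding, mappy, using the same asm10 preset the authors ran on the command line. The sketch keeps primary hits above 94% identity and then asks how many scaffolds carry exactly one panel gene. File names are placeholders, and the identity calculation (matching bases over alignment block length) is a simplification of a full pairwise-identity computation.

# Sketch of the positional-annotation step: map the 242 panel gene sequences
# to the Pst-104 assembly and keep near-identical (>94% identity) primary hits.
# The paper used the minimap2 CLI (-ax asm10); this uses minimap2's Python
# binding, mappy. File names are placeholders.
import mappy as mp
from collections import Counter

aligner = mp.Aligner("Pst104_genome.fasta", preset="asm10")  # builds the index
if not aligner:
    raise RuntimeError("failed to load/build reference index")

scaffold_hits = Counter()
for name, seq, _ in mp.fastx_read("panel_242_genes.fasta"):
    for hit in aligner.map(seq):
        identity = hit.mlen / hit.blen  # matched bases / alignment block length
        if hit.is_primary and identity > 0.94:
            scaffold_hits[hit.ctg] += 1
            break  # record one location per gene

# How many scaffolds carry exactly one panel gene (60% in the paper)?
singles = sum(1 for n in scaffold_hits.values() if n == 1)
print(len(scaffold_hits), "scaffolds;", singles, "with a single panel gene")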
Overall, this analysis indicates that 241 of the 242 Pst genes selected are well distributed across the Pst genome and are enriched for functions in fungal metabolism. Fig. 3 The 242 Pst genes selected are evenly distributed across the Pst genome and a large proportion encode proteins with enzymatic functions. a For 241 of the 242 genes, near-identical (> 94% pairwise identity) hits were identified in the more contiguous Pst -104 genome and 60% were located on scaffolds that contained only one of the 241 genes. Bar chart illustrates the number of genes identified on the given numbers of scaffolds. b Functional annotation of the 242 Pst genes selected for MinION sequencing revealed that they largely encode proteins with enzymatic functions. Bar charts illustrate GO term analysis, with gene functions associated with ‘Biological process’, ‘Metabolic function’ and ‘Cellular component’ highlighted Full size image Comparative analysis of the Illumina and Oxford Nanopore sequencing platforms To assess the suitability of the mobile MinION sequencer for population diversity analysis using the 242 Pst genes selected, we carried out a comparative analysis with data generated on the Illumina MiSeq platform, which is frequently used for this purpose [ 18 ]. Four Pst -infected wheat samples were collected in 2017 in Ethiopia (Additional file 1 : Table S1). Following genomic DNA extraction, each of the aforementioned 242 Pst genes was amplified from each sample. Each gene was then used for amplicon sequencing on both the MinION and MiSeq platforms. A total of 6.9, 3.6, 6.2 and 6.4 million paired-end Illumina reads and 109, 102, 128 and 113 thousand MinION reads were generated for each of the four Pst -infected wheat samples (17.0504, 17.0505, 17.0506 and 17.0507 respectively). Following base calling and quality filtering, reads were aligned to the gene sequences for the 242 genes from the PST-130 reference [ 13 ] (Additional file 1 : Table S6 and S7). For each Pst -infected sample, consensus sequences were generated for each of the 242 genes, using data produced on the Illumina MiSeq platform. Each consensus gene set separately incorporated the SNPs identified within the gene space by mapping the reads from each of the four Pst isolates against the gene sequences of the 242 genes. These four sets of sequences formed an accurate baseline for comparison with sequence data generated on the MinION sequencer. To evaluate the minimum depth of coverage required to obtain similar levels of accuracy on the MinION sequencer, we performed a comparative analysis between the two platforms. Sequence data generated on the MinION platform were used to create consensus sequences for each of the aforementioned 242 Pst genes using varying depths of coverage for each of the four Pst -infected wheat samples. The percentage identity of these consensus sequences was then determined through comparative analysis with the MiSeq baseline consensus sequences. A minimum depth of 20× coverage on the MinION sequencer was sufficient to achieve 98.74% sequence identity between the two datasets (Fig. 4 a). Fig. 4 A minimum of 20x depth of coverage on the MinION sequencer is sufficient to generate comparable gene sequence data to the Illumina MiSeq platform . a At 20x coverage on the MinION sequencer, comparisons with data generated on the Illumina MiSeq platform showed 98.74% sequence identity. b No notable selective bias occurred during library preparation and sequencing of individual genes using either the MiSeq or MinION platforms. 
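A minimal sketch of the platform comparison just described: rebuild the MinION consensus at several depth cutoffs and score each against the 20× MiSeq baseline, skipping positions that are ambiguous in the baseline, as was done in the paper. The consensus strings here are toy placeholders, not real gene sequences.

# Sketch of the platform comparison: rebuild MinION consensus sequences at
# several depth cutoffs and measure identity against the 20x MiSeq baseline.
# Positions ambiguous (< 20x) in the MiSeq baseline were excluded in the
# paper, mirrored here by skipping 'N'.
def percent_identity(miseq, minion):
    pairs = [(a, b) for a, b in zip(miseq, minion) if a != "N"]
    return 100 * sum(a == b for a, b in pairs) / len(pairs)

miseq_baseline = "ACGTACGTNNACGTACGT"          # 20x MiSeq consensus (toy)
minion_by_cutoff = {                            # MinION consensus per depth cutoff
    5:  "ACTTACGTAAACGAACGT",
    10: "ACGTACGTAAACGAACGT",
    20: "ACGTACGTAAACGTACGT",
}

for cutoff, cons in sorted(minion_by_cutoff.items()):
    print(f"{cutoff:>2}x MinION consensus: "
          f"{percent_identity(miseq_baseline, cons):.2f}% identity")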
Box plots show the percentage coverage for each of the 242 Pst genes sequenced for the four Pst isolates tested on the MinION and MiSeq platforms. c The number of SNPs per gene detected in each of the four MinION datasets was comparable to that from the MiSeq platform. Heatmaps represent the number of SNPs identified per gene ( y -axis) for the four Pst isolates sequenced on the MinION and MiSeq platforms. Full details regarding the number of SNPs identified per gene are provided in Additional file 1 : Table S9. In box plots a and b , bars represent median value, boxes signify the upper (Q3) and lower (Q1) quartiles, data falling outside the Q1–Q3 range are plotted as outliers Full size image We then investigated whether there was any notable selective bias during library preparation and sequencing of individual genes using either the MiSeq or MinION platform. We determined the percentage coverage for each of the 242 genes sequenced for the four Pst isolates on the two sequencing platforms. The average coverage per gene for the MiSeq (0.41 ± 0.02, S.E.) and MinION (0.41 ± 0.03, S.E.) platforms was comparable (Fig. 4 b). Using the predefined 20× coverage level, we evaluated the required run time to achieve this level of coverage across all 242 selected Pst genes on the MinION platform. Assuming equal coverage of all genes, we determined that to reach 20× coverage for all 242 genes in each of the four samples (4840 reads) would take less than 30 min from starting the MinION sequencing run [18.75 (17.0504), 21.77 (17.0505), 17.65 (17.0506) and 19.20 (17.0507) minutes] (Additional file 1 : Table S8). Finally, using the minimum level of 20× depth of coverage for data generated on the MinION sequencer, we defined the number of SNPs per gene in each of the four MinION datasets. This was then compared with SNP analysis using sequence data generated on the MiSeq platform. SNP profiles for each of the samples sequenced on the MinION and the MiSeq platforms were largely comparable, with the general trend being that more SNPs (compared with the reference) were identified when sequencing was carried out on the MinION platform (Fig. 4 c; Additional file 1 : Table S9). In particular, we observed that several positions that were designated as being homokaryotic from data generated on the MiSeq platform appeared as heterokaryotic when using the MinION sequencer. The average ratio of heterokaryotic to homokaryotic nucleotide positions using the MiSeq platform was 0.01 (± 0.0002, S.D.), which was 20% higher (0.012 ± 0.0004, S.D.) when the MinION sequencer was used (Additional file 1 : Table S10). However, as the overall average sequence identity between samples sequenced using the MiSeq and MinION platforms was > 98%, we concluded that when a minimum of 20× depth of coverage is achieved, the data generated on the MinION sequencer are largely comparable in accuracy to those from the MiSeq platform and therefore should be suitable for population genetic analysis. Pst isolates from Ethiopia in the 2017/2018 wheat crop season are genetically closely related To further assess the ability of the MinION-based sequencing platform to accurately define Pst genotypes in field-collected infected samples, we expanded our analysis to a larger sample of 51 Pst -infected wheat samples collected in Ethiopia predominantly during the 2017/2018 growing season (Additional file 1 : Table S1). 
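The run-time estimate above reduces to simple arithmetic, sketched below under the paper's assumption of equal coverage across genes: 242 amplicons at 20× requires 4840 reads per sample, and dividing by a per-sample read rate gives the roughly 18-22 min figures reported. The reads-per-minute values here are back-calculated from the reported times and are illustrative only.

# Back-of-envelope sketch of the time-to-coverage estimate above. One read is
# assumed to span one amplicon, so depth accumulates one read per gene at a time.
GENES, TARGET_DEPTH = 242, 20
reads_needed = GENES * TARGET_DEPTH  # 4840 reads per sample

# Reads per minute, back-calculated from the reported run times (illustrative).
observed_rate = {"17.0504": 258, "17.0505": 222, "17.0506": 274, "17.0507": 252}

for sample, rate in observed_rate.items():
    print(f"{sample}: ~{reads_needed / rate:.1f} min to reach 20x on all genes")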
DNA was extracted from each sample independently, and each of the aforementioned 242 Pst genes was amplified and prepared for amplicon sequencing on the MinION platform. In parallel, RNA was extracted and RNA-seq analysis was undertaken using the Illumina HiSeq platform and our field pathogenomics strategy for comparison [ 11 ]. An average of 114,519.37 (± 91,448.42, S.D.) reads per library was generated using the MinION sequencer and an average of 23,434,565.49 (± 2,468,438.63, S.D.) reads per library was generated on the HiSeq platform (Additional file 1 : Tables S7 and S11). Following base calling and data filtering, reads generated on the HiSeq or MinION platforms were aligned independently for the 51 Pst isolates to sequences of the 242 Pst genes selected. We then carried out the phylogenetic analysis as described above, using data from either the MinION or HiSeq platforms independently (Fig. 5 ; Additional files 8 , 9 , 10 , 11 and 12 ). To compare the Ethiopian Pst isolates with the global Pst population groups, we also included sequence data for the 242 genes from the 301 global Pst isolates in the phylogenetic analysis. The positioning of the 51 Ethiopian samples in the phylogenies was similar between the two datasets, with the 51 Pst field isolates grouping in two closely related clades in both cases (Fig. 5 and Additional file 8 ). This analysis further supports the conclusion that when a sufficient level of coverage is used, data generated on the MinION platform can be used to accurately define Pst genotypes. Fig. 5 Gene sequencing on the MinION platform can be used to accurately genotype Pst isolates and define specific race groups . All Ethiopian Pst isolates collected from 2016 onwards cluster in a single monophyletic group (orange diamonds). The 13 representatives of previously defined race groups (numbered squares) tended to cluster in the phylogeny with Pst isolates of a similar genetic background. Phylogenetic analysis was carried out using a maximum-likelihood model and 100 bootstraps. Scale indicates the mean number of nucleotide substitutions per site. Bootstrap values are provided in Additional file 10 Full size image Assigning Pst isolates to known genetic groups defined by SSR marker analysis To compare the phylogenetic clades with previously defined Pst genetic groups based on simple sequence repeat (SSR) marker analysis and pathogenicity testing [ 15 , 19 , 20 , 21 ], we selected 13 additional Pst isolates of diverse origin representing these groups (Additional file 1 : Table S12). DNA was extracted from each sample independently, and the 242 Pst genes were amplified and prepared for sequencing on the MinION platform. Following base calling and quality filtering, reads were aligned to sequences of the 242 PST-130 genes. The resulting data were then combined with those from the 301 global Pst isolates and the 51 Ethiopian Pst isolates collected predominantly during the 2017/2018 field season, and phylogenetic analysis was performed (Fig. 5 ; Additional files 9 and 10 ). The 13 Pst isolates representing previously defined Pst groups and races clustered in the phylogeny as follows. US312/14 (a.k.a. AR13–06), representing a new group of isolates in North America carrying virulence to the yellow rust (Yr) resistance gene Yr17 , grouped in a clade with other recent Pst isolates that were collected in the USA and Canada in 2015 and 2016.
AZ160/16 and AZ165/16 belonging to the PstS2 , v27 group, which has been prevalent in eastern and northern Africa and western Asia, grouped with Pst isolates from Ethiopia. UZ180/13 and UZ14/10, both representing the PstS5 , v17 group prevalent in central Asia, were basal to a clade of Ethiopian Pst isolates. UZ189/16 ( PstS9 , v17), frequently found in central Asia, formed a distinct branch in the phylogeny. ET08/10, representative of the PstS6 group and carrying virulence to Yr27 , formed a long unique branch. SE225/15, which belongs to the PstS4 race (a.k.a. ‘Triticale2006’) and is frequently found on triticale in Europe, formed a distinct branch close to Pst isolates from Ethiopia. KE86058, a representative of the PstS1 aggressive strain recovered from the ‘Stubbs Collection’, grouped with isolates from Ethiopia. DK14/16 and SE427/17 representing the ‘Warrior’ PstS7 group, DK52/16 representing the ‘Kranich’ PstS8 group and DK15/16 representing the ‘Warrior(-)’ PstS10 group, shown to be analogous to ‘genetic group 1’, ‘genetic group 5-1’ and ‘genetic group 4’, respectively [ 12 ], clustered accordingly in the phylogeny (Fig. 5 ). This result illustrates that data generated on the MinION platform for the 242 polymorphic Pst genes can be used to accurately distinguish the genetic groups previously defined from SSR marker-based classification, providing additional support to the methodology herein. Furthermore, the inclusion of these reference Pst isolates in future analysis will enable isolates of similar genetic background to be rapidly identified. In-field MinION-based diagnostics can define Pst isolates in Ethiopia in real-time As resource-poor locations frequently bear the brunt of plant disease epidemics, we developed a simple Mobile And Real-time PLant disEase (MARPLE) diagnostics pipeline so that the 242 polymorphic Pst genes could be amplified and sequenced on the MinION sequencer for phylogenetic analysis in situ (Fig. 6 ; Additional file 13 ). To test our MARPLE diagnostics pipeline, we collected four Pst -infected wheat samples in 2018 and carried out analysis in situ in Ethiopia (Additional file 1 : Table S1). As Ethiopia may act as a gateway for new Pst isolates entering Africa from sexually recombining populations in Asia, this pipeline would enable rapid detection of any new Pst strains entering East Africa. Fig. 6 Illustration of the MARPLE pipeline . A simple Mobile And Real-time PLant disEase (MARPLE) diagnostics pipeline was developed so that the 242 polymorphic Pst genes could be amplified and sequenced on the MinION platform for population genetic analysis in situ . This pipeline consists of three stages (DNA preparation, Sequencing and Data analysis) and can be executed independently of stable electricity or internet connectivity in less than 2 days from sample collection to completion of the phylogenetic analysis Full size image First, DNA was extracted from each Pst -infected wheat sample using a simplified method wherein Pst -infected plant tissue was homogenised, cells lysed and DNA isolated using magnetic-bead-based purification (Fig. 6 ). Next, a panel of 242 oligonucleotide pairs was used in PCR amplification to enrich for the previously defined set of Pst gene sequences. This enrichment enabled direct analysis of each of the field-collected Pst -infected wheat tissue samples.
The 242 oligonucleotide pairs were pooled into four groups, where concentrations were optimised individually to amplify all genes in each individual group (Additional file 1 : Table S4). To ensure ease of portability and avoid the need for continuous electricity, PCR amplification was performed using thermostable Taq polymerase and a battery-powered mobile miniPCR machine (Fig. 6 ; Additional file 1 : Table S13). Finally, a simple analysis pipeline independent of internet connectivity was utilised on a laptop computer for phylogenetic analysis of Pst isolates (Fig. 6 ). Overall, the entire pipeline from sample collection to completion of the phylogenetic analysis was achieved within 2 days, providing rapid real-time information on the population dynamics of Pst in Ethiopia. The resulting phylogenetic analysis of the four Pst -infected wheat samples illustrated that the late 2018 Pst population in Ethiopia was similar to that defined in the previous 2017/2018 growing season (Fig. 5 ). Discussion Utility of mobile gene sequencing for plant pathogen surveillance Effective disease management depends on timely and accurate diagnosis that can be used to guide appropriate disease control decisions. For many plant pathogens, including Pst , visual inspection at the symptomatic stage provides clearly recognisable indications of the causative agent. However, the ability to go beyond visual species-level diagnostics and rapidly define newly emergent strains or identify those with specific properties such as fungicide resistance, toxin production or specific virulence profiles (races) helps tailor proportionate and effective disease control measures. For most fungal plant pathogens, diagnostic methods providing strain-level resolution remain highly dependent on time-consuming and costly controlled bioassays carried out by specialised laboratories. However, the genomic revolution has provided opportunities to explore rapid strain-level diagnostics. The advent of mobile sequencing platforms allows these systems to become geographically flexible and independent of highly specialised expertise and costly infrastructure investment. Here, we used the mobile MinION sequencing platform to develop a genomic-based method called MARPLE diagnostics for near real-time point-of-care (PoC) plant disease diagnostics for fungal pathogens, which have proved less tractable for such approaches. The size of fungal genomes makes full-genome or transcriptome sequencing on portable devices prohibitively expensive. Furthermore, for the wheat rust pathogens at least, the lengthy processes associated with purification and multiplication of isolates for high molecular weight DNA extraction have prevented whole genome sequencing being used for PoC strain-level diagnostics. By focusing on sequencing 242 highly variable genes that are informative for distinguishing individual Pst lineages, we were able to reduce the volume of data required whilst maintaining the ability to define individual strains. Analysis of this highly polymorphic gene set revealed it to be rich in genes with functions in fungal metabolism, with a large number of these genes encoding enzymes. The approach we have taken herein is extremely flexible, and the existing gene panel can be easily supplemented with additional genes as required. For instance, as avirulence proteins that trigger host immune responses are identified in Pst , the corresponding genes can be incorporated into the method and monitored for mutations that could be linked to a gain of virulence.
Furthermore, including genes that encode proteins identified as conserved fungicide targets across fungal pathogens would be extremely valuable. This would enable real-time monitoring of known mutations that have been linked to decreases in sensitivity in other pathosystems. For the wheat rust pathogens, the two main classes of fungicides at risk of resistance developing are the triazole demethylation inhibitors that target the cyp51 gene and the succinate dehydrogenase (SDH) inhibitors that target genes encoding the four subunits of the SDH complex [ 22 ]. Incorporation of cyp51 and the four SDH complex genes in the gene panel for Pst is currently underway and will provide real-time monitoring that can rapidly detect any novel mutations as they emerge, ensuring chemical control strategies are modified accordingly. The incorporation of high-resolution genomic data into clinical diagnostics and surveillance for human health has demonstrated the utility of such approaches in rapidly identifying drug-resistance mutations, accurately typing strains and characterising virulence factors [ 23 ]. The integration of such rapid genomic-based diagnostic data enables detection and appropriate action to be taken in real-time to potentially circumvent pathogen spread. Ethiopia as a test case for real-time, gene-sequencing-based Pst diagnostics and surveillance Ethiopia is the largest wheat producer in sub-Saharan Africa and is currently facing a major threat from wheat rust diseases, including yellow rust caused by Pst . As a potential gateway for new Pst strains entering Africa, it is the highest priority country in the region for rapid diagnostics [ 24 ]. In recent years, at least two novel virulent rust races have migrated into Ethiopia from other regions on prevailing winds [ 25 ]. For none of these recent incursions was it possible to obtain early, in-season detection and diagnosis of the new virulent races. Identification was possible only after disease establishment and spread had already occurred. The reliance on specialised laboratories outside of Africa for diagnosing individual Pst strains slows disease management decisions, a situation exacerbated by the lengthy nature of the assays, which can take many months to complete. Currently, no developing country has the capacity to undertake real-time pathogen diagnostics on important crop diseases such as wheat yellow rust. Yet, developing countries bear the brunt of the epidemics. Therefore, we focused the deployment of our nanopore-based Pst genotyping system in Ethiopia. As infrastructure and logistics in developing countries can often limit the deployment of advanced diagnostic tools, we developed a mobile lab system contained in a single hard case to facilitate the movement of our MARPLE diagnostic platform between locations. Although still dependent on specialist expertise in the design phase, the resulting system itself is simple, making it highly suitable for resource-poor regions. For instance, an in-country trial illustrated that the pipeline can be used directly in Ethiopia in any lab irrespective of existing infrastructure and without the need for continuous electricity or access to additional laboratory equipment [ 26 ] (Additional file 13 ; Additional file 1 : Table S13). Using this platform, we determined that the Ethiopian Pst population structure has remained stable since 2016, with all isolates analysed being genetically closely related. 
As genomic-based PoC diagnostics enters the mainstream, such real-time genotyping techniques will enable rapid detection of new Pst strains entering East Africa. This high-resolution genetic data can then help inform deployment of Yr disease resistance genes to match the most prevalent races present in the region. Furthermore, such data can also be incorporated in near real-time into spatio-temporal population models for Pst , linking epidemiological modelling and genomic data to elucidate likely transmission events and enhance the predictive power of disease forecasting [ 27 ]. The future of genomic-based plant pathogen diagnostics and surveillance The utility of genomic-based approaches for real-time disease diagnostics and surveillance has been illustrated time and again during human health outbreaks. However, transferring these approaches to track fungal threats to plant health can be challenging, particularly considering their frequent obligately biotrophic nature and large genome sizes. The approach we developed herein provides a means to overcome these limitations and generate comprehensive genotypic data for pathogen strains within days of collecting material from the field, making it highly suited to disease emergencies. The mobility of our approach also obviates the movement of live samples and transfers ownership back to sample collectors in-country. In addition, such molecular-based approaches enhance our testing capacity and provide the means for rapid pre-selection of the most notable and representative isolates for complementary virulence profiling, which remains an essential but costly and time-consuming process. One future challenge when designing similar approaches for other pathosystems will be the need for existing genomic data to define polymorphic genes for amplification. However, draft genome assemblies are available for many important fungal plant pathogens and the cost of re-sequencing diverse isolates is ever decreasing. By focusing on generating data from a small subset of genes, our approach is also relatively inexpensive and generates small, unified datasets that can then be readily explored using analytic and visualisation tools created for smaller bacterial and viral genomic datasets such as Nextstrain [ 28 ]. These tools have proved extremely informative in tracking viral pathogen evolution and spread for global human health threats [ 29 ]. Using our approach, data for plant pathogens could be incorporated immediately into such a tool to understand how disease outbreaks and novel variants spread. Conclusions In this study, we developed a rapid PoC method called MARPLE diagnostics for genotyping individual Pst isolates directly from field-collected infected plant tissue in situ. Our targeted sequencing approach unlocks new opportunities for mobile, genomic-based, strain-level diagnostics to be applied to complex fungal pathogens. The ability to rapidly identify individual strains with specific properties such as fungicide resistance will be invaluable in guiding disease control measures and represents a new paradigm for approaches to tracking plant disease. Methods RNA extraction and RNA-seq of global Pst -infected plant samples A total of 100 Pst -infected wheat samples were collected from 2015 to 2017 from nine countries and stored in the nucleic acid stabilisation solution RNAlater® (Thermo Fisher Scientific, Paisley, UK). 
RNA was extracted using a Qiagen RNeasy Mini Kit following the manufacturer’s instructions (Qiagen, Manchester, UK), with the quality and quantity of RNA assessed using an Agilent 2100 Bioanalyzer (Agilent Technologies, CA, USA). cDNA libraries were prepared using an Illumina TruSeq RNA Sample Preparation Kit (Illumina, CA, USA) and sequenced on the Illumina HiSeq 2500 platform at GENEWIZ (NJ, USA). Adaptor and barcode trimming and quality filtering were performed using the FASTX-Toolkit (version 0.0.13.2). Paired-end reads (101 bp) were aligned to the PST-130 reference genome [ 13 ], and single nucleotide polymorphism (SNP) calling was performed as described previously [ 11 ]. Phylogenetic analysis All phylogenetic analyses were carried out using a maximum-likelihood approach with RAxML 8.0.20 using the GTRGAMMA model, with 100 replicates using the rapid bootstrap algorithm [ 30 ]. For analysis of the global Pst population, nucleotide residues were filtered using a minimum of 20× depth of coverage for sites that differed from the PST-130 reference genome [ 13 ] and 2x coverage for sites that were identical. These filtered positions were then used to independently generate consensus gene sets that incorporated separately the SNPs identified within the gene space for each Pst isolate as described previously [ 31 ]. The third codon position of these genes was used for phylogenetic analysis. For samples sequenced on the MinION platform, the 242 polymorphic Pst genes were utilised for phylogenetic analysis. All phylogenetic trees were visualised in Dendroscope version 3.5.9 [ 32 ] or MEGA version 7 [ 33 ]. Population structure analysis of global Pst isolates The genetic subdivision of the 301 global Pst isolates was assessed using nonparametric multivariate clustering without any predetermined genetic model. This method was selected to avoid bias associated with providing location information of Pst isolates from different lineages to the model. First, biallelic SNP sites introducing a synonymous change in at least one isolate were selected and extracted for all 301 Pst isolates. These data were used for multivariate analysis using DAPC implemented in the Adegenet package version 2.1.1 in the R environment [ 14 ]. The number of population clusters (Kmax) was identified using the Bayesian Information Criterion (BIC). After initially selecting five genetic groups, DAPC was repeated for isolates within each of these population clusters to define subdivisions within each group. Selection of highly polymorphic Pst genes To select a polymorphic Pst gene set that could be used to accurately reconstruct the Pst global phylogeny, alignments of sequences from the 301 Pst global isolates against the PST-130 reference genome [ 13 ] were filtered for sites represented in at least 60% of the isolates. Next, Pst isolates which had at least 60% of the sites represented at 20× coverage were selected. For each position in the alignment, the degree of polymorphism was determined by calculating the number of unique bases found in a given position in each of the 301 Pst global isolates. This number was then divided by the length of the gene to calculate the number of SNPs per kilobase for each gene (SNPs/kb). 
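The site-filtering and third-codon-position logic described above can be expressed compactly; the following sketch applies the 20× threshold to variant sites and the 2× threshold to reference-matching sites, then extracts every third codon position for the phylogenetic alignment. The sequences and depths are toy inputs, not study data.

# Sketch of the site filtering used for the global phylogeny: accept a
# position if it differs from the reference at >=20x depth (or matches at
# >=2x), mask it as 'N' otherwise, then keep only third codon positions.
ref   = "ATGGCTAAAGCTGCTTGA"          # in-frame reference CDS (toy)
calls = "ATGGCTAAAGCTGCGTGA"          # consensus calls for one isolate (toy)
depth = [30] * 14 + [25, 22, 30, 30]  # per-position read depth (toy)

filtered = []
for r, c, d in zip(ref, calls, depth):
    if c != r:
        filtered.append(c if d >= 20 else "N")  # SNP sites need 20x
    else:
        filtered.append(c if d >= 2 else "N")   # identical sites need 2x

third_codon = "".join(filtered[i] for i in range(2, len(filtered), 3))
print(third_codon)  # concatenated across genes, this feeds RAxML (GTRGAMMA)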
All genes within a range of 1–3 kb that met a minimum SNPs/kb value threshold were then aggregated to select 1690, 1006, 748, 500, 402, 301, 204, 151 and 100 of the most polymorphic genes using progressively increasing SNPs/kb cut-off values (0.001, 0.006, 0.0105, 0.018, 0.023, 0.0285, 0.036, 0.042 and 0.051, respectively) and used to carry out phylogenetic analysis as described previously. To calculate the number of SNPs present in each of the five global groups defined by DAPC analysis, concatenated alignments of the 242 polymorphic Pst genes for each of the 301 global Pst isolates were used to calculate the total number of SNPs present in each sample using SNP-sites [ 34 ] and plotted using the ggplot2 package [ 35 ] in R. Annotation of the polymorphic Pst gene set The genomic location of each of the 242 polymorphic Pst genes was identified by mapping these gene sequences to the Pst -104 genome [ 16 ] using minimap2 version 2.15 [ 17 ] with parameters recommended in the manual for pairwise genome alignment (minimap -ax asm10). Locations were processed into BED format using bedtools version 2.27.0 [ 36 ] and analysed and plotted using R. GO term analysis of the 242 genes was conducted using BLAST2GO version 5.2 [ 37 ]. DNA extraction and amplification of Pst genes Pst -infected wheat leaf samples were collected from the field and stored in RNAlater®. These samples consisted of a single lesion or rust pustule. Excess RNAlater® was removed, and ~ 10–20 mg of tissue was used for each DNA extraction. DNA was extracted using a DNeasy 96 Plant Kit (Qiagen, Manchester, UK) following the manufacturer’s instructions and eluted twice through the column in a total of 30 μl elution buffer. The DNA extracted was used for amplifying the 242 variable Pst genes via PCR with four pools containing oligonucleotides (primers) with different concentrations optimised for multiplex PCR (Additional file 1 : Table S4) using Q5® Hot Start High-Fidelity 2X Master Mix (New England Biolabs, MA, USA). PCR conditions used were 98 °C for 30 s, 40 cycles of 98 °C for 10 s, 63 °C for 30 s and 72 °C for 2 min 30 s, and a final extension of 72 °C for 2 min. PCR products were purified using a QIAquick PCR Purification Kit (Qiagen, Manchester, UK) following the manufacturer’s instructions and eluted twice through the column in a total of 30 μl elution buffer. The concentration of purified PCR products from each primer pool was measured using a Qubit dsDNA HS Assay Kit (Invitrogen, MA, USA) following the manufacturer’s instructions. Illumina library preparation for amplicon sequencing Four Pst -infected wheat samples (17.0504, 17.0505, 17.0506 and 17.0507) were utilised for amplicon sequencing using the MiSeq platform (Illumina, CA, USA). Following DNA extraction and PCR amplification of the 242 selected Pst genes, an equal mass of purified PCR products from each of the four primer pools was combined prior to library preparation, giving a total of 1 μg DNA (250 ng per primer pool; Additional file 1 : Table S14). Samples were prepared for sequencing using a KAPA HyperPlus Library Preparation Kit (Roche, Basel, Switzerland) following the manufacturer’s instructions. PCR products were fragmented enzymatically into sizes of approximately 600 bp using a reaction time of 10 min. Each sample was tagged with a unique barcode to enable sample identification. The resulting libraries had insert sizes of 790–911 bp and were made into an equimolar pool of 40 μl prior to sequencing (Additional file 1 : Table S14). 
Libraries were sequenced using an Illumina MiSeq platform and MiSeq Reagent Kit v3 150 cycles (Illumina, CA, USA) following the manufacturer’s instructions. MinION sequencing of Ethiopian Pst -infected wheat samples For each of the 51 Pst -infected wheat samples collected in Ethiopia in 2016 (one sample) and 2017 (50 samples), an equal mass of PCR products from each of the four primer pools was combined prior to library preparation with a total of between 16 and 400 ng amplicon DNA (4–100 ng per primer pool; Additional file 1 : Table S15). Samples were then processed into multiplexed libraries containing eight samples each using a PCR Barcoding Kit, SQK-PBK004 (Oxford Nanopore Technologies, Oxford, UK) following the manufacturer’s instructions. Equimolar pools were made using eight samples having different barcode tags with a total mass of DNA between 10 and 1000 ng (1.3–100 ng per sample; Additional file 1 : Table S15). Pooled samples were sequenced on a MinION sequencer using Flow Cells FLO-MIN106D R9 version or FLO-MIN107 R9 version (Oxford Nanopore Technologies, Oxford, UK) following the manufacturer’s instructions until 2 million reads were generated (250,000 per sample; Additional file 1 : Table S15). In-field sequencing of Pst -infected wheat samples in Ethiopia Four Pst -infected wheat leaf samples (Et-0001, Et-0002, Et-0003, Et-0004) were collected from different locations in Ethiopia in 2018 (Additional file 1 : Table S1) and stored in RNAlater®; approximately 10–20 mg of tissue was used for DNA extraction. Samples were disrupted in 200 μl lysis buffer [0.1 M Tris-HCl pH 7.5, 0.05 M ethylenediaminetetraacetic acid (EDTA) pH 8 and 1.25% sodium dodecyl sulphate (SDS)] using a micropestle for approximately 30 s. The ground tissue was allowed to settle and the supernatant removed. DNA was purified from the supernatant by adding 200 μl AMPure XP beads (Beckman Coulter, CA, USA) to each sample, mixing briefly and incubating at room temperature for 15 min. Tubes were placed on a magnetic rack to allow the supernatant to clear. The supernatant was removed and discarded before beads were washed twice with 80% ethanol and the supernatant removed. The beads were left on the magnetic rack to dry, and 30 μl nuclease-free water was added to resuspend the pellet. Tubes were removed from the magnet and mixed before incubation at room temperature for 2 min. The tubes were incubated briefly on the magnetic rack, and the clear supernatant containing DNA was transferred into a new tube. The extracted DNA was used for amplifying the 242 variable Pst genes via PCR with four pools containing primers with different concentrations optimised for multiplex PCR (Additional file 1 : Table S4) using AmpliTaq Gold™ 360 Master Mix (Applied Biosystems, CA, USA) in a 50 μl reaction volume. The PCR conditions used were 95 °C for 10 min, 40 cycles of 95 °C for 15 s, 51 °C for 30 s and 72 °C for 4 min, and a final extension of 72 °C for 7 min. DNA was purified from the PCR product using 50 μl AMPure XP beads (Beckman Coulter, CA, USA). For each sample, an equal volume of each purified PCR pool was combined for each library preparation. The final volume per sample entered into each library preparation was 7.5 μl (1.88 μl per purified PCR pool). Samples were prepared for sequencing using a Rapid Barcoding Kit, SQK-RBK004 (Oxford Nanopore Technologies, Oxford, UK). 
Libraries were sequenced on the MinION platform using Flow Cells FLO-MIN106D R9 version (Oxford Nanopore Technologies, Oxford, UK) following the manufacturer’s instructions until 250,000 reads were generated (Additional file 1 : Table S15). Data analysis of samples sequenced using the MinION platform Following base calling and demultiplexing using Albacore version 2.3.3 (Oxford Nanopore Technologies, Oxford, UK), reads from each sample generated on the MinION platform were trimmed using porechop version 0.2.3 ( ) and aligned to the 242-gene set from PST-130 using BWA-MEM version 0.7.17 [ 38 ] with default settings and processed using SAMTOOLS version 1.8 [ 39 ]. Oxford Nanopore reads are known to be error prone, and BWA-MEM was therefore selected as it is particularly suited to such datasets. Consensus sequences based on these alignments were generated for each sample by calling bases with a minimum of 20× coverage. Heterokaryotic positions were deemed as such when the minor allele had an allele frequency of at least 0.25. For phylogenetic analysis, concatenated alignments of the 242-gene set from each of the Pst samples were used. Comparative analysis of the Illumina MiSeq and MinION sequencing platforms Four samples (17.0504, 17.0505, 17.0506 and 17.0507) were sequenced on the MinION and the Illumina MiSeq platforms as described above. Data generated on the MinION platform were analysed as described. The MiSeq data were aligned to the 242 Pst gene set using BWA-MEM version 0.7.17 [ 38 ] with default settings and processed using SAMTOOLS version 1.8 [ 39 ]. Consensus sequences based on these alignments were generated for each sample by calling bases with a minimum of 20× coverage. Heterozygous positions were deemed as such when the minor allele had an allele frequency of at least 0.25. To compare the MinION and MiSeq platforms, the above procedure of generating MinION consensus sequences was repeated using different coverage cut-off values and the sequences for each of the 242 Pst genes at each of the different coverage cut-off values were compared against the Illumina consensus sequence (called using a 20× coverage cut-off). Positions that were deemed ambiguous (< 20× coverage) in the MiSeq consensus sequences were excluded from the analysis. Percentage identity between the MinION and MiSeq consensus sequences was calculated and visualised in R using the ggplot2 package [ 35 ]. The coverage values for each gene as a percentage of the total coverage for each of the four samples sequenced using the MinION and MiSeq platforms were calculated using SAMTOOLS version 1.8 [ 39 ] and R. A heatmap of the number of SNPs found in each of the 242 genes for each of the four samples compared with the PST-130 reference genome using Illumina MiSeq and MinION sequencing technologies was generated using the pheatmap package in R [ 40 ]. Availability of data and materials The raw transcriptomic and genomic sequence data that support the findings of this study have been deposited in the European Nucleotide Archive (ENA: ERP113880) [ 41 ]. All custom computer code and an easy-to-follow guide for installing prerequisite software using the Python Conda package manager have been deposited on GitHub ( ) [ 42 ].
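The consensus and heterokaryote calling rules given above (bases called only at 20× depth or more; a position scored heterokaryotic when the minor allele reaches a frequency of 0.25) can be sketched with pysam as follows. The BAM file and gene name are placeholders, and real code would loop over all 242 genes and handle indels, which this sketch ignores.

# Sketch of the consensus/heterokaryote calling described above. Uses pysam's
# per-base nucleotide counts over a sorted, indexed BAM.
import pysam

MIN_DEPTH, MIN_MAF = 20, 0.25
BASES = "ACGT"

bam = pysam.AlignmentFile("sample.sorted.bam", "rb")
counts = bam.count_coverage("PST130_g001", quality_threshold=0)  # 4 arrays: A,C,G,T

consensus, het_sites = [], 0
for col in zip(*counts):
    depth = sum(col)
    if depth < MIN_DEPTH:
        consensus.append("N")  # ambiguous: below the 20x cutoff
        continue
    order = sorted(range(4), key=lambda i: col[i], reverse=True)
    major, minor = order[0], order[1]
    if col[minor] / depth >= MIN_MAF:
        het_sites += 1  # heterokaryotic position (minor allele freq >= 0.25)
    consensus.append(BASES[major])

print("".join(consensus)[:60], "... het sites:", het_sites)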
Scientists have created a new mobile surveillance technique to rapidly diagnose one of agriculture's oldest enemies: wheat rusts. Using a hand-held DNA sequencing device, they can define the precise strain of the wheat rust fungus in a farmer's field within just 48 hours of collecting samples. This gives researchers worldwide the vital time needed to spot and control emerging epidemics. The wheat rust fungi have threatened wheat production almost since the dawn of agriculture, and harvests in all major wheat-growing areas worldwide remain under threat. The best defense is to grow wheat varieties resistant to infection, but over time, new strains of rust develop that lead to new epidemics. The best way to stay ahead of the rusts is to quickly identify and track the disease in the field. The paper, "MARPLE, a point-of-care, strain-level disease diagnostics and surveillance tool for complex fungal pathogens," in BMC Biology shows how a research partnership cut the time needed for diagnostics from many months in high-end labs to just 2 days at the side of an Ethiopian field. "Knowing which strain you have is critical information that can be incorporated into early warning systems and results in more effective control of disease outbreaks in farmers' fields," said Dr. Dave Hodson, a rust pathologist at CIMMYT in Ethiopia and co-author. "The challenge is that tracking the wheat rusts is not as simple as you would expect. There are many different strains, all with unique characteristics that cannot be told apart without lengthy in-lab tests. Consequently, identifying which ones are a threat can take many, many months, by which time the infection has likely spread," said Dr. Diane Saunders, lead author and Group Leader at the John Innes Centre. The new MARPLE (Mobile And Real-time PLant disEase) diagnostic platform the researchers created targets parts of the rust genetic code that can be sequenced on the portable MinION sequencing platform from Oxford Nanopore. "This helps us tell strains apart and quickly recognize those we've seen before or spot new ones that could be a new threat," said first author Dr. Guru Radhakrishnan from the John Innes Centre. "What started as a proof of concept is now already being used in the field," said Dr. Saunders. "This development will enable increased surveillance of crop disease pressure and more targeted control." Part of the challenge for wheat farmers is that they are in a constant game of cat and mouse with the disease. Knowing which wheat rust strains are in the local area can feed into advice on which wheat varieties are safest to grow. "Finally, with this project we can bring the latest technology to field sites to inform not just the researchers but also the farmers," said Tadessa Daba, Director, Agricultural Biotechnology Research Directorate, EIAR. Saving time is not the only benefit: the MARPLE diagnostics method can also be carried out anywhere. Previously, if researchers at field sites wanted to test a suspected infected sample, they had to ship it to one of a handful of specialist labs, frequently overseas. The MARPLE diagnostics method was formulated to operate directly in the field. This in itself can be challenging, with intermittent electricity, no internet access in remote locations and a lack of refrigeration for lab reagents. Yet, if the pipeline were to function at these research stations, it needed to work despite these barriers.
The new platform takes protocols that normally require a lot of equipment and expertise and brings them to a level that requires fewer facilities and less specialist knowledge. "We've tried to make as few cold chain elements as possible," said Ph.D. student and co-author Nicola Cook, "with simple steps that you can perform with chemicals that are readily available locally." This combined speed and self-reliance allows in-country research groups to coordinate more closely with government ministries and national breeding programs that work to protect local farmers. As a proof of principle, the entire pipeline, from field sample to strain-level result, was carried out in Holeta, Ethiopia, in September last year. The research group demonstrated the MARPLE diagnostics pipeline operating successfully beside a wheat field, from the back of a Landcruiser. "I'm really highly impressed with this project," said Tesfaye Disasa, Director of the Biosciences Institute, EIAR. "It introduces new technology into the country, as well as the capacity building it brings to the institute." For their work on creating the MARPLE platform, the team was awarded the Innovator of the Year award for international impact from the Biotechnology and Biological Sciences Research Council in May this year. Following this award, and through the support of the CGIAR Inspire challenge and the Delivering Genetic Gain in Wheat Project, a further four field stations across Ethiopia will be set up to use the MARPLE mobile lab. "This is real national and international work that ultimately helps the resource-poor farmers," said Dr. Badada Girima, Rust Pathologist, Delivering Genetic Gain in Wheat program. The paper outlines the steps that were taken to deliver this combined computational and experimental framework. It is hoped that by publishing this process, similar surveillance methods can be developed for other complex fungal pathogens that pose threats to plant, animal and human health.
10.1186/s12915-019-0684-y
Medicine
Researchers discover new link between heart disease and red meat
Paper: dx.doi.org/10.1038/nm.3145 Journal information: Nature Medicine
http://dx.doi.org/10.1038/nm.3145
https://medicalxpress.com/news/2013-04-link-heart-disease-red-meat.html
Abstract Intestinal microbiota metabolism of choline and phosphatidylcholine produces trimethylamine (TMA), which is further metabolized to a proatherogenic species, trimethylamine- N -oxide (TMAO). We demonstrate here that metabolism by intestinal microbiota of dietary l -carnitine, a trimethylamine abundant in red meat, also produces TMAO and accelerates atherosclerosis in mice. Omnivorous human subjects produced more TMAO than did vegans or vegetarians following ingestion of l -carnitine through a microbiota-dependent mechanism. The presence of specific bacterial taxa in human feces was associated with both plasma TMAO concentration and dietary status. Plasma l -carnitine levels in subjects undergoing cardiac evaluation ( n = 2,595) predicted increased risks for both prevalent cardiovascular disease (CVD) and incident major adverse cardiac events (myocardial infarction, stroke or death), but only among subjects with concurrently high TMAO levels. Chronic dietary l -carnitine supplementation in mice altered cecal microbial composition, markedly enhanced synthesis of TMA and TMAO, and increased atherosclerosis, but this did not occur if intestinal microbiota was concurrently suppressed. In mice with an intact intestinal microbiota, dietary supplementation with TMAO or either carnitine or choline reduced in vivo reverse cholesterol transport. Intestinal microbiota may thus contribute to the well-established link between high levels of red meat consumption and CVD risk. Main The high level of meat consumption in the developed world is linked to CVD risk, presumably owing to the large content of saturated fats and cholesterol in meat 1 , 2 . However, a recent meta-analysis of prospective cohort studies showed no association between dietary saturated fat intake and CVD, prompting the suggestion that other environmental exposures linked to increased meat consumption are responsible 3 . In fact, the suspicion that the cholesterol and saturated fat content of red meat may not be sufficiently high to account for the observed association between CVD and meat consumption has stimulated investigation of alternative disease-promoting exposures that accompany dietary meat ingestion, such as high salt content or heterocyclic compounds generated during cooking 4 , 5 . To our knowledge, no studies have yet explored the participation of commensal intestinal microbiota in modifying the diet-host interaction with reference to red meat consumption. The microbiota of humans has been linked to intestinal health, immune function, bioactivation of nutrients and vitamins, and, more recently, complex disease phenotypes such as obesity and insulin resistance 6 , 7 , 8 . We recently reported a pathway in both humans and mice linking microbiota metabolism of dietary choline and phosphatidylcholine to CVD pathogenesis 9 . Choline, a trimethylamine-containing compound and part of the head group of phosphatidylcholine, is metabolized by gut microbiota to produce an intermediate compound known as TMA ( Fig. 1a ). TMA is rapidly further oxidized by hepatic flavin monooxygenases to form TMAO, which is proatherogenic and associated with cardiovascular risks. These findings raise the possibility that other dietary nutrients possessing a trimethylamine structure may also generate TMAO from gut microbiota and promote accelerated atherosclerosis. TMAO has been proposed to induce upregulation of macrophage scavenger receptors and thereby potentially contribute to enhanced “forward cholesterol transport” 10 .
Whether TMAO is linked to the development of accelerated atherosclerosis through additional mechanisms, and which specific microbial species contribute to TMAO formation, have not been fully clarified. Figure 1: TMAO production from l -carnitine is a microbiota-dependent process in humans. ( a ) Structure of carnitine and scheme of carnitine and choline metabolism to TMAO. l -Carnitine and choline are both dietary trimethylamines that can be metabolized by microbiota to TMA. TMA is then further oxidized to TMAO by flavin monooxygenases (FMOs). ( b ) Scheme of the human l -carnitine challenge test. After a 12-h overnight fast, subjects received a capsule of d3-(methyl)-carnitine (250 mg) alone, or in some cases (as in data for the subject shown) also an 8-ounce steak (estimated 180 mg l -carnitine), whereupon serial plasma and 24-h urine samples were obtained for TMA and TMAO analyses (visit 1). After a weeklong regimen of oral broad-spectrum antibiotics to suppress the intestinal microbiota, the challenge was repeated (visit 2), and then again a final third time after a ≥3-week period to permit repopulation of intestinal microbiota (visit 3). ( c , d ) LC-MS/MS chromatograms of plasma TMAO ( c ) and d3-TMAO ( d ) in an omnivorous subject using specific precursor → product ion transitions indicated at t = 8 h for each visit. ( e ) Stable-isotope-dilution LC-MS/MS time course measurements of d3-labeled TMAO and carnitine in plasma collected from sequential venous blood draws at the indicated time points. Data shown in c – e are from a representative female omnivorous subject who underwent carnitine challenge. Data are organized vertically to correspond with the visit schedule indicated in b . Full size image l -carnitine is an abundant nutrient in red meat and contains a trimethylamine structure similar to that of choline ( Fig. 1a ). Although dietary ingestion is a major source of l -carnitine in omnivores, it is also endogenously produced in mammals from lysine and serves an essential function in transporting fatty acids into the mitochondrial compartment 10 , 11 . l -Carnitine ingestion and supplementation in industrialized societies have markedly increased 12 . Whether there are potential health risks associated with the rapidly growing practice of consuming l -carnitine supplements has not been evaluated. Herein we examine the gut microbiota–dependent metabolism of l -carnitine to produce TMAO in both rodents and humans (omnivores and vegans or vegetarians). Using isotope tracer studies in humans, clinical studies to examine the effects on cardiovascular disease risk, and animal models including germ-free mice, we demonstrate a role for gut microbiota metabolism of l -carnitine in atherosclerosis pathogenesis. We show that TMAO, and its dietary precursors choline and carnitine, suppress reverse cholesterol transport (RCT) through gut microbiota–dependent mechanisms in vivo . Finally, we define microbial taxa in feces of humans whose proportions are associated with both dietary carnitine ingestion and plasma TMAO concentrations. We also show microbial compositional changes in mice associated with chronic carnitine ingestion and a consequent marked enhancement in TMAO synthetic capacity in vivo . Results Metabolomic studies link l -carnitine with CVD Given the similarity in structure between l -carnitine and choline ( Fig. 
1a ), we hypothesized that dietary l -carnitine in humans, like choline and phosphatidylcholine, might be metabolized to produce TMA and TMAO in a gut microbiota–dependent fashion and be associated with atherosclerosis risk. To test this hypothesis, we initially examined data from our recently published unbiased small-molecule metabolomics analyses of plasma analytes and CVD risks 9 . An analyte with identical molecular weight and retention time to l -carnitine was not in the top tier of analytes that met the stringent P value cutoff for association with CVD. However, a hypothesis-driven examination of the data using less stringent criteria (no adjustment for multiple testing) revealed an analyte with the appropriate molecular weight and retention time for l -carnitine that was associated with cardiovascular event risk ( P = 0.04) ( Supplementary Table 1 ). In further studies we were able to confirm the identity of the plasma analyte as l -carnitine and develop a quantitative stable-isotope-dilution liquid chromatography tandem mass spectrometry (LC-MS/MS) method for measuring endogenous l -carnitine concentrations in all subsequent investigations ( Supplementary Figs. 1–3 ). Human gut microbiota is required to form TMAO from l -carnitine The participation of gut microbiota in TMAO production from dietary l -carnitine in humans has not previously been shown. In initial subjects (omnivores), we developed an l -carnitine challenge test in which the subjects were fed a large amount of l -carnitine (an 8-ounce sirloin steak, corresponding to an estimated 180 mg of l -carnitine) 13 , 14 , 15 , together with a capsule containing 250 mg of a heavy isotope–labeled l -carnitine (synthetic d3-(methyl)- l -carnitine). At visit 1 post-prandial increases in plasma d3-TMAO and d3- l -carnitine concentrations were readily detected, and 24-h urine collections also revealed the presence of d3-TMAO ( Fig. 1b–e and Supplementary Figs. 4 and 5 ). Figure 1 and Supplementary Figure 4 show tracings from a representative omnivorous subject, of five studied with sequential serial blood draws after carnitine challenge. In most subjects examined, despite clear increases in plasma d3-carnitine and d3-TMAO concentrations over time ( Fig. 1e ), post-prandial changes in endogenous (unlabeled) carnitine and TMAO concentrations were modest ( Supplementary Fig. 5 ), consistent with total body pools of carnitine and TMAO that are relatively very large in relation to the amounts of carnitine ingested and TMAO produced from the carnitine challenge. To examine the potential contribution of gut microbiota to TMAO formation from dietary l -carnitine, we placed the five volunteers studied above on oral broad-spectrum antibiotics to suppress intestinal microbiota for a week and then performed a second l -carnitine challenge (visit 2). We noted near complete suppression of detectable endogenous TMAO in both plasma and urine after a week-long treatment with the antibiotics (visit 2) ( Fig. 1b–e and Supplementary Fig. 5 ). Moreover, we observed virtually no detectable formation of either native or d3-labeled TMAO in all post-prandial plasma samples or 24-h urine samples examined after carnitine challenge, consistent with an obligatory role for gut microbiota in TMAO formation from l -carnitine ( Fig. 1b–e and Supplementary Fig. 4 ). 
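For readers unfamiliar with the stable-isotope-dilution LC-MS/MS quantification described above, the underlying calculation is a ratio: the endogenous analyte concentration is inferred from its peak area relative to that of a spiked, isotopically labelled internal standard of known concentration. The sketch below illustrates this with hypothetical peak areas and a hypothetical d9-carnitine internal standard; these values are not reported in the paper.

# Conceptual sketch of stable-isotope-dilution quantification. Peak areas and
# the response factor are illustrative placeholders, not study values.
def isotope_dilution_conc(area_analyte, area_istd, istd_conc_um, response_factor=1.0):
    """Endogenous concentration (uM) from analyte/internal-standard area ratio."""
    return response_factor * (area_analyte / area_istd) * istd_conc_um

# e.g. plasma L-carnitine against a d9-carnitine internal standard (hypothetical)
print(isotope_dilution_conc(area_analyte=8.4e5, area_istd=3.9e5, istd_conc_um=20.0))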
In contrast, we detected both d3- l -carnitine and unlabeled l -carnitine after the l -carnitine challenge, and there was little change in the overall time course before (visit 1) versus after (visit 2) antibiotic treatment ( Fig. 1e and Supplementary Fig. 5 ). We rechallenged the same subjects several weeks after discontinuation of antibiotics (visit 3). Baseline and post- l -carnitine challenge plasma and urine samples again showed TMAO and d3-TMAO formation, consistent with intestinal recolonization ( Fig. 1b–e and Supplementary Figs. 4 and 5 ). Collectively, these data show that TMAO production from dietary l -carnitine in humans is dependent on intestinal microbiota. Vegans and vegetarians produce less TMAO from l -carnitine The capacity to produce TMAO (native and d3-labeled) after l -carnitine ingestion was variable among individuals. A post hoc nutritional survey that the volunteers completed suggested that antecedent dietary habits (red meat consumption) may influence the capacity to generate TMAO from l -carnitine (data not shown). To test this prospectively, we examined TMAO and d3-TMAO production after the same l -carnitine challenge, first in a long-term (>5 years) vegan who consented to the carnitine challenge (including both steak and d3-(methyl)-carnitine consumption) ( Fig. 2a ). Also shown for comparison are data from a single representative omnivore with self-reported frequent (near daily) dietary consumption of red meat (beef, venison, lamb, mutton, duck or pork). Post-prandially, the omnivore showed increases in TMAO and d3-TMAO concentrations in both sequential plasma measurements ( Fig. 2a ) and in a 24-h urine collection sample ( Fig. 2b ). In contrast, the vegan showed nominal plasma and urine TMAO levels at baseline, and virtually no capacity to generate TMAO or d3-TMAO in plasma after the carnitine challenge ( Fig. 2a,b ). The vegan subject also had lower fasting plasma levels of l -carnitine compared to the omnivorous subject ( Supplementary Fig. 6 ). Figure 2: The formation of TMAO from ingested l -carnitine is negligible in vegans, and fecal microbiota composition associates with plasma TMAO concentrations. ( a , b ) Data from a male vegan subject in the carnitine challenge consisting of co-administration of 250 mg d3-(methyl)-carnitine and an 8-ounce sirloin steak and, for comparison, a representative female omnivore who frequently consumes red meat. Plasma TMAO and d3-TMAO were quantified after l -carnitine challenge ( a ) and in a 24-h urine collection ( b ). Urine TMAO and d3-TMAO reported as ratio with urinary creatinine (Cr) to adjust for urinary dilution. Data are expressed as means ± s.e.m. ( c ) Baseline fasting plasma concentrations of TMAO and d3-TMAO from male and female vegans and vegetarians ( n = 26) and omnivores ( n = 51). Boxes represent the 25th, 50th, and 75th percentiles and whiskers represent the 10th and 90th percentiles. ( d ) Plasma d3-TMAO concentrations in male and female vegans and vegetarians ( n = 5) and omnivores ( n = 5) participating in a d3-(methyl)-carnitine (250 mg) challenge without concomitant steak consumption. The P value shown is for the comparison of the area under the curve (AUC) of groups using the Wilcoxon nonparametric test. Data points represent mean ± s.e.m. of n = 5 per group. ( e ) Baseline TMAO plasma concentrations associate with enterotype 2 ( Prevotella ) in male and female subjects with a characterized gut microbiome enterotype. 
Boxes represent the 25th, 50th (middle lines) and 75th percentiles, and whiskers represent the 10th and 90th percentiles. ( f ) Plasma TMAO concentrations (plotted on x axes) and the proportion of operational taxonomic units (OTUs, plotted on y axes), determined as described in Supplementary Methods . Subjects were grouped by dietary status as either vegan or vegetarian ( n = 23) or omnivore ( n = 30). P value shown is for comparisons between dietary groups using a robust Hotelling T 2 test. Data are expressed as means ± s.e.m. for both TMAO concentration ( x axis) and the proportion of OTUs ( y axis). To confirm and extend these findings, we examined additional vegans and vegetarians ( n = 26) and omnivorous subjects ( n = 51). Fasting baseline TMAO levels were significantly lower among vegan and vegetarian subjects compared to omnivores ( Fig. 2c ). In a subset of these individuals, we performed an oral d3-(methyl)-carnitine challenge (but with no steak) and confirmed that long-term (all >1 year) vegans and vegetarians have a markedly reduced capacity to synthesize TMAO from oral carnitine ( Fig. 2c,d ). Vegans and vegetarians challenged with d3-(methyl)-carnitine also had significantly higher post-challenge plasma concentrations of d3-(methyl)-carnitine compared to omnivorous subjects ( Supplementary Fig. 7 ), perhaps due to decreased intestinal microbial metabolism of carnitine before absorption. TMAO levels are associated with human gut microbial taxa Dietary habits (for example, vegan or vegetarian versus omnivore or carnivore) are associated with significant alterations in intestinal microbiota composition 16 , 17 , 18 . To determine microbiota composition, we sequenced the gene encoding bacterial 16S rRNA in fecal samples from a subset of the vegans and vegetarians ( n = 23) and omnivores ( n = 30) studied above. In parallel, we quantified plasma TMAO, carnitine and choline concentrations by stable-isotope-dilution LC-MS/MS. Global analysis of taxa proportions ( Supplementary Methods ) revealed significant associations with plasma TMAO concentrations ( P = 0.03), but not with plasma carnitine ( P = 0.77) or choline ( P = 0.74) concentrations. After false discovery rate (FDR) adjustment for multiple comparisons, several bacterial taxa remained significantly (FDR-adjusted P < 0.10) associated with plasma TMAO concentration ( Supplementary Fig. 8 ). When we classified subjects into previously reported enterotypes 19 on the basis of fecal microbial composition, individuals with an enterotype characterized by enriched proportions of the genus Prevotella ( n = 4) had higher ( P < 0.05) plasma TMAO concentrations than did subjects with an enterotype notable for enrichment in the Bacteroides ( n = 49) genus ( Fig. 2e ). Examination of the proportion of specific bacterial genera and subject plasma TMAO concentrations revealed several taxa (at the genus level) that were simultaneously significantly associated with both vegan or vegetarian versus omnivore status and plasma TMAO concentration ( Fig. 2f ). These results indicate that preceding dietary habits may modulate both intestinal microbiota composition and the ability to synthesize TMA and TMAO from dietary l -carnitine.
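The taxon-level association testing summarized above (a correlation test per taxon followed by FDR adjustment, with significance called at FDR-adjusted P < 0.10) can be sketched generically. A minimal Python example on simulated data, using Spearman rank correlation and Benjamini-Hochberg control as named in the Methods; all data here are invented:

```python
import numpy as np
from scipy.stats import spearmanr
from statsmodels.stats.multitest import multipletests

rng = np.random.default_rng(1)
n_subjects, n_taxa = 53, 40
tmao = rng.lognormal(1.0, 0.6, n_subjects)           # invented plasma TMAO (uM)
taxa = rng.dirichlet(np.ones(n_taxa), n_subjects)    # invented relative abundances

# Spearman rank correlation of each taxon's proportion with TMAO.
pvals = np.array([spearmanr(taxa[:, j], tmao).pvalue for j in range(n_taxa)])

# Benjamini-Hochberg FDR control; alpha mirrors the FDR-adjusted
# P < 0.10 significance threshold used for the taxa analyses.
reject, p_adj, _, _ = multipletests(pvals, alpha=0.10, method="fdr_bh")
print(f"{reject.sum()} taxa significant after FDR adjustment")
```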
TMAO production from dietary l -carnitine is inducible We next investigated the ability of chronic dietary l -carnitine intake to induce gut microbiota–dependent production of TMA and TMAO in mice. Initial LC-MS/MS studies in germ-free mice showed no detectable plasma d3-(methyl)-TMA or d3-(methyl)-TMAO after oral (gastric gavage) d3-(methyl)-carnitine challenge. However, after a several-week period in conventional cages to allow for microbial colonization ('conventionalization'), previously germ-free mice acquired the capacity to produce both d3-(methyl)-TMA and d3-(methyl)-TMAO following oral d3-(methyl)-carnitine challenge ( Supplementary Fig. 9 ). Parallel studies with non-germ-free ('conventional') Apoe −/− mice (lacking apolipoprotein E; on a C57BL/6J background) that had been placed on a cocktail of oral, relatively nonabsorbable broad-spectrum antibiotics previously shown to suppress intestinal microbiota 9 , 20 showed similar results (complete suppression of both TMA and TMAO formation; Supplementary Fig. 10 ). Collectively, these studies confirm in mice an obligatory role for gut microbiota in TMA and TMAO production from dietary l -carnitine. To examine whether dietary l -carnitine can induce TMA and TMAO production from intestinal microbiota, we compared the pre- and post-prandial plasma profiles of Apoe −/− mice on a normal chow diet versus a normal chow diet supplemented with l -carnitine for 15 weeks. The production of both d3-(methyl)-TMA and d3-(methyl)-TMAO after gastric gavage of d3-(methyl)-carnitine was induced by approximately tenfold in mice on the l -carnitine–supplemented diet ( Fig. 3a ). Furthermore, plasma post-prandial d3-(methyl)-carnitine levels in mice in the l -carnitine–supplemented diet arm were substantially lower than those observed in mice on the l -carnitine–free diet (normal chow), consistent with enhanced microbiota-dependent catabolism before absorption in the l -carnitine–supplemented mice. Figure 3: The metabolism of carnitine to TMAO is an inducible trait and associates with microbiota composition. ( a ) d3-carnitine challenge of mice on an l -carnitine–supplemented diet (1.3%) for 10 weeks compared to age-matched normal chow–fed controls. Plasma d3-TMA and d3-TMAO were measured at the indicated times after d3-(methyl)-carnitine administration by oral gavage using stable-isotope-dilution LC-MS/MS. Data points represent mean ± s.e.m. of n = 4 per group. ( b ) Correlation heat map demonstrating the association between the indicated microbiota taxonomic genera and TMA and TMAO concentrations (all reported as mean ± s.e.m. in μM) of mice grouped by dietary status (chow, n = 10 (TMA, 1.3 ± 0.4; TMAO, 17 ± 1.9); and l -carnitine, n = 11 (TMA, 50 ± 16; TMAO, 114 ± 16)). Red denotes a positive association, blue a negative association, and white no association. A single asterisk indicates a significant FDR-adjusted association of P ≤ 0.1, and a double asterisk indicates a significant FDR-adjusted association of P ≤ 0.01. ( c ) Plasma TMAO and TMA concentrations determined by stable-isotope-dilution LC-MS/MS (plotted on x axes) and the proportion of OTUs (plotted on y axes). Statistical and laboratory analyses were performed as described in Supplementary Methods . Data are expressed as means ± s.e.m. for both TMAO or TMA concentrations ( x axis) and the proportion of OTUs ( y axis).
Plasma TMA and TMAO associate with mouse gut microbial taxa The marked effects of an acute l -carnitine challenge (d3-(methyl)-carnitine by gavage) on TMA and TMAO production suggested that chronic l -carnitine supplementation may significantly alter intestinal microbial composition, with an enrichment for taxa better suited for TMA production from l -carnitine. To test this hypothesis, we first identified the cecum as the segment of the entire intestinal tract of mice showing the highest synthetic capacity to form TMA from carnitine (data not shown). We then sequenced 16S rRNA gene amplicons from ceca of mice on either normal chow ( n = 10) or l -carnitine-supplemented ( n = 11) diets and in parallel quantified plasma concentrations of TMA and TMAO ( Fig. 3b ). Global analyses of the presence of the microbiota taxa revealed that, in general, taxa present at a relatively high proportion coincident with high TMA plasma concentrations also tended to be present at a relatively high proportion coincident with high TMAO plasma concentrations. Several bacterial taxa remained significantly associated with plasma TMA and/or TMAO levels after FDR adjustment for multiple comparisons ( Fig. 3b ). Further analyses revealed several bacterial taxa whose proportions were significantly associated (some positively, others inversely) with dietary l -carnitine and with plasma TMA or TMAO concentrations ( P < 0.05) ( Fig. 3c and Supplementary Fig. 11 ). Notably, a direct comparison of taxa associated with plasma TMAO concentrations in humans versus in mice failed to identify common taxa. These results are consistent with prior reports that microbes identified from the distal gut of the mouse represent genera that are typically not detected in humans 16 , 21 . High plasma l -carnitine concentration is associated with CVD We next investigated the relationship of fasting plasma concentrations of l -carnitine with CVD risk in a large, independent cohort of stable subjects ( n = 2,595) undergoing elective cardiac evaluation. Patient demographics, laboratory values and clinical characteristics are provided in Supplementary Table 2 . We observed significant dose-dependent associations between carnitine concentration and risks of prevalent coronary artery disease (CAD) ( P < 0.05), peripheral artery disease (PAD) ( P < 0.05) and overall CVD ( P < 0.05) ( Fig. 4a–c ). Moreover, these associations remained significant following adjustments for traditional CVD risk factors ( P < 0.05) ( Fig. 4a–c ). Plasma concentrations of carnitine were higher in subjects with angiographic evidence of CAD (≥50% stenosis), regardless of the extent (for example, single- versus multivessel) of CAD, as revealed by diagnostic cardiac catheterization (Kruskal-Wallis P < 0.001) ( Fig. 4d ). Figure 4: Relationship between plasma carnitine concentration and CVD risks. ( a – c ) Forest plots of the odds ratios of CAD ( a ), PAD ( b ) and CVD ( c ) by quartiles of carnitine before (closed circles) and after (open circles) logistic regression adjustments with traditional cardiovascular risk factors, including age, sex, history of diabetes mellitus, smoking, systolic blood pressure, LDL cholesterol and HDL cholesterol. Bars represent 95% confidence intervals. ( d ) Relationship of fasting plasma carnitine concentrations and angiographic evidence of CAD. Boxes represent the 25th, 50th and 75th percentiles of plasma carnitine concentration, and whiskers represent the 10th and 90th percentiles.
The Kruskal-Wallis test was used to assess the association of the degree of CAD (none, single-, double- or triple-vessel disease) with plasma carnitine concentrations. ( e ) Forest plot of the hazard ratio of MACE by quartiles of carnitine, unadjusted (closed circles) and after adjusting for traditional cardiovascular risk factors (open circles), or for traditional cardiac risk factors plus creatinine clearance, history of myocardial infarction, history of CAD, burden of CAD (one-, two- or three-vessel disease), left ventricular ejection fraction, baseline medications (angiotensin-converting enzyme (ACE) inhibitors, statins, beta blockers and aspirin) and TMAO levels (open squares). Bars represent 95% confidence intervals. ( f ) Kaplan-Meier plot and hazard ratios with 95% confidence intervals for the unadjusted model, or following adjustments for traditional risk factors as in e . Median plasma concentrations of carnitine (46.8 μM) and TMAO (4.6 μM) within the cohort were used to stratify subjects as having 'high' (≥median) or 'low' (<median) values. We also examined the relationship between fasting plasma concentrations of carnitine and incident (3-year) risk for major adverse cardiac events (MACE: a composite of death, myocardial infarction, stroke and revascularization). Elevated carnitine (4th quartile) concentration was an independent predictor of MACE, even after adjustments for traditional CVD risk factors ( Fig. 4e ). After further adjustment for both plasma TMAO concentration and a larger number of comorbidities that might be known at the time of presentation (for example, extent of CAD, ejection fraction, medications and estimated renal function), the significant relationship between carnitine and MACE risk was completely abolished ( Fig. 4e ). Notably, we observed a significant association between carnitine concentration and incident cardiovascular event risks in Cox regression models after multivariate adjustment, but only among those subjects with concurrently high plasma TMAO concentrations ( P < 0.001) ( Fig. 4f ). Thus, although plasma concentrations of carnitine seem to be associated with both prevalent and incident cardiovascular risks, these results suggest that TMAO, rather than carnitine, is the primary driver of the association of carnitine with cardiovascular risks. Dietary l -carnitine promotes microbiota-dependent atherosclerosis We next investigated whether dietary l -carnitine has an impact on the extent of atherosclerosis in the presence or absence of TMAO formation. We fed Apoe −/− mice from the time of weaning a normal chow diet versus the same diet supplemented with l -carnitine. Aortic root atherosclerotic plaque quantification revealed approximately a doubling of disease burden in l -carnitine–supplemented mice compared to normal chow–fed mice ( Fig. 5a,b ). Parallel studies in mice placed on an oral antibiotic cocktail to suppress intestinal microbiota showed marked reductions in plasma TMA and TMAO concentrations ( Fig. 5c ) and complete inhibition of the dietary l -carnitine–dependent increase in atherosclerosis ( Fig. 5b ). Of note, the increase in atherosclerotic plaque burden with dietary l -carnitine occurred in the absence of proatherogenic changes in plasma lipid, lipoprotein, glucose or insulin levels; moreover, both biochemical and histological analyses of livers from all groups of mice failed to show evidence of steatosis ( Supplementary Tables 3 and 4 and Supplementary Fig. 12 ).
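The incident-risk analyses above (hazard ratios for MACE across carnitine quartiles, with covariate adjustment and median-stratified Kaplan-Meier curves) follow a standard Cox proportional-hazards workflow. A minimal sketch using the Python lifelines package on simulated data; the column names, effect sizes and censoring scheme are invented for illustration, not taken from the cohort:

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(0)
n = 500
q4 = rng.integers(0, 2, n)                  # 1 = 4th-quartile carnitine (invented)
age = rng.normal(63, 9, n)
# Simulate event times whose hazard rises with the quartile flag and age.
hazard = 0.05 * np.exp(0.8 * q4 + 0.03 * (age - 63))
t = rng.exponential(1 / hazard)

df = pd.DataFrame({
    "time": np.minimum(t, 3.0),             # administrative censoring at 3 years
    "mace": (t <= 3.0).astype(int),         # 1 = event observed within follow-up
    "carnitine_q4": q4,
    "age": age,
})

# Adjusted Cox model: the exp(coef) column of the summary is the hazard
# ratio with its 95% confidence interval, as plotted in the Forest plots.
cph = CoxPHFitter()
cph.fit(df, duration_col="time", event_col="mace")
cph.print_summary()
```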
Figure 5: Dietary l -carnitine accelerates atherosclerosis and inhibits reverse cholesterol transport in a microbiota-dependent fashion. ( a ) Representative oil red O–stained aortic roots (counterstained with hematoxylin) of 19-week-old Apoe −/− female mice on the indicated diets in the presence versus absence of antibiotics (ABS) as described in the Online Methods. ( b ) Quantification of mouse aortic root plaque lesion area. Apoe −/− female mice at 19 weeks of age were started on the indicated diets at the time of weaning (4 weeks of age) before killing, and lesion area was quantified as described in the Online Methods. ( c ) Carnitine, TMA and TMAO concentrations as determined using stable-isotope-dilution LC-MS/MS analysis of plasma recovered from mice at the time of killing. ( d ) RCT (72-h stool collection) in adult female (>8 weeks of age) Apoe −/− mice on normal chow versus diet supplemented with either l -carnitine or choline, as well as after suppression of microbiota using a cocktail of antibiotics (+ ABS). Also shown are RCT (72-h stool collection) results in adult female (>8 weeks of age) Apoe −/− mice on normal chow versus diet supplemented with TMAO. ( e , f ) Relative mRNA levels (relative to Actb ) of mouse liver candidate genes involved in bile acid synthesis or transport. Data are expressed as means ± s.e.m. Plasma concentrations of carnitine were significantly higher in l -carnitine–fed mice compared to normal chow–fed controls ( P < 0.05) ( Fig. 5c ). Plasma carnitine concentrations were even higher in mice supplemented with l -carnitine in the antibiotic arm of the study ( Fig. 5c ), presumably as a result of the reduced capacity of microbiota to catabolize l -carnitine. However, as the l -carnitine–supplemented mice that received antibiotics did not show enhanced atherosclerosis, these results are consistent with the notion that it is a downstream microbiota-dependent metabolite, not l -carnitine itself, that promotes atherosclerosis. TMAO inhibits RCT To identify additional mechanisms by which TMAO might promote atherosclerosis, we first noted that TMAO and its trimethylamine nutrient precursors are all cationic quaternary amines that could potentially compete with arginine, thereby limiting its bioavailability and reducing nitric oxide synthesis. However, a direct test of this hypothesis with competition studies using [ 14 C]arginine and TMAO in bovine aortic endothelial cells demonstrated no decrease in [ 14 C]arginine transport ( Supplementary Fig. 13 ). In recent studies we showed that TMAO can promote macrophage cholesterol accumulation in a microbiota-dependent manner by increasing cell surface expression of two proatherogenic scavenger receptors, CD36 and scavenger receptor A (SRA) 9 , 22 , 23 . We envisioned three non-exclusive mechanisms through which cholesterol can accumulate within cells of the artery wall: enhancing the rate of influx (as noted above), enhancing synthesis or diminishing the rate of efflux. To test whether TMAO might alter the canonical regulation of cholesterol biosynthesis genes 24 , we loaded macrophages with cholesterol in the presence or absence of physiologically relevant TMAO concentrations. However, TMAO failed to alter mRNA levels of the low-density lipoprotein (LDL) receptor or cholesterol synthesis genes ( Supplementary Fig. 14 ). Parallel studies examining macrophage inflammatory gene expression 25 and desmosterol levels in the culture medium also failed to show any effect of TMAO ( Supplementary Figs. 14 and 15 ).
We next examined whether TMAO might inhibit cholesterol removal from peripheral macrophages by testing whether dietary sources of TMAO (choline or l -carnitine) inhibit RCT in vivo using an adaptation of an established model system 26 . Mice on either choline (1.3% choline chloride by mass)- or l -carnitine–supplemented diets showed significantly less ( ∼ 30%, P < 0.05) RCT compared to normal chow–fed controls ( Fig. 5d ). Furthermore, suppression of intestinal microbiota (and plasma TMAO concentrations) with oral broad-spectrum antibiotics completely blocked the diet-dependent (for both choline and l -carnitine) suppression of RCT ( Fig. 5d ), suggesting that a microbiota-generated product (for example, TMAO) inhibits RCT ( Supplementary Fig. 16 ). To further test this hypothesis, we placed mice on a TMAO-containing diet. They showed a 35% decrease in RCT relative to mice on a normal chow diet ( Fig. 5d , P < 0.05). Further examination of plasma, liver and bile showed significantly less [ 14 C]cholesterol recovered from plasma of TMAO-fed compared to chow-fed mice (16% lower, P < 0.05) but no changes in counts recovered from liver or bile ( Supplementary Fig. 17 ). TMAO alters sterol metabolism in vivo To better understand the molecular mechanisms through which TMAO suppresses RCT, we examined candidate genes and biological processes in compartments (macrophages, plasma, liver and intestine) known to participate in cholesterol and sterol metabolism and RCT. We exposed peritoneal macrophages recovered from wild-type C57BL/6J mice to TMAO in vitro and quantified mRNA levels of the cholesterol transporters Abca1, Srb1 and Abcg1. TMAO treatment led to modest but statistically significant increases in expression of Abca1 and Abcg1 ( P < 0.05; Supplementary Fig. 18 ). Parallel studies showed corresponding modest TMAO-dependent increases in Abca1-dependent cholesterol efflux to apoA1 as cholesterol acceptor in RAW 264.7 macrophages ( P < 0.01; Supplementary Fig. 19 ). Collectively, these results suggest that the observed global reduction in RCT in vivo induced by TMAO is unlikely to be accounted for by changes in the expression of these transporters. Parallel examination of plasma recovered from mice in the RCT experiments showed no differences in total cholesterol and high-density lipoprotein cholesterol concentrations ( Supplementary Table 5 ). In parallel studies, we examined the mRNA levels of known cholesterol transporters (Srb1, Abca1, Abcg1, Abcg5, Abcg8 and Shp) in mouse liver but found only a modest difference for Srb1 expression ( Supplementary Fig. 20 ). Western blot analysis of liver from TMAO-supplemented mice, however, showed no change in the abundance of Srb1 protein compared to chow (control) mouse livers ( Supplementary Fig. 21 ). In contrast, mRNA levels in the liver of the key bile acid synthetic enzymes Cyp7a1 and Cyp27a1 were significantly lower in mice supplemented with dietary TMAO, with no change in expression of the upstream regulator Shp ( P < 0.05 for each; Fig. 5e and Supplementary Fig. 20 ). Multiple bile acid transporters in the liver (Oatp1, Oatp4, Mrp2, and Ntcp) also showed significant dietary TMAO–induced decreases in expression ( P < 0.05 each); however, Bsep and Ephx1 did not ( Fig. 5f ). In contrast to the liver, TMAO-induced changes in bile acid transporter gene expression were not observed in the gut ( Supplementary Fig. 22 ). 
Taken together, these data show that the gut microbiota–dependent metabolite TMAO affects a major pathway for cholesterol elimination from the body, the bile acid synthetic pathway, at multiple levels. Consistent with the effects of TMAO on bile acid transporter gene expression, mice supplemented with TMAO had a significantly smaller total bile acid pool size compared to normal chow–fed mice ( P < 0.01) ( Fig. 6a ). Dietary supplementation with TMAO also markedly lowered mRNA expression of both types of intestinal cholesterol transporters: Npc1L1, which transports cholesterol into enterocytes from the gut lumen 27 , and Abcg5-Abcg8, which transport cholesterol out of enterocytes into the gut lumen 27 ( Supplementary Fig. 23 ). Previous studies using either Cyp7a1 - or Cyp27a1 -null mice demonstrated a reduction in cholesterol absorption 28 , 29 . In separate studies, dietary TMAO supplementation similarly promoted a decrease in total cholesterol absorption (26% lower than in normal chow–fed mice, P < 0.01) ( Fig. 6b ). Figure 6: Effect of TMAO on cholesterol and sterol metabolism. ( a , b ) Measurement of total bile acid pool size and composition ( a ) and cholesterol absorption ( b ) in adult female (>8 weeks of age) Apoe −/− mice on normal chow diet versus diet supplemented with TMAO for 4 weeks. Data are expressed as means ± s.e.m. ( c ) Summary scheme outlining the proposed pathway by which microbiota participate in atherosclerosis. The microbiota metabolizes dietary l -carnitine and choline to form TMA and TMAO. TMAO affects cholesterol and sterol metabolism in macrophages, liver and intestine. Discussion The dietary nutrient l -carnitine has been studied for over a century 30 . Although eukaryotes can endogenously produce l -carnitine, only prokaryotic organisms can catabolize it 11 . A role for intestinal microbiota in TMAO production from dietary l -carnitine was first suggested by studies in rats 31 . Although TMAO production from alternative dietary trimethylamines has been suggested in humans, a role for the microbiota in the production of TMAO from dietary l -carnitine in humans has not previously been demonstrated 31 , 32 , 33 . The present studies reveal an obligatory role of gut microbiota in the production of TMAO from ingested l -carnitine in humans. They also suggest a new nutritional pathway in CVD pathogenesis that involves dietary l -carnitine, the intestinal microbial community and production of the proatherosclerotic metabolite TMAO. Finally, these studies show that TMAO modulates cholesterol and sterol metabolism at multiple anatomic sites and processes in vivo , with a net effect of increasing atherosclerosis. Our results also suggest a previously unknown mechanism for the observed relationship between dietary red meat ingestion and accelerated atherosclerosis. Consuming foods rich in l -carnitine (predominantly red meat) can increase fasting human l -carnitine concentrations in the plasma 34 . Meats and full-fat dairy products are abundant components of the Western diet and are commonly implicated in CVD. Together, l -carnitine and choline-containing lipids can constitute up to 2% of a Western diet 14 , 15 , 35 . Numerous studies have suggested a decrease in atherosclerotic disease risk in vegan and vegetarian individuals compared to omnivores; reduced levels of dietary cholesterol and saturated fat have been suggested as the mechanism explaining this decreased risk 36 , 37 .
Notably, a recent 4.8-year randomized dietary study showed a 30% reduction in cardiovascular events in subjects consuming a Mediterranean diet (with specific avoidance of red meat) compared to subjects consuming a control diet 38 . The present studies suggest that the reduced ingestion of l -carnitine and total choline by vegans and vegetarians, with attendant reductions in TMAO levels, may contribute to their observed cardiovascular health benefits. Conversely, an increased capacity for microbiota-dependent production of TMAO from l -carnitine may contribute to atherosclerosis, particularly in omnivores who consume high amounts of l -carnitine. One proatherosclerotic mechanism observed for TMAO in the current studies is suppression of RCT ( Fig. 6c ). Dietary l -carnitine and choline each suppressed RCT ( P < 0.05), but only in mice with intact intestinal microbiota and increased TMA and TMAO concentrations. Suppression of the intestinal microbiota completely eliminated choline- and l -carnitine-dependent suppression of RCT. Moreover, direct dietary supplementation with TMAO promoted a similar suppression of RCT. These results are consistent with a gut microbiota–dependent mechanism whereby generation of TMAO impairs RCT, potentially contributing to the observed proatherosclerotic phenotype of these interventions. Another mechanism by which TMAO may promote atherosclerosis is through increasing macrophage SRA and CD36 surface expression and foam cell formation 9 ( Fig. 6c ). Within macrophages, TMAO does not seem to alter known cholesterol biosynthetic and uptake pathways 24 , 39 or the more recently described regulatory role of desmosterol in integrating macrophage lipid metabolism and inflammatory gene responses 25 . In the liver, TMAO decreased the bile acid pool size and lowered the expression of key bile acid synthesis and transport proteins ( Fig. 6c ). However, it is unclear whether these changes contribute to the impairment of RCT. Of note, TMAO lowered expression of Cyp7a1, the major bile acid synthetic enzyme and rate-limiting step in the catabolism of cholesterol. The effect of TMAO is thus consistent with reports of human Cyp7a1 gene variants that are associated with reduced expression of Cyp7a1, leading to decreased bile acid synthesis, decreased bile acid secretion and enhanced atherosclerosis 40 , 41 , 42 . Furthermore, upregulation (as opposed to downregulation) of Cyp7a1 has been reported to lead to expansion of the bile acid pool, increased RCT and reduced atherosclerotic plaque area in susceptible mice 43 , 44 , 45 . Within the intestine, we found that TMAO concentration was also associated with changes in cholesterol metabolism. However, the reduction in cholesterol absorption observed, although consistent with the reduction in intestinal Npc1L1 expression 46 (as well as hepatic Cyp7a1 and Cyp27a1 expression 28 , 29 ), cannot explain the suppression of RCT observed after dietary supplementation with TMAO. Thus, the molecular mechanisms through which gut microbiota formation of TMAO leads to inhibition of RCT are not entirely clear. It is also not known whether TMAO interacts directly with a specific receptor or whether it acts to alter signaling pathways indirectly by altering protein conformation (that is, via allosteric effects). 
Whereas TMA has been reported to influence signal transduction by direct interaction with a family of G protein–coupled receptors 47 , 48 , TMAO, a small quaternary amine with aliphatic character, is reportedly capable of directly inducing conformational changes in proteins, stabilizing protein folding and acting as a small-molecule protein chaperone 49 , 50 . It is thus conceivable that TMAO may alter many signaling pathways without directly acting at a 'TMAO receptor'. A noteworthy finding is the magnitude with which long-term dietary habits affect TMAO synthetic capacity in both humans (vegans and vegetarians versus omnivores) and mice (normal chow versus chronic l -carnitine supplementation). Analyses of microbial composition in human feces and mouse cecal contents revealed specific taxa that segregate with both dietary status and plasma TMAO concentrations. Recent studies have shown that changes in enterotype are associated with long-term dietary patterns 19 . We observed that plasma TMAO concentration varied significantly ( P < 0.05) according to previously reported enterotypes. We also showed an obligatory role for gut microbiota in TMAO formation from dietary l -carnitine in mice and humans. The differences observed in TMAO production after an l -carnitine challenge in omnivore versus vegan subjects are striking and are consistent with the observed differences in microbial community composition. Recent reports have shown differences in microbial communities among vegetarians and vegans versus omnivores 51 . Of note, we observed an increase in baseline plasma TMAO concentrations in what has historically been called enterotype 2 ( Prevotella ), a relatively rare enterotype that in one study was associated with low animal-fat and protein consumption 19 . In our study, three of the four individuals classified into enterotype 2 were self-identified omnivores, suggesting more complexity in the human gut microbiome than anticipated. Indeed, other studies have demonstrated variable results in associating human bacterial genera, including Bacteroides and Prevotella , with omnivorous and vegetarian eating habits 18 , 52 . This complexity is no doubt in part attributable to the fact that there are many species within any genus, and distinct species within the same genus may have different capacities to use l -carnitine as a fuel and form TMA. Indeed, prior studies have suggested that multiple bacterial strains can metabolize l -carnitine in culture 53 , and species within the genus Clostridium differ in their ability to use choline as the sole source of carbon and nitrogen in culture 54 . Our results suggest that multiple 'proatherogenic' (that is, TMA- and TMAO-producing) species probably exist. Consistent with this supposition, others have reported that several bacterial phylotypes are associated with a history of atherosclerosis and that human microbiota biodiversity may in part be influenced by carnivorous eating habits 16 , 19 , 55 . The association between plasma carnitine concentrations and cardiovascular risks further supports the potential pathophysiological importance of a carnitine → gut microbiota → TMA/TMAO → atherosclerosis pathway ( Fig. 6c ). The association between high plasma carnitine concentration and CVD risk disappeared after TMAO levels were added to the statistical model. These observations are consistent with a proposed mechanism whereby oral l -carnitine ingestion contributes to atherosclerotic CVD risk via the microbiota metabolite TMAO.
There are only a few reports of specific intestinal anaerobic and aerobic bacterial species that can use l -carnitine as a carbon and nitrogen source 10 , 11 , 56 . l -carnitine is essential for the import of activated long-chain fatty acids from the cytoplasm into mitochondria for β-oxidation, and dietary supplementation with l -carnitine has been widely studied. Some case reports have described beneficial effects of l -carnitine supplementation in individuals with inherited primary and acquired secondary carnitine deficiency syndromes 13 . l -Carnitine supplementation studies in chronic disease states have reported both positive and negative results 57 , 58 . Oral l -carnitine supplementation in subjects on hemodialysis raises plasma l -carnitine concentrations to normal levels but also substantially increases TMAO levels 57 . A broader potential therapeutic scope for l -carnitine and two related metabolites, acetyl- l -carnitine and propionyl- l -carnitine, has also been explored for the treatment of acute ischemic events and cardiometabolic disorders (reviewed in ref. 58 ). Here too, both positive and negative results have been reported. Potential explanations for the discrepant findings of the various l -carnitine intervention studies include differences in the duration of dosing or in the route of administration. In many studies, l -carnitine or a related molecule is administered over a short interval or via the parenteral route, thereby bypassing gut microbiota (and hence TMAO formation). Discovery of a link between l -carnitine ingestion, gut microbiota metabolism and CVD risk has broad health-related implications. Our studies reveal a new pathway potentially linking dietary red meat ingestion with atherosclerosis pathogenesis. The role of gut microbiota in this pathway suggests new potential therapeutic targets for preventing CVD. Furthermore, our studies have public health relevance, as l -carnitine is a common over-the-counter dietary supplement. Our results suggest that the safety of chronic l -carnitine supplementation should be examined, as high amounts of orally ingested l -carnitine may under some conditions foster growth of gut microbiota with an enhanced capacity to produce TMAO and potentially advance atherosclerosis. Methods Mice and general procedures. Breeders of all conventional mice (C57BL/6J and Apoe −/− mice on a C57BL/6J background) were obtained from Jackson Laboratories. All animal studies were performed under approval of the Animal Research Committee of the Cleveland Clinic. Liver cholesterol was quantified by gas chromatography–MS, and liver triglyceride was measured using glycerol phosphate oxidase reagent as described in Supplementary Methods . Mouse plasma lipids and glucose and human fasting lipid profile, C-reactive protein (CRP) and glucose were measured using the Abbott ARCHITECT platform, Model ci8200 (Abbott Diagnostics). Mouse HDL was isolated using density ultracentrifugation, and insulin levels were quantified by enzyme-linked immunosorbent assay as described in Supplementary Methods . Human plasma myeloperoxidase was measured using the US Food and Drug Administration–cleared CardioMPO assay (Cleveland Heart Lab). Research subjects. All research subjects gave written informed consent. All protocols were approved by the Cleveland Clinic Institutional Review Board. Two cohorts of subjects were used in the present studies.
The first group of volunteers ( n = 30 omnivores and n = 23 vegetarians or vegans) underwent extensive dietary questioning and stool, plasma and urine collection. A subset of subjects with stool collected also underwent an oral l -carnitine challenge ( n = 5 omnivores and n = 5 vegans), consisting of d3-(methyl)-carnitine (250 mg within a veggie capsule (Wonder Laboratories)). Where indicated, additional omnivores and one vegan also underwent l -carnitine challenge testing with combined ingestion of the synthetic d3-(methyl)- l -carnitine capsule (250 mg) and an 8-ounce beef steak (consumed within 10 min). Male and female volunteers were at least 18 years of age. Volunteers participating in the l -carnitine challenge tests were excluded if they were pregnant, had chronic illness (including a known history of heart failure, renal failure, pulmonary disease, gastrointestinal disorders or hematologic diseases), had an active infection, had received antibiotics within 2 months of study enrollment, had used any over-the-counter or prescriptive probiotic or bowel cleansing preparation within the past 2 months, had ingested yogurt within the past 7 d, or had undergone bariatric or other intestinal (for example, gallbladder removal, bowel resection) surgery. All other research subjects were derived from GeneBank, a large longitudinal tissue repository with a connecting clinical database from sequential consenting stable subjects undergoing elective cardiac evaluation. Further description of the GeneBank cohort can be found in Supplementary Methods . Human l -carnitine challenge test. Consenting adult men and women fasted overnight (12 h) before performing the l -carnitine challenge test, which involved baseline blood and spot urine collection, and then oral ingestion ( t = 0 at time of initial ingestion) of a veggie capsule (size 0) (Wonder Laboratories) containing 250 mg of a stable isotope–labeled d3- l -(methyl)-carnitine (under an Investigational New Drug exemption from the US Food and Drug Administration). Where indicated, for a subset of subjects, the l -carnitine challenge also included a natural source of l -carnitine (a cooked 8-ounce sirloin steak) eaten over a 10-min period concurrent with taking the capsule containing the d3- l -(methyl)-carnitine. After combined ingestion of the steak and d3- l -(methyl)-carnitine, a series of sequential venous blood draws was performed at the indicated time points, and a 24-h urine collection was performed. An ensuing 1-week treatment period of oral antibiotics (metronidazole 500 mg and ciprofloxacin 500 mg twice daily) was given to suppress intestinal microbiota that use carnitine to form TMA and TMAO; the l -carnitine challenge was then repeated. After at least 3 weeks off of all antibiotics to allow reacquisition of intestinal microbiota, a third and final l -carnitine challenge test was performed. Dietary habits (vegan versus omnivore) were determined using a questionnaire assessment of dietary l -carnitine intake, similar to that conducted by the Atherosclerosis Risk in Communities study 59 . d3- l -(methyl)-carnitine was prepared by dissolving sodium l -norcarnitine in methanol and reacting it with d3-methyl iodide (Cambridge Isotope) in the presence of potassium hydrogen carbonate to give d3- l -(methyl)-carnitine. Further details regarding d3- l -(methyl)-carnitine synthesis, purification and characterization are described in Supplementary Methods . Metabolomics study.
We previously reported results from a metabolomics study where small-molecule analytes were sought that associated with cardiovascular risks 9 . The metabolomics study had a two-stage screening strategy. In the first phase, unbiased metabolomics studies were performed on randomly selected plasma samples from a learning cohort generated from Genebank subjects who had experienced a major adverse cardiovascular event (defined as nonfatal myocardial infarction, stroke or death) ( n = 50) in the 3-year period following enrollment versus age- and gender-matched controls ( n = 50) who had not experienced an event. A second phase (validation cohort) of unbiased metabolomics analyses was then performed on a nonoverlapping second cohort of cases ( n = 25) and age- and gender-matched controls ( n = 25) using identical inclusion and exclusion criteria. Further details regarding the unbiased metabolomic approach can be found in Supplementary Methods . Identification of l -carnitine and quantification of TMAO, TMA and l -carnitine. Matching collision-induced dissociation (CID) spectra of an unknown plasma metabolite with identical retention time and mass-to-charge ratio ( m/z ) as authentic l -carnitine ( m/z = 162) were obtained as described in Supplementary Methods . Concentrations of carnitine, TMA and TMAO isotopologues in mouse and human plasma samples were determined by stable-isotope-dilution LC-MS/MS in positive multiple reaction monitoring (MRM) mode using deuterated internal standards on an AB Sciex API 5000 triple quadrupole mass spectrometer (Applied Biosystems) as described in Supplementary Methods . In studies quantifying endogenous carnitine and ingested d3- l -(methyl)-carnitine, d9-carnitine was used as internal standard. d9-carnitine was prepared by dissolving 3-hydroxy-4-aminobutyric acid (Chem-Impex Intl.) in methanol and exhaustive reaction with d3-methyl iodide (Cambridge Isotope Labs) in the presence of potassium hydrogen carbonate. Further details regarding synthesis, purification and characterization of d9-carnitine can be found in Supplementary Methods . Human microbiota analyses. Stool samples were stored at −80 °C, and DNA for the gene encoding 16S rRNA was isolated using the MoBio PowerSoil kit according to the manufacturer's instructions. DNA samples were amplified using V1-V2 region primers targeting bacterial 16S genes and sequenced using 454/Roche Titanium technology. Sequence reads from this study are available from the Sequence Read Archive (controlled feeding experiment: SRX037803 , SRX021237 , SRX021236 , SRX020772 , SRX020771 , SRX020588 , SRX020587 , SRX020379 , SRX020378 (metagenomic); cross-sectional study of diet and stool microbiome: SRX020773 , SRX020770 ). The overall association between TMAO measurements and microbiome compositions was assessed using PermanovaG 60 by combining both the weighted and unweighted UniFrac distances. Associations between TMAO measurements and individual taxa proportions were assessed by Spearman's rank correlation test. False discovery rate (FDR) control based on the Benjamini-Hochberg procedure was used to account for multiple comparisons when evaluating these associations. Each of the samples was assigned to an enterotype category on the basis of their microbiome distances (Jensen-Shannon distance) to the medoids of the enterotype clusters as defined in the COMBO data 19 . Association between enterotypes and plasma TMAO concentration was assessed by Wilcoxon rank-sum test. 
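The enterotype assignment described above reduces to nearest-medoid classification under the Jensen-Shannon distance. A minimal Python sketch of that general idea; the medoid vectors and genus panel here are invented, and the real assignment used the COMBO-derived medoids cited in the text:

```python
import numpy as np
from scipy.spatial.distance import jensenshannon

def assign_enterotype(sample, medoids):
    """Assign a genus-abundance profile to the nearest enterotype medoid.

    sample: 1-D array of relative abundances; medoids: dict mapping an
    enterotype label to a reference abundance vector. Jensen-Shannon
    distance is used, mirroring the medoid-based assignment above.
    """
    return min(medoids, key=lambda label: jensenshannon(sample, medoids[label]))

# Invented three-genus profiles (Bacteroides, Prevotella, Ruminococcus).
medoids = {
    "enterotype 1 (Bacteroides)": np.array([0.70, 0.10, 0.20]),
    "enterotype 2 (Prevotella)":  np.array([0.15, 0.70, 0.15]),
}
subject = np.array([0.25, 0.60, 0.15])
print(assign_enterotype(subject, medoids))  # -> enterotype 2 (Prevotella)
```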
Student's t -test was used to test the difference in mean TMAO concentration between omnivores and vegans. A robust Hotelling T 2 test was used to jointly examine the proportion of specific bacterial taxa and TMAO concentrations between groups using R software version 2.15 (ref. 61 ). Mouse microbiota analysis. Microbial community composition was assessed by pyrosequencing 16S rRNA genes derived from ceca of mice fed a normal chow ( n = 11) or l -carnitine ( n = 13) diet. DNA was isolated using the MoBio PowerSoil DNA Isolation Kit. The V4 region of the 16S ribosomal DNA gene was amplified using bar-coded fusion primers (F515/R806) with the 454 A Titanium sequencing adaptor, as further described in Supplementary Methods . The relative abundances of bacteria at each taxonomic level were computed for each mouse, a single representative sequence for each OTU was aligned using PyNAST and a phylogenetic tree was built using FastTree, as further described in Supplementary Methods . Spearman correlations were calculated to assess correlations between the relative abundance of gut microbiota and mouse plasma TMA and TMAO concentrations. False discovery rates (FDR) for the multiple comparisons were estimated for each taxon based on the P values resulting from the correlation estimates, as further described in Supplementary Methods . A robust Hotelling T 2 test was used to jointly examine the proportion of specific bacterial taxa and mouse plasma TMA and TMAO concentrations between groups using R software version 2.15 (ref. 61 ). Aortic root lesion quantification. Apolipoprotein E–knockout mice on a C57BL/6J background ( Apoe −/− ) were weaned at 28 d of age and placed on a standard chow control diet (Teklad 2018). l -Carnitine was introduced by supplementing mouse drinking water with 1.3% l -carnitine (Chem-Impex Intl.), 1.3% l -carnitine and antibiotics, or antibiotics alone. The antibiotic cocktail dissolved in mouse drinking water has previously been shown to suppress commensal gut microbiota and included 0.1% ampicillin sodium salt (Fisher Scientific), 0.1% metronidazole, 0.05% vancomycin (Chem Impex Intl.) and 0.1% neomycin sulfate (Gibco) 20 . Mice were anaesthetized with ketamine and xylazine before terminal bleeding by cardiac puncture to collect blood. Mouse hearts were fixed and stored in 10% neutral-buffered formalin before being frozen in optimal cutting temperature medium for sectioning. Aortic root slides were stained with oil red O and counterstained with hematoxylin. The aortic root atherosclerotic lesion area was quantified as the mean of sequential 6-micron sections approximately 100 microns apart 9 . Germ-free mice and conventionalization studies. 10-week-old female Swiss Webster germ-free mice (SWGF) were obtained from the University of North Carolina Gnotobiotics Core Facility. Germ-free mice underwent gastric gavage with the indicated isotopologues of l -carnitine (see below for details of the l -carnitine challenge) immediately following removal from the germ-free microisolator shipper. After the l -carnitine challenge, germ-free mice were conventionalized by being housed in cages with nonsterile C57BL/6J female mice. Approximately 4 weeks later, the l -carnitine challenge was repeated. Quantification of natural-abundance and isotope-labeled l -carnitine, TMA and TMAO in mouse plasma was performed using stable-isotope-dilution LC-MS/MS as described above.
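The robust Hotelling T 2 test cited above jointly compares a taxon's proportion and the plasma metabolite concentration between diet groups. As a rough illustration of what such a test computes, here is the classical (non-robust) two-sample Hotelling T 2 statistic in Python on simulated mouse data; this is a sketch of the statistic only, not the robust estimator of ref. 61:

```python
import numpy as np
from scipy.stats import f

def hotelling_t2(x, y):
    """Classical two-sample Hotelling T^2 test.

    x, y: (n1, p) and (n2, p) arrays, e.g. columns = (taxon proportion,
    plasma TMAO) per mouse, rows = mice in each diet group.
    Returns the T^2 statistic and an F-based p-value.
    """
    n1, p = x.shape
    n2, _ = y.shape
    d = x.mean(axis=0) - y.mean(axis=0)
    # Pooled covariance of the two groups.
    s = ((n1 - 1) * np.cov(x, rowvar=False) +
         (n2 - 1) * np.cov(y, rowvar=False)) / (n1 + n2 - 2)
    t2 = (n1 * n2) / (n1 + n2) * d @ np.linalg.solve(s, d)
    # T^2 relates to an F distribution with (p, n1 + n2 - p - 1) d.f.
    f_stat = (n1 + n2 - p - 1) / (p * (n1 + n2 - 2)) * t2
    return t2, f.sf(f_stat, p, n1 + n2 - p - 1)

rng = np.random.default_rng(2)
chow = rng.normal([0.02, 17.0], [0.01, 2.0], size=(10, 2))    # invented values
carn = rng.normal([0.06, 114.0], [0.02, 16.0], size=(11, 2))  # invented values
print(hotelling_t2(chow, carn))
```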
Mouse l -carnitine challenge studies. C57BL/6J female or Apoe −/− female mice were given synthetic d3- l -carnitine (150 μl of a 150 mM stock) dissolved in water via gastric gavage using a 1.5-inch 20-gauge intubation needle. Plasma was collected from the saphenous vein at baseline and at the indicated time points. Apoe −/− female mice were used in the study examining the inducibility of microbiota to generate TMA and TMAO following carnitine feeding. For these studies, mice were placed on an l -carnitine–supplemented diet (1.3% l -carnitine in drinking water) for 10 weeks. Quantification of the abundance of native and isotope-labeled forms of carnitine, TMA and TMAO in mouse plasma was performed using stable-isotope-dilution LC-MS/MS as described above. Mouse reverse cholesterol transport, cholesterol absorption and bile acid pool size studies. Adult female (>8 weeks of age) Apoe −/− mice were placed on either a chow diet or an l -carnitine–, choline- or TMAO-supplemented diet for 4 weeks before reverse cholesterol transport, cholesterol absorption or bile acid pool size/composition studies were performed as described in Supplementary Methods . In some RCT experiments, mice were treated with a cocktail of oral antibiotics (as in the atherosclerosis studies described above) for 4 weeks before enrollment. RCT studies were performed using subcutaneous (in the back) injection of [ 14 C]cholesterol-labeled bone marrow–derived macrophages, as further detailed in Supplementary Methods . Feces were collected and analyzed as described in Supplementary Methods . For cholesterol absorption experiments, mice were fasted 4 h before gavage with olive oil supplemented with [ 14 C]cholesterol and [ 3 H]β-sitostanol. Feces were collected over a 24-h period and analyzed as described in Supplementary Methods . Total bile acid pool size and composition were determined in female Apoe −/− mice, with analysis of the combined small intestine, gallbladder, and liver, which were extracted together in ethanol with nor-deoxycholate (Steraloids) added as an internal standard. The extracts were filtered (Whatman paper #2), dried and resuspended in water. The samples were then passed through a C18 column (Sigma) and eluted with methanol. The eluted samples were again dried down and resuspended in methanol. A portion of each sample was subjected to HPLC using a Waters Symmetry C18 column (4.6 × 250 mm, no. WAT054275, Waters Corp.) and a mobile phase consisting of methanol:acetonitrile:water (53:23:24) with 30 mM ammonium acetate, pH 4.91, at a flow rate of 0.7 ml min −1 . Bile acids were detected by an evaporative light-scattering detector (Alltech ELSD 800, nitrogen at 3 bar, drift tube temperature 40 °C) and identified by comparing their respective retention times to those of standards (taurocholate and tauro-β-muricholate from Steraloids; taurodeoxycholate and taurochenodeoxycholate from Sigma; tauroursodeoxycholate from Calbiochem). For quantification, peak areas were integrated using Chromperfect Spirit (Justice Laboratory Software), and bile acid pool size was expressed as μmol per 100 g body weight after correcting for procedural losses based on the nor-deoxycholate internal standard.
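The internal-standard correction for procedural losses mentioned above has a simple form: scale the summed bile acids by the fractional recovery of the spiked nor-deoxycholate, then normalize to body weight. A small Python sketch with invented numbers; the paper describes the correction only qualitatively, so this is an assumed implementation:

```python
def bile_acid_pool_per_100g(measured_umol, is_added_umol, is_recovered_umol,
                            body_weight_g):
    """Bile acid pool size corrected for procedural losses.

    Losses during extraction and cleanup are estimated from the fractional
    recovery of the nor-deoxycholate internal standard; the corrected total
    is then expressed as umol per 100 g body weight. All inputs here are
    illustrative assumptions, not values from the paper.
    """
    recovery = is_recovered_umol / is_added_umol      # e.g. 0.85 = 85% recovered
    corrected_umol = measured_umol / recovery
    return corrected_umol / body_weight_g * 100

# Example: 6.0 umol summed bile acids, 85% IS recovery, 25 g mouse
# -> roughly 28.2 umol per 100 g body weight.
print(bile_acid_pool_per_100g(6.0, 1.0, 0.85, 25.0))
```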
Effects of TMAO on macrophage cholesterol biosynthesis, cholesterol efflux, inflammatory genes and desmosterol levels. The effects of cholesterol loading on the expression of macrophage cholesterol biosynthetic and inflammatory genes, macrophage LDL receptor gene expression and macrophage desmosterol abundance were analyzed as previously described 25 . Thioglycollate-elicited mouse peritoneal macrophages (MPMs) were harvested and cultured in RPMI 1640 supplemented with 10% FCS and penicillin plus streptomycin. MPMs were then lipoprotein-starved in culture for a further 18 h in the absence versus presence of increasing concentrations of cholesterol, acetylated LDL or vehicle, with or without 300 μM TMAO dihydrate (Sigma). Desmosterol in the cholesterol-loading studies was quantified by stable-isotope-dilution GC-MS analysis. Further details of these studies and the cholesterol efflux studies are described in Supplementary Methods . RNA preparation and real-time PCR analysis. RNA was purified from tissue (macrophage, liver or gut) using the animal tissue protocol from the Qiagen RNeasy mini kit. Small bowel used for RNA purification was sectioned sequentially into five equal segments from the duodenum to the ileum before RNA preparation. Purified total RNA and random primers were used to synthesize first-strand cDNA using the High Capacity cDNA Reverse Transcription Kit (Applied Biosystems, Foster City, CA) reverse transcription protocol. Quantitative real-time PCR was performed using Taqman quantitative RT-PCR probes (Applied Biosystems, Foster City, CA) and normalized to tissue β-actin by the ΔΔC T method using StepOne Software v2.1 (Applied Biosystems, Foster City, CA). Statistical analyses. Student's t -test or the Wilcoxon nonparametric test was used to compare group means as deemed appropriate. Analysis of variance (ANOVA, if normally distributed) or the Kruskal-Wallis test (if not normally distributed) was used for multiple group comparisons of continuous variables, and a chi-square test was used for categorical variables. Odds ratios for various cardiac phenotypes (CAD, PAD and CVD) and corresponding 95% confidence intervals were calculated using logistic regression models. Kaplan-Meier analysis with Cox proportional hazards regression was used for time-to-event analysis to determine hazard ratios and 95% confidence intervals for adverse cardiac events (death, myocardial infarction, stroke and revascularization). Adjustments were made for individual traditional cardiac risk factors (age, gender, diabetes mellitus, systolic blood pressure, former or current cigarette smoking, LDL cholesterol, HDL cholesterol), extent of CAD, left ventricular ejection fraction, history of myocardial infarction, baseline medications (aspirin, statins, beta blockers and angiotensin-converting-enzyme (ACE) inhibitors) and renal function by estimated creatinine clearance. The Kruskal-Wallis test was used to assess the effect of the degree of coronary vessel disease on l -carnitine levels. A robust Hotelling T 2 test was used to examine the difference in the proportion of specific bacterial genera along with subject TMAO levels between the different dietary groups 61 . All data were analyzed using R software version 2.15 and Prism (GraphPad Software). Additional methods. Detailed methodology is described in the Supplementary Methods . Accession codes. Sequence Read Archive: SRX020378 , SRX020379 , SRX020587 , SRX020588 , SRX020770 , SRX020771 , SRX020772 , SRX020773 , SRX021236 , SRX021237 , SRX037803 .
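The ΔΔC T normalization named in the qPCR methods has a simple closed form: each sample's target C T is referenced to β-actin, conditions are then compared, and relative expression is 2^(−ΔΔC T ). A short Python sketch with invented C T values (the gene names follow the text; the numbers are not from the paper):

```python
def ddct_fold_change(ct_target_treated, ct_actb_treated,
                     ct_target_control, ct_actb_control):
    """Relative expression by the delta-delta-CT method.

    Each sample's target CT is normalized to beta-actin (Actb), then the
    treated condition is compared with control; fold change = 2^(-ddCT).
    """
    dct_treated = ct_target_treated - ct_actb_treated
    dct_control = ct_target_control - ct_actb_control
    ddct = dct_treated - dct_control
    return 2 ** (-ddct)

# Invented CTs: Cyp7a1 in TMAO-fed versus chow liver, Actb as reference.
print(ddct_fold_change(26.5, 18.0, 25.0, 18.0))  # ~0.35 -> ~65% lower mRNA
```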
A compound abundant in red meat and added as a supplement to popular energy drinks has been found to promote atherosclerosis – or the hardening or clogging of the arteries – according to Cleveland Clinic research published online this week in the journal Nature Medicine. The study shows that bacteria living in the human digestive tract metabolize the compound carnitine, turning it into trimethylamine-N-oxide (TMAO), a metabolite the researchers previously linked in a 2011 study to the promotion of atherosclerosis in humans. Further, the research finds that a diet high in carnitine promotes the growth of the bacteria that metabolize carnitine, compounding the problem by producing even more of the artery-clogging TMAO. The research team was led by Stanley Hazen, M.D., Ph.D., Vice Chair of Translational Research for the Lerner Research Institute and section head of Preventive Cardiology & Rehabilitation in the Miller Family Heart and Vascular Institute at Cleveland Clinic, and Robert Koeth, a medical student at the Cleveland Clinic Lerner College of Medicine of Case Western Reserve University. The study tested the carnitine and TMAO levels of omnivores, vegans and vegetarians, and examined the clinical data of 2,595 patients undergoing elective cardiac evaluations. They also examined the cardiac effects of a carnitine-enhanced diet in normal mice compared to mice with suppressed levels of gut microbes, and discovered that TMAO alters cholesterol metabolism at multiple levels, explaining how it enhances atherosclerosis. The researchers found that increased carnitine levels in patients predicted increased risks for cardiovascular disease and major cardiac events like heart attack, stroke and death, but only in subjects with concurrently high TMAO levels. Additionally, they found specific gut microbe types in subjects associated with both plasma TMAO levels and dietary patterns, and that baseline TMAO levels were significantly lower among vegans and vegetarians than omnivores. Remarkably, vegans and vegetarians, even after consuming a large amount of carnitine, did not produce significant levels of the microbe product TMAO, whereas omnivores consuming the same amount of carnitine did. "The bacteria living in our digestive tracts are dictated by our long-term dietary patterns," Hazen said. "A diet high in carnitine actually shifts our gut microbe composition to those that like carnitine, making meat eaters even more susceptible to forming TMAO and its artery-clogging effects. Meanwhile, vegans and vegetarians have a significantly reduced capacity to synthesize TMAO from carnitine, which may explain the cardiovascular health benefits of these diets." Prior research has shown that a diet with frequent red meat consumption is associated with increased cardiovascular disease risk, but that the cholesterol and saturated fat content in red meat does not appear to be enough to explain the increased cardiovascular risks. This discrepancy has been attributed to genetic differences, a high salt diet that is often associated with red meat consumption, and even possibly the cooking process, among other explanations. But Hazen says this new research suggests a new connection between red meat and cardiovascular disease. "This process is different in everyone, depending on the gut microbe metabolism of the individual," he says. "Carnitine metabolism suggests a new way to help explain why a diet rich in red meat promotes atherosclerosis." 
While carnitine occurs naturally in red meats, including beef, venison, lamb, mutton, duck and pork, it's also a dietary supplement available in pill form and a common ingredient in energy drinks. With this new research in mind, Hazen cautions that more work needs to be done to examine the safety of chronic carnitine supplementation. "Carnitine is not an essential nutrient; our body naturally produces all we need," he says. "We need to examine the safety of chronically consuming carnitine supplements, as we've shown that, under some conditions, they can foster the growth of bacteria that produce TMAO and potentially clog arteries." This study is the latest in a line of research by Hazen and his colleagues exploring how gut microbes can contribute to atherosclerosis, uncovering new and unexpected pathways involved in heart disease. In a 2011 Nature study, they first discovered that people are not predisposed to cardiovascular disease solely because of their genetic make-up, but also because of how the micro-organisms in their digestive tracts metabolize lecithin, a compound with a structure similar to carnitine.
http://dx.doi.org/10.1038/nm.3145
Biology
Genome sequencing reveals how salmonella carves out a niche in pork production
Mark Kirkwood et al. Ecological niche adaptation of Salmonella Typhimurium U288 is associated with altered pathogenicity and reduced zoonotic potential, Communications Biology (2021). DOI: 10.1038/s42003-021-02013-4 Journal information: Communications Biology
http://dx.doi.org/10.1038/s42003-021-02013-4
https://phys.org/news/2021-05-genome-sequencing-reveals-salmonella-niche.html
Abstract The emergence of new bacterial pathogens is a continuing challenge for agriculture and food safety. Salmonella Typhimurium is a major cause of foodborne illness worldwide, with pigs a major zoonotic reservoir. Two phylogenetically distinct variants, U288 and ST34, emerged in UK pigs around the same time but present different risks to food safety. Here we show, using genomic epidemiology, that ST34 accounts for over half of all S. Typhimurium infections in people while U288 accounts for less than 2%. The U288 clade evolved in the recent past through the acquisition of AMR genes, indels in the virulence plasmid pU288-1, and the accumulation of loss-of-function polymorphisms in coding sequences. U288 replicates more slowly and is more sensitive to desiccation than ST34 isolates and exhibits distinct pathogenicity in the murine model of colitis and in pigs. U288 infection was more disseminated in the lymph nodes, while ST34 was recovered in greater numbers from the intestinal contents. These data are consistent with the adaptation of S. Typhimurium U288 to pigs, which may determine its reduced zoonotic potential. Introduction Emergence of infectious diseases presents new challenges for the management of human and livestock health, with substantial human and economic costs through morbidity and mortality, and lost productivity in agriculture. The emergence of 335 human infectious diseases between 1945 and 2004 was dominated by zoonoses of bacterial aetiological agents. 1 A total of 10 of the 335 emergent infectious diseases during this period were Salmonella enterica, and several more have been reported since, including S. enterica serotype Typhimurium (S. Typhimurium) ST313, associated with invasive non-typhoidal Salmonella (iNTS) disease in sub-Saharan Africa, and extensively drug resistant (XDR) S. Typhi. 2, 3, 4 Salmonella was estimated to have caused around 87 million human infections resulting in approximately 1.2 million deaths globally in the year 2010. Non-typhoidal Salmonella alone, with 4 million disability-adjusted life years lost, has the greatest burden on human health among foodborne diseases. 5 Pigs are one of the major zoonotic reservoirs, with 10–20% of human salmonellosis in Europe attributable to them. 6, 7 An understanding of the evolutionary processes leading to the emergence of new infectious diseases has the potential to improve pathogen diagnostics and surveillance, and to guide policy and interventions aimed at decreasing the burden of human and animal infection. The genus Salmonella consists of over 2500 different serovars that have diverse host ranges, pathogenicity and risk to human health. One of these serovars, S. Typhimurium (including monophasic variants), has consistently been a dominant serovar in pigs globally, and currently accounts for around two thirds of isolates in the UK. 8, 9 Despite the ostensibly stable prevalence of S. Typhimurium in pig populations over time, the epidemiological record indicates a dynamic process in which distinct variants, identified by phage typing, increase and decrease in prevalence over time. 8 Since the middle of the 20th century in Europe, the dominant phage types have been definitive type 9 (DT9), DT204, DT104 and most recently DT193, a monophasic S. Typhimurium (S. 1,4,[5],12:i:-) of sequence type 34 (ST34). 10, 11 At their peak incidence, each accounted for over half of all human isolates of S. Typhimurium.
Phage typing has been useful for surveillance and outbreak detection, but provides only limited information about the relationship of Salmonella isolates, due to their polyphyletic nature and the potential for rapid changes in phage type as a result of mutations and horizontal gene transfer. 12 Nonetheless, sub-genomic and whole genome sequence analysis confirmed that the emergence of new phage types over time does represent the emergence of distinct clonal groups. 13, 14 The drivers of their emergence and the consequences for human and animal health are largely unknown. Since around the year 2003, S. Typhimurium isolates of U288 and DT193 have dominated in UK pigs. 8 U288 appeared in UK pig populations around 2003, followed around the year 2006 by monophasic S. Typhimurium (S. 1,4,[5],12:i:-) ST34, which rapidly emerged in pig populations around the world. 8, 15, 16 U288 and ST34 co-existed in the UK pig population and together accounted for around 80% of isolates. 17 Although approximately half of all pork consumed in the UK comes from UK pig herds, 18 U288 has rarely been isolated from human infections in the UK since its emergence. 17 In contrast, by the year 2013 over half of all S. Typhimurium infections in the UK were due to ST34, reflecting its capacity to be transmitted through the food chain and cause human infections. 19, 20 U288 is not a definitive type and the designation is not widely adopted outside of the UK. Consequently, the prevalence of U288 outside of the UK is unclear. However, we previously detected U288 in pigs in Ireland, 21 a study reported that it was widespread in Danish pig herds, 22 and it was present in Italy. 23 Baseline surveys reported prevalences of 21.2% and 30.5% in mesenteric lymph nodes and caecal contents of UK slaughter pigs, in studies from 2007 to 2013, respectively. 24, 25 It is believed that contamination of pig carcasses with faeces and gut contents at slaughter, and the ability of Salmonella to spread from the gut to other organs, result in contamination of meat products that enter the food chain and pose a risk to humans if improperly handled or cooked. However, the relative risk from contamination of meat by gut contents during slaughter, versus from tissue colonised by Salmonella prior to slaughter, is not known and could be affected by differences in pathogenesis depending on the genotype of Salmonella involved. Survival of Salmonella in food depends upon adaptive responses to environmental stresses, including osmotic stress from biocides and desiccation, the antimicrobial activity of preservatives, and fluctuating temperatures during storage or cooking. In order to cause disease, Salmonella may also need to replicate in food to achieve a population size able to overcome the colonisation resistance of the host. Multiple pathovariants of S. Typhimurium are thought to have evolved from a broad host range ancestor, resulting in distinct host ranges, outcomes of infection and risks to food safety, 12, 26, 27 similar to that observed for distinct serovars. 28 An understanding of the molecular basis of the risk to food safety of S. Typhimurium pathovariants is critical to improve assessment of risk and devise intervention strategies aimed at decreasing Salmonella presence in food. Furthermore, the identification of genomic signatures of zoonotic risk of Salmonella has the potential to further improve source attribution in outbreak investigations, as recently shown using machine learning approaches.
12, 29, 30 We therefore investigated the population structure of S. Typhimurium U288 and the genomic evolution accompanying the clonal expansion of U288 by analysis of whole genome sequences. The objective was to identify representative isolates of the U288 epidemic clade and compare their interaction with the environment and the pig host, to gain insight into the phenotypic consequences of their distinct evolutionary trajectories. Results S. Typhimurium U288 and monophasic S. Typhimurium ST34 exhibit distinct host ranges The epidemiological record indicates that S. Typhimurium U288 was first reported in pigs in the UK around the year 2000 and thereafter became the dominant phage type isolated for much of the following decade. 8 Monophasic S. Typhimurium ST34 emerged around seven years later in UK pigs, and these two variants have co-existed in pig populations since. Retrospective analysis of the frequency of U288 and monophasic S. Typhimurium ST34 isolated from animals in the UK by the Animal and Plant Health Agency (APHA) between 2006 and 2015 revealed distinct host ranges (Fig. 1). During this period, a total of 1535 and 2315 isolates of S. Typhimurium U288 and monophasic S. Typhimurium, respectively, were reported by APHA from animals in the UK. S. Typhimurium U288 was almost exclusively isolated from pigs, while monophasic S. Typhimurium, although predominantly isolated from pigs, 31 was also isolated from multiple host species including cattle and poultry (Fig. 1). Emergence of ST34 coincided with a decrease in the number of U288 isolates, although both variants were present throughout this time. Fig. 1: Animal species source of S. Typhimurium U288 and monophasic S. Typhimurium ST34. Stacked bar chart indicating the animal source (see colour key inset) of S. Typhimurium U288 (A) and monophasic S. Typhimurium ST34 (B) isolated in England and Wales by the Animal and Plant Health Agency, 2006–2015. The S. Typhimurium U288 and ST34 isolates form distinct phylogroups To investigate the phylogenetic relationship of S. Typhimurium U288 isolates, we first constructed a maximum likelihood tree using variation in the recombination-purged core-genome sequence of 1826 S. Typhimurium isolates from human clinical infections in England and Wales between April 2014 and December 2015 for which both whole genome sequence and phage-type data were available. Of all isolates, 24 (1.3%) were U288 and of these, 20 were present in a distinct clonal group composed of 33 isolates in total (henceforth referred to as the U288 clade, Supplementary Fig. 1). Four U288 isolates were in a distinct outlier clade. The remaining 13 isolates within the predominantly U288 clade were reported as phage types DT193 (5 isolates), U311 (3 isolates), U302 (1 isolate), or 'reacted but did not conform' (RDNC, 4 isolates), and may be mis-typed or naturally occurring phage-type variants. The main U288 clade was closely related to 13 human clinical isolates of various phage types, predominantly U311; none were U288. To investigate the relationship of contemporaneous S. Typhimurium U288 in the UK pig population and human clinical isolates, we determined the whole genome sequence of 79 S. Typhimurium U288 strains isolated from animals in the UK in the years 2014 and 2015 as part of APHA surveillance. To place these in the phylogenetic context of S.
Typhimurium, we included 128 isolates from the UK that represented diverse phage types, 12 including 12 isolates from the current monophasic S. Typhimurium ST34 epidemic. 27 We also included the 36 human clinical strains from the main U288 clade isolated from 2014 and 2015, 3 U288 isolates outside of the main clade, 15 closely related but non-U288-clade isolates, and a U288 isolate (CP0003836) reported previously from Denmark in 2016. 32 The phylogenetic structure of S. Typhimurium was consistent with that described previously, 12 with a number of deeply rooted lineages, some of which exhibited evidence of clonal expansion at terminal branches (Fig. 2). All S. Typhimurium U288 isolates from pigs were present in a single phylogenetic clade together with the 33 isolates from human clinical infections (U288 clade, green lineages, Fig. 2). The U288 clade was closely related to sixteen S. Typhimurium isolates of various other phage types, none of which were phage type U288. Most of these were isolated from human clinical infections, and two from avian hosts (Fig. 2). Of note, S. Typhimurium strain ATCC700720 (LT2) differed by fewer than 5 SNPs from the common ancestor of the U288 clade and the 13 related non-U288 strains. S. Typhimurium strain ATCC700720 (LT2) was originally isolated from a human clinical infection at Stoke Mandeville hospital, London, in 1948, and has subsequently been used for studying the genetics of Salmonella worldwide. 33 The three U288 isolates from human clinical infections in the minor U288 clade clustered together with isolates of other non-U288 phage types. Fig. 2: Phylogenetic relationship of S. Typhimurium U288 and S. 4,[5],12:i:- epidemic clades in the context of S. Typhimurium diversity. Mid-point rooted maximum likelihood phylogenetic tree constructed using 7189 SNPs from the recombination-purged core genome sequence of 262 S. Typhimurium isolates: 79 U288 strains isolated from animals in the UK in the years 2014 and 2015, 36 human clinical strains from the main U288 clade isolated from 2014 and 2015, a U288 isolate (CP0003836) reported previously from Denmark in 2016, 32 131 isolates from the UK that represented diverse phage types, 12 and 15 closely related but non-U288-clade isolates. The source of each isolate (outer circle) and its phage type (inner circle) are specified by the fill colour as indicated in the key. S. Typhimurium U288 (green lineages), isolates closely related to U288 (red lineages) and monophasic S. Typhimurium ST34 (blue lineages) are indicated. We next estimated the relative contribution of the S. Typhimurium U288 clade isolates and monophasic S. Typhimurium ST34 clade isolates to human clinical infections in the UK between April 2014 and December 2015. Of 1826 S. Typhimurium isolated in this period, 33 (1.8%) were from the U288 clade. In contrast, 894 isolates (49%) were from the monophasic S. Typhimurium ST34 clade. To estimate the global distribution of U288 clade strains isolated outside of the UK, we examined 34,487 S. Typhimurium whole genome sequences in the Enterobase database. We identified a hierarchical cluster (HC20-201), composed of 455 genomes of which 345 reported the country of origin, that contained all strains reported as U288. This cluster was consistent with the main U288 phylogenetic cluster 34 (Supplementary Fig. 2). HC20-201 also contained 254 genomes from strains isolated in the UK and 91 from France, Denmark, Italy, Germany, Ireland, Austria, or the US.
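As a concrete illustration of these estimates, the following R sketch (R being the environment used elsewhere in the study) recomputes the clade proportions from the counts in the text, adding exact binomial 95% confidence intervals that are our own addition and are not reported by the authors.

# Share of human S. Typhimurium isolates (Apr 2014 - Dec 2015) per clade.
n_total <- 1826
for (clade in list(list("U288 clade", 33), list("ST34 clade", 894))) {
  k  <- clade[[2]]
  ci <- binom.test(k, n_total)$conf.int     # exact binomial 95% CI
  cat(sprintf("%s: %.1f%% (95%% CI %.1f-%.1f%%)\n",
              clade[[1]], 100 * k / n_total, 100 * ci[1], 100 * ci[2]))
}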
Significantly, the proportion of all S. Typhimurium in Enterobase that were HC20-201 in the European countries ranged from 0.15% to 7.6%, similar to the 2.1% observed for UK S. Typhimurium isolates in Enterobase. Distinct prophage repertoire, plasmid content, and genome degradation of S. Typhimurium U288 strain S01960-05 and monophasic S. Typhimurium ST34 strain S04698-09 To compare the whole genome sequences of U288 strain S01960-05 12 and ST34 strain S04698-09, 14 the closed genomes were aligned. The genomes exhibited overall synteny that was interrupted by seven insertions or deletions (indels) greater than 1 kb, due to distinct prophage occupancy, recombination within prophage shared by one or other genome, the presence of a resistance region encoding multidrug resistance, and an integrative conjugative element (SGI-4) in strain S04698-09 (Fig. 3). Both genomes had Gifsy1, Gifsy2, ST64B, a similar partial sequence of Fels-1, and remnant prophages SJ46 and BCepMu. The thrW locus was variably occupied, either by the mTmV prophage in monophasic S. Typhimurium ST34 strain S04698-09, which carries the sopE gene, 14 or by ST104 in U288 strain S01960-05, a prophage previously described in S. Typhimurium DT104 strain NCTC13384. 13 U288 strain S01960-05 had a complete Fels-2 prophage, which was absent from S04698-09. The S04698-09 genome also harboured two additional prophages related to HP1 and SJ46 that were absent from U288 strain S01960-05. Fig. 3: Alignment of S. Typhimurium U288 strain S01960-05 and ST34 strain S04698-09 chromosome sequence. Genomes are indicated by horizontal black lines, including pU288-1, pU288-2 and pU288-3. Scale bar indicates Mbp of nucleotide sequence. Shaded segments connecting the genomes indicate colinear (red) or reverse-and-complement (blue) regions with nucleotide sequence identity >90%. Notable features are highlighted by coloured boxes and labels for prophage (blue), the SGI-4 integrative conjugative element (green), the fljB locus (grey) and a resistance region (red). S. Typhimurium U288 strain S01960-05 contained three plasmids reported previously in another U288 strain from Denmark, pU288-1, pU288-2 and pU288-3. 32, 35 In contrast, no plasmids were present in monophasic S. Typhimurium ST34 strain S04698-09. pU288-1 is similar to the virulence plasmid pSLT, an IncF plasmid present in many S. Typhimurium strains. 35 Additional sequence present in pU288-1 but absent from pSLT included an integron with the AMR genes dfrA12, aadA2, cmlA, aadA1 and sul3, and the bla TEM gene associated with an IS26-like element. The IncQ1 plasmid pU288-2 encoded the additional AMR genes sul2, strA, strB, tetA(A) and cat. A notable difference in coding capacity affecting the core genome of the S01960-05 and S04698-09 strains resulted from hypothetically disrupted coding sequences (HDCS), due to the introduction of a premature nonsense codon either by small insertions or deletions (indels) resulting in a frameshift or by a single nucleotide polymorphism (SNP) giving rise to a new nonsense codon. The monophasic S. Typhimurium ST34 strain S04698-09 genome contained three HDCS outside of prophage, with reference to S. Typhimurium SL1344. In contrast, S. Typhimurium U288 strain S01960-05 contained 19 HDCS outside of prophage (Table 1).
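As an illustration of how such disruptions can be flagged computationally, the sketch below translates each annotated CDS and tests for a stop codon before the final position, using R's Bioconductor Biostrings package; the input file name is hypothetical and this is not the authors' own pipeline.

# Hypothetical sketch: flag candidate HDCS by detecting premature stop codons.
library(Biostrings)
cds <- readDNAStringSet("U288_cds.fasta")        # one entry per annotated CDS
aa  <- translate(cds, if.fuzzy.codon = "X")      # translate to protein sequences
is_hdcs <- vapply(as.character(aa), function(p) {
  stops <- gregexpr("*", p, fixed = TRUE)[[1]]   # positions of stop codons
  any(stops > 0 & stops < nchar(p))              # a stop before the final codon
}, logical(1))
names(cds)[is_hdcs]                              # candidate disrupted genes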
Of the U288 HDCS, several encoded hypothetical proteins of unknown function (ygbE, yciW, SL2283, yfbK, SL2330, yhbE, SL0337, ybaO, SL1627, yfbB and yqaA), while eleven had predicted functions based on sequence similarity with proteins of known function (assT5, assT3, dtpB, hutU, cutF, oadA, pncA, sadA, tsr, oatA and rocR). Table 1: Hypothetically disrupted coding sequences (HDCS) in U288 strain S01960-05 with reference to strain S04698-09. The pangenome of U288 and ST34 Analysis of the pangenome revealed similar-sized core and accessory genomes, with 3962 and 4056 core gene families, out of pangenomes of 5501 and 5578 gene families, for U288 and ST34, respectively (Supplementary Fig. 3). The key differences between the genome sequences of the reference strains S01960-05 and S04698-09, the presence of SGI-4 in ST34 and of pU288-1 (pSLT-related) in U288, were general lineage-specific characteristics, as they were present in most strains in each case; lineage-specific differences in the accessory genome were otherwise due to distinct prophage repertoires. The prophage repertoire was somewhat variable within the ST34 lineage and relatively invariant in U288, while plasmid sequence, including pU288-1, was highly variable in U288, with only occasional acquisition of small plasmids in ST34. Notably, U288 lacked lineage-specific genes on the chromosome, with the exception of prophage. The S. Typhimurium U288 clade evolved from an S. Typhimurium LT2-like common ancestor by genome degradation and acquisition of AMR genes We next investigated the genotypic variation of AMR genes, plasmid replicons and plasmid sequence within the U288 clade to infer the evolutionary events associated with its emergence. To this end, we identified genes by mapping and local assembly of short-read sequence data of 148 U288 clade and related S. Typhimurium isolates against databases of plasmid replicons, AMR genes and allelic variants associated with HDCS. The IncF replicon was present in all isolates from the U288 clade and closely related isolates, consistent with the presence of all or part of the pSLT plasmid-associated sequence (Fig. 4). Deletions of large parts of the pSLT-associated sequence were evident in the U288 clade isolates (Supplementary Fig. 3C). Furthermore, in 58 of 133 U288 clade isolates, deletions affected two or more of the spv genes, previously implicated in virulence in the mouse model of infection (Supplementary Fig. 3C). 36 Fig. 4: Phylogenetic relationship and genotypic variation in selected genes of S. Typhimurium U288 and monophasic S. Typhimurium ST34 strain S04698-09. Mid-point rooted maximum likelihood tree based on 1859 SNPs in the recombination-purged variation in the core genome with reference to S. Typhimurium strain SL1344. Lineages associated with U288 (green), closely related strains (black) and ST34 S04698-09 (purple) are indicated. Scale bar indicates the estimated number of SNPs based on the genetic distance as a fraction of total SNPs. This collection comprised 134 S. Typhimurium U288 isolates from animals and human infections in the UK, placed in the context of 16 S. Typhimurium isolates from animals and humans that were closely related but outside of the main U288 clade. Monophasic S. Typhimurium strain S04698-09 is included as an outgroup.
The presence of allelic variants associated with HDCS, plasmid replicons and resistance genes is indicated by bars colour-coded as indicated in the key (inset). The pattern of AMR gene presence was consistent with acquisition in two distinct evolutionary events. First, an IncQ1 plasmid (pU288-2) was acquired concurrent with the initial clonal expansion of the clade, followed by subsequent acquisition of a transposon on the pSLT-like plasmid pU288-1 (Fig. 4). The IncQ1 replicon of the pU288-2 plasmid was present in 97 of 133 U288 clade isolates, including a cluster of six of the most deeply rooted isolates in the U288 clade, and was associated with the strA, strB, sul2 and tetA(A) genes, encoding resistance to streptomycin, sulphonamide and tetracycline antibiotics. The AMR genes cmlA, sul3, dfrA12, aadA2 and bla TEM, which confer resistance to chloramphenicol, sulphonamide, trimethoprim, aminoglycoside and β-lactam antibiotics, respectively, were present on pU288-1 in all but 16 U288 clade isolates. These AMR genes were absent from six of the most deeply rooted isolates in the S. Typhimurium U288 clade. Investigation of the distribution of the 19 HDCS identified in S. Typhimurium U288 strain S01960-05, and within diverse S. Typhimurium strains in the context of their phylogenetic relationship, indicated their sequential acquisition during the evolution of the U288 clade (Fig. 4). Of the 19 HDCS, six (SL0337, assT3, yhbE, cutF, yciW and oadA) were also present in the genome sequence of closely related isolates including LT2, and four (assT5, SL0987, dtpB and hutU) were present in isolates from two relatively distantly related clades. Two additional HDCS in S. Typhimurium U288 strain S01960-05 were either sporadically present as HDCS throughout the S. Typhimurium collection (oatA and rocR) or only present as HDCS in strain S01960-05 and 14 closely related isolates. Six HDCS (pncA, yqaA, ybaO, sadA, ygbE and tsr) were present in most U288 clade isolates, although a wild-type allele of one of these (ygbE) was present in a subclade containing 35 isolates and in two other U288 isolates, suggesting that these may have subsequently reverted. AMR gene acquisition and genome degradation preceded the U288 epidemic clonal expansion In order to investigate the temporal relationship between the emergence of the U288 epidemic in UK pigs around the year 2000 and the acquisition of AMR genes and genome degradation, we investigated the accumulation of SNPs on ancestral lineages and constructed a time-scaled phylogenetic tree from variation in the core genome of U288 clade and genetically closely related isolates. To enhance the accuracy of the molecular clock rate determination, we supplemented the 150 U288 and closely related strains (green and red lineages, Fig. 2) spanning the years 1948 to 2015 with whole genome sequences of 84 additional U288 strains isolated between 2006 and 2017. A maximum likelihood phylogenetic tree rooted with the S. Typhimurium strain SL1344 outgroup was constructed from recombination-purged SNPs in the core genome. Root-to-tip accumulation of SNPs exhibited a molecular clock signal with a statistically significant fit to a linear regression model (R2 = 0.43, p < 0.0001) (Fig. 5A). A time-dated tree was estimated in a Bayesian inference framework in order to determine the dates of all nodes of the tree (Fig. 5B).
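The root-to-tip clock check can be reproduced in a few lines of R with the ape package; the tree file name and the convention that sampling years are encoded at the end of tip labels are assumptions made for illustration.

# Root-to-tip regression as a check for a molecular clock signal.
library(ape)
tr  <- read.tree("u288_core_ml.tree")          # rooted ML tree, branch lengths in SNPs
rtt <- node.depth.edgelength(tr)[seq_along(tr$tip.label)]  # root-to-tip distances
yr  <- as.numeric(sub(".*_", "", tr$tip.label))            # e.g. "strain_2005" -> 2005
fit <- lm(rtt ~ yr)
summary(fit)$r.squared    # temporal signal; the paper reports R2 = 0.43
coef(fit)[["yr"]]         # clock rate in SNPs per year (about 2 here)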
This analysis predicted the most recent common ancestor (MRCA) of all isolates at approximately the year 1937 (range 1915–1957), eleven years prior to the isolation of strain LT2 in London, and the MRCA of the U288 epidemic clade in 1988 (range 1982–1994). The disruption of the pncA gene is likely to have occurred between 1968 and 1979, and disruption of ybaO, sadA, yqaA and ygbE, along with acquisition of pU288-2, between 1980 and 1990. Disruption of the tsr gene and the acquisition of cmlA, sul3, dfrA12, aadA2 and bla TEM on mobile genetic elements on plasmid pU288-1 likely occurred between 1993 and 1995. Disruption of the rocR gene, which affected only a subclade of U288 containing the reference strain S01960-05, likely occurred between 2000 and 2003. Fig. 5: Time-scaled phylogenetic analysis of the emergence of the U288 epidemic clade. Analysis of 234 U288 clade and closely related strains isolated between 1947 and 2015, calculated using maximum likelihood estimation based on recombination-purged variation in the core genome sequence. (A) Linear regression of root-to-tip SNPs with a slope of 2.05 SNPs per year, and (B) estimated time-scaled phylogeny, manually rooted using monophasic S. Typhimurium ST34 strain S04698-09 as an outgroup that was subsequently removed before further analysis. Blue bars at nodes indicate the 95% CI of dated nodes, green lineages indicate the U288 epidemic clade, and black lineages indicate lineages closely related to the U288 clade. Major evolutionary events (arrows), namely the acquisition of plasmids and AMR genes and the accumulation of hypothetically disrupted coding sequences (HDCS) resulting in possible pseudogene formation, are indicated in boxes. S. Typhimurium U288 isolates have a longer doubling time and exhibit greater sensitivity to desiccation compared to ST34 We next compared the replication rate, motility, biofilm formation and desiccation survival of U288 and ST34 isolates, since these characteristics may be important for survival in the food chain. Strains from the U288 clade exhibited longer aerobic and anaerobic doubling times and increased sensitivity to desiccation, but similar motility and capacity to form biofilm, compared to monophasic S. Typhimurium ST34 isolates. The mean doubling time for three U288 isolates was 0.60 h and 0.54 h in aerobic and anaerobic environments, respectively, compared to 0.52 h and 0.47 h for three ST34 isolates (Fig. 6A). Fig. 6: In vitro replication, carbon metabolism, sensitivity to desiccation and biofilm formation of S. Typhimurium U288 and ST34 isolates. (A) Circles indicate the doubling times of three S. Typhimurium U288 strains (S01960-05, S07292-07 and H09152-0230, green) and three ST34 strains (S04698-09, S00065-06 and S01569-10, blue), with the mean (horizontal bar) ± standard error. (B) Metabolism measured using the BIOLOG phenotyping microarray platform in the presence of 95 carbon sources. The mean absorbance of at least two technical replicates was used to determine the area under the curve for each metabolite; values are presented as a heat map with carbon sources in columns and eight S. Typhimurium strains, U288 (green bars), ST34 (blue bars) and strain 4/74 (orange bar), in rows. Unsupervised clustering of the metabolic activity of each strain and of each metabolite among test strains is indicated (left and above, respectively). (C) Proportion of CFU surviving desiccation for 24 h with reference to the initial inoculum.
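A doubling time such as those above can be obtained by regressing log2 viable counts against time during exponential growth; the R sketch below does this with invented CFU values.

# Doubling time from viable counts in exponential phase (illustrative numbers).
time_h <- c(1, 3, 5, 7)             # sampling times, hours post-inoculation
cfu    <- c(4e5, 5e6, 6e7, 7e8)     # viable counts from serial dilution plating
fit    <- lm(log2(cfu) ~ time_h)    # slope is generations per hour
1 / coef(fit)[["time_h"]]           # doubling time in hours (~0.56 here)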
The mean (horizontal line), interquartile range (box), and range (vertical lines) are indicated. * indicates mean survival significantly different from S04698-09, and # significantly different between the strains indicated by brackets, assessed using a Mann-Whitney U test of significance (p < 0.05). (D) Biofilm formation estimated from the measurement of biomass by crystal violet staining of nine U288 and three ST34 strains, with strain SL1344 and SL1344 ΔcsgD::aph, which lacks a component of the curli fimbriae involved in biofilm formation, as positive and negative controls, respectively. Crystal violet retention was measured by absorbance at 340 nm, with the mean (horizontal line), interquartile range (box), and range (vertical lines) indicated. * indicates mean optical density significantly different from S04698-09, and # mean optical density significantly different between the strains indicated by brackets, assessed using a Mann-Whitney U test of significance (p < 0.05). (E, F) Gentamicin protection assay to estimate the invasion of T84 (E) and IPEC-J2 (F) epithelial cells by ten U288 and six ST34 strains, with strain SL1344 and SL1344 ΔinvA::aph, which lacks a key component of the invasion-associated type III secretion system 1 of Salmonella, as positive and negative controls, respectively. The mean (horizontal line), interquartile range (box), range (vertical lines) and individual data points (circles) are indicated. * indicates mean invasion significantly different from S04698-09 in a Mann-Whitney U test of significance (p < 0.05). The mean and standard error of representative data from two independent biological replicates are shown. Since strains of each clade exhibited distinct replication rates, we compared respiration for three isolates of ST34 and four isolates of U288 utilising a range of substrates as the sole carbon source for metabolism, using the BIOLOG phenotyping microarray. All of the strains were able to use the majority of the 95 carbon sources tested, but there was variation for approximately a quarter of the substrates (Fig. 6B). The pattern of carbon source utilisation of U288 and ST34 isolates was distinct from that of the commonly used lab strain S. Typhimurium 4/74 (histidine prototroph variant of strain SL1344). The inability or diminished ability of strain 4/74 to utilise m-tartaric acid, tricarballylic acid and D-xylose was a major factor distinguishing this strain from U288 and ST34 strains. The three ST34 strains clustered together, but the U288 isolates exhibited considerably greater diversity in carbon source utilisation. Utilisation of myo-inositol as a sole carbon source was the most pronounced phenotype distinguishing the two clusters of U288 isolates. Of note, strains S05968-02 and 11020-1996, which were able to use myo-inositol, were isolated earlier and were more deeply rooted than strains S01960-09 and S07292-07, which were unable to use this carbon source. We observed clade-specific variation in tolerance to desiccation by comparing ten U288 strains and three ST34 strains. Following desiccation for 24 hours, approximately 2% of the initial inoculum remained viable for all three ST34 strains (Fig. 6C). In comparison, the mean viability of U288 strains was 0.1%, varying between 0.0001% and 0.3% among the ten U288 strains tested. The loss of the ability to form biofilm is a common feature of some host-adapted variants of Salmonella enterica 37, 38 (Fig. 6D). S.
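The area-under-the-curve summary behind panel B can be sketched in R as follows; the absorbance curves here are simulated placeholders, whereas the study used the opm package on real OmniLog readings.

# AUC summary of respiration curves, then a simple heatmap (placeholder data).
trapz <- function(t, y) sum(diff(t) * (head(y, -1) + tail(y, -1)) / 2)
set.seed(1)
t_min <- seq(0, 48 * 60, by = 15)                    # a reading every 15 min for 48 h
auc <- matrix(NA, 2, 3, dimnames = list(c("U288", "ST34"),
              c("D-xylose", "myo-inositol", "glucose")))
for (i in 1:2) for (j in 1:3) {
  y <- cumsum(runif(length(t_min), 0, 0.01))         # simulated absorbance curve
  auc[i, j] <- trapz(t_min, y)
}
heatmap(auc, Rowv = NA, Colv = NA, scale = "column") # strains x carbon sources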
Typhimurium strain SL1344 formed moderate biofilm that was dependent on expression of the csgD gene, as previously described. 37 The mean biofilm formation for ten U288 strains was not significantly different from that of three ST34 strains. However, considerable variation was observed, especially for the U288 strains, and two strains of U288 differed significantly in biofilm formation from ST34 strain S04698-09, with U288 strain S01960-05 producing significantly less biomass and strain 10584-1997 significantly more. S. Typhimurium U288 and monophasic S. Typhimurium ST34 isolates exhibit distinct interactions with the host We initially evaluated the interaction of representative strains of ST34 and U288 with tissue culture cells. No difference in the ability of U288 or ST34 isolates to invade human (T84) or porcine (IPEC-J2) epithelial cells in culture was observed (Fig. 6E, F), although several strains of U288 and ST34 had a small but significantly lower invasion compared to strain SL1344 in T84 cells, and two U288 isolates (S01960-09 and H091520254) exhibited decreased invasion of IPEC-J2 cells. In an initial experiment using the streptomycin pre-treated C57BL/6 mouse model of colitis, we found that mice infected with randomly selected strains of the same phylogroup (ST34 or U288) exhibited similar colonisation levels (Supplementary Fig. 4A), induction of Cxcl1 (KC) and Nos2 (iNOS) (Supplementary Fig. 4B) and intestinal pathology (Supplementary Fig. 4C). However, in each case, U288 strains (i) colonised the mouse caecum to a greater level, (ii) induced higher levels of Cxcl1 and Nos2 transcripts, and (iii) triggered a more severe pathology, compared to infection with ST34 strains. To compare the ability of six strains to colonise pigs, the reference strains of U288 (S01960-05) and ST34 (S04698-09), and two additional randomly selected strains each of U288 (S07292-07 and 11020-1996) and ST34 (S00065-06 and S01569-10), were modified by insertion of a unique sequence tag in the chromosome to facilitate identification by sequencing, and four pigs were inoculated orally with an equal mixture of all six isolates. Colonisation was investigated after 48 hours by sequencing cultured homogenates of faeces and tissue and enumerating the sequence reads for the strain-specific tags. U288 and ST34 exhibited distinct patterns of colonisation (Fig. 7A). ST34 isolates S04698-09 and S01569-10 were more abundant in three of four faeces samples 24 and 48 h post-inoculation, compared to U288 strains S07292-07 and 11020-1996. Isolates of neither variant were consistently dominant in the distal ileum. In contrast, the U288 isolates were generally more abundant in the mesenteric lymph nodes and the tissue of the spiral colon. Fig. 7: Colonisation and clinical signs of disease following oral inoculation of pigs with S. Typhimurium U288 and ST34. (A) Four pigs were challenged orally with an inoculum containing six wild-type independently tagged strains (WITS) of S. Typhimurium (three ST34 (DT193) and three U288) in approximately equal proportions. Whole genome sequencing of the population of WITS in the inoculum and recovered from infected pigs was used to enumerate each strain based on its unique sequence tag. The relative abundance of each strain is denoted by bars, and the number of strains in each sample (richness) is indicated by circles, based on the colour code indicated.
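Read counting for the tagged strains reduces to simple proportions; a minimal R sketch with invented read counts, where richness is the number of tags detected in a sample:

# Relative abundance and richness of the six tagged strains in one sample.
tag_reads <- c(S04698_09 = 5200, S00065_06 = 150, S01569_10 = 4100,
               S01960_05 = 30, S07292_07 = 0, strain_11020_1996 = 20)
rel_abund <- tag_reads / sum(tag_reads)   # proportion of reads per strain tag
richness  <- sum(tag_reads > 0)           # number of strains detected
round(rel_abund, 3); richness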
Separately, four pigs were challenged orally with the S04698-09 (ST34) or 11020-1996 (U288) strain to investigate the signs of disease and colonisation. (B–D) Groups of four pigs were inoculated with approximately 1 × 10¹⁰ colony-forming units of either ST34 S04698-09 (blue bars or circles) or U288 11020-1996 (green bars or circles). (B) Rectal temperatures of the pigs were monitored over 72 h of infection, and (C) clinical scores derived from physiological signs and faecal consistency are shown as box plots indicating the median and interquartile range. (D) The mean and standard error of viable counts of bacteria shed in the faeces at 24, 48 and 72 h post-inoculation and in tissues collected 72 h post-inoculation. To further investigate the colonisation of pigs after oral inoculation, groups of four pigs were inoculated with either U288 strain 11020-1996 or ST34 strain S04698-09 in single-infection experiments (Fig. 7B–D). Rectal temperatures and clinical scores were recorded for all pigs throughout the infection, and at 72 h post-inoculation the pigs were sacrificed and colonisation of the faeces and tissues was determined. Pigs inoculated with the U288 strain had significantly lower body temperatures at early time points post-inoculation (Fig. 7B), and this was reflected in significantly lower clinical scores than in those inoculated with the ST34 strain (Fig. 7C). Colonisation by both the U288 and ST34 isolates was consistent with the mixed-inoculum experimental infections. The mean Salmonella CFU present in faeces was also lower at all time points in pigs inoculated with U288, compared with ST34 (Fig. 7D). At 48 h post-inoculation, approximately 100-fold more ST34 than U288 were present in the faeces. Similar colonisation by each isolate was observed in the distal ileum and spiral colon, but U288 was present in higher numbers in the mesenteric and colonic lymph nodes (Fig. 7D), consistent with the mixed-inoculum experiments. Discussion Estimation of the relative risk of human infection based on phage typing is potentially misleading, due to polyphyletic clusters of common phage types. Our phylogenetic analysis indicated that the majority of U288 isolates are from a single clonal group, but that this clade also contained a number of isolates that were not identifiable by phage typing as U288; the contribution to human infection may therefore be underestimated. Conversely, a proportion of the S. Typhimurium U288 isolates from human clinical infections were only distantly related to the clonal group of S. Typhimurium U288 associated with pigs in the UK, and therefore contribute to an overestimation of the contribution of the pig-associated U288 genotype to human infection. We therefore estimated the relative contribution of the U288 clade variant and the monophasic S. Typhimurium ST34 variant to human clinical infections in the UK between April 2014 and December 2015 directly from the phylogeny: of 1826 S. Typhimurium isolated in this period, 33 (1.8%) were from the U288 clade, whereas 894 isolates (49%) were from the monophasic S. Typhimurium ST34 clade. Whole genome sequence or sequence polymorphisms specific to the U288 clade have the potential to improve surveillance to identify the U288 pathovariant in pig herds. The majority of S.
Typhimurium U288 isolates from livestock and human infections were present in a phylogenetic clade that evolved from a common ancestor closely related to strain LT2. LT2 was isolated at Stoke Mandeville hospital in 1948 and has been used in a large number of studies on the genetics and biochemistry of S. Typhimurium. 39 Approximately 50 years elapsed between the isolation of LT2 and the first detection of U288 by epidemiological surveillance in the UK pig population, around the year 2003. 8 The zoonotic source of the infection caused by the LT2 strain, and that of the common ancestor with U288, is not known. 39 Evolution of successive descendants of the LT2-like hypothetical ancestor gave rise to multiple lineages, including one that gave rise to the U288 epidemic clade during the 1980s. Evolution of U288 was characterised by the stepwise acquisition of genes involved in resistance to multiple antibiotics and the accumulation of genome sequence polymorphisms, some of which resulted in the interruption of coding sequences. This evolution culminated in the acquisition of AMR genes on pU288-1 and disruption of the tsr gene around 1995, the date of the common ancestor of the majority of the isolates in the U288 clade. The U288 clade may therefore have been evolving in the pig population before its first detection and rapid spread around the year 2003. The slow emergence of the U288 clade may have been associated with a gradual adaptation to a unique niche in the pig population, but it may also reflect a lag from emergence to detection by surveillance, as was proposed for other epidemic serotypes such as S. Enteritidis in poultry layer flocks around 1980, after the eradication of S. Gallinarum more than a decade previously. 40 The acquisition of resistance to antimicrobials has been a key factor in the emergence of bacterial pathogens over the past 50 years, and is often the evolutionary event immediately preceding the spread and clonal expansion of a new clone. 2, 41 A U288 isolate was previously reported to encode antimicrobial resistance genes on a pSLT-like plasmid pU288-1 (dfrA12, aadA2, cmlA, aadA1, sul3 and bla TEM) and on an IncQ plasmid pU288-2 (sul2, strA, strB, tetA(A) and cat). 35 The pU288-2 plasmid was likely acquired first, as it is present in isolates from throughout the U288 clade. Insertions carrying AMR genes in pU288-1 may have occurred later than the acquisition of pU288-2, since a group of seven U288 isolates that form a more deeply rooted, basal clade lacked the pU288-1-associated AMR genes. The majority of the U288 isolates in our analysis were direct descendants of the hypothetical ancestor that acquired AMR genes on pU288-1, and few descended from the basal clade that lacked these genes, suggesting that pU288-1 evolution was an important event in the success of the U288 epidemic clade. However, despite a number of examples of apparent loss of AMR genes from pU288-1 or loss of the pU288-2 plasmid (both the AMR genes and the IncQ replicon), just three isolates had lost both concurrently, highlighting the importance of multidrug resistance. Sequence polymorphisms in S. Typhimurium U288 strain S01960-05 resulted in disrupted coding sequences affecting 26 genes, with reference to strain SL1344, a commonly used lab strain. Sixteen of these polymorphisms were predicted to have occurred before the hypothetical LT2-like ancestor of the U288 and related clades (red and green lineages in Fig. 2).
Five coding sequences (SL0337, assT3, yhbE, cutF and yciW) were disrupted in all descendants of the hypothetical LT2-like ancestor, and one other (oadA) in all but one deeply rooted lineage. However, eight genes (pncA, yqaA, ybaO, sadA, ygbE, tsr, oatA and rocR) were disrupted either in the hypothetical ancestor of all U288 clade isolates or subsequently in a subset of descendant lineages, in a stepwise manner similar to that reported previously for the S. Typhimurium ST313 pathovariant. 42, 43 Ancestral state reconstruction indicated that the first gene to be disrupted specifically in the U288 lineage was pncA, an event that coincided with the acquisition of pU288-2, encoding AMR genes. Disruption of pncA is therefore characteristic of the U288 clade. PncA is a nicotinamidase, a component of one of the pyridine nucleotide cycle (PNC) pathways involved in recycling nicotinamide adenine dinucleotide (NAD). The PncA-dependent PNC pathway is probably most active in scavenging pyridine compounds present in the environment. 44 NAD is central to metabolism in all living systems, participating in over 300 enzymatic oxidation-reduction reactions; 44 under conditions where de novo synthesis of NAD is limited by the availability of tryptophan or aspartate, the inability to use exogenous pyridines may therefore limit metabolism. Perhaps significantly, the pncA gene is also disrupted in S. Choleraesuis, a serotype that is highly host-adapted to pigs and replicates more slowly than S. Typhimurium in the pig intestinal mucosa. 45 Chronologically, the next events were the disruption of the yqaA, ybaO, sadA and ygbE genes. SadA is a surface-localised adhesin that contributes to cell-cell interactions and therefore multicellular behaviour, 46 while the functions of yqaA, ybaO and ygbE are unknown. Disruption of tsr, which encodes a methyl-accepting chemotaxis protein involved in energy taxis and colonisation of Peyer's patches in the murine model of infection, was acquired by a common ancestor of the majority of the U288 clade. This polymorphism occurred on an internal branch of the tree that coincided with the acquisition of AMR genes inserted on pU288-1, marking a clone that spread successfully through the pig population. Insight into the role of genes present as HDCS in U288 comes from a functional genome screen of genes required for colonisation of the pig colon using a transposon insertion library. 47 Transposon insertions in yqaA, ybaO and ygbE had no effect on colon colonisation, and insertions in pncA and tsr were not present in the transposon insertion library, so their roles are not known. The potential role of sadA is unclear from that study, since just one of eight insertion mutants was recovered in significantly lower numbers from the colon of pigs, while the remaining sadA mutants were recovered in proportions similar to those present in the inoculum. However, several additional genes disrupted in U288 but intact in ST34 were implicated in colonisation of the pig colon in the study. In particular, several insertion mutants in assT3 and assT5 were recovered in significantly lower numbers, indicated by fitness scores of −2 and −5.76, and these genes may therefore contribute to the reduced intestinal colonisation of U288 isolates. U288 and ST34 exhibited important differences in the way they interacted with the non-host environment that could affect the likelihood of survival in food and transmission to consumers.
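Fitness scores of this kind are typically log2 ratios of a mutant's frequency in the output pool (here, the pig colon) against its frequency in the input inoculum; the following hedged R sketch illustrates the calculation with invented read counts and is not the cited study's exact pipeline.

# Log2 fold-change fitness scores for transposon mutants (illustrative counts).
input  <- c(assT3_mut = 800, sadA_mut = 600, neutral_mut = 700)   # inoculum reads
output <- c(assT3_mut = 180, sadA_mut = 650, neutral_mut = 690)   # colon reads
fitness <- log2((output / sum(output)) / (input / sum(input)))
round(fitness, 2)   # negative values indicate attenuated colonisation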
First, considerably more viable ST34 bacteria were recovered following desiccation for 24 h, compared to U288. Many foodborne disease outbreaks due to Salmonella have been traced back to low-moisture, ready-to-eat (RTE) foods, 48 including dried pork products. 49 Resistance to desiccation may be of particular significance because low water activity is associated with increased resistance to a number of secondary stressors such as pH, salt, alcohol and heat. 48 Monophasic S. Typhimurium ST34 also replicated at a significantly higher rate than U288 in culture, a characteristic that may result in a higher level of contamination in food. This is important because bacteria of the family Enterobacteriaceae are known to replicate in food stored at 7 °C, increasing up to 10,000-fold in sausage meat in two weeks. 50 As naturally contaminated meat samples typically contain around 10³ CFU/g 51 and the infective dose of Salmonella enterica in humans is in the range of 10⁵ to 10⁹ CFU, considerable replication in food may be necessary for transmission along the food chain to the final consumer. S. Typhimurium U288 and monophasic S. Typhimurium ST34 are associated with distinct risks to human health, despite circulating in the same pig population in the UK. 8 Our data suggest that this may be due to differences in the tissue tropism and levels of each variant in the pig host, which have the potential to affect the likelihood that Salmonella enters food during the slaughter and butchering process. The greatest risk of Salmonella entering pork products is contamination of the carcass at slaughter, due to errors in the evisceration process and inadequate cleaning of polishing machines. 52 The relative contribution of Salmonella present in the faeces and tissues to contamination of food is not known, but increased colonisation of Salmonella in the caecum, due to longer time in lairage, correlated with contamination of the carcass at slaughter. 53 If the level of contamination of different organs of the pig is important for transmission of U288 and ST34 through the food chain, our data are consistent with a greater role for faecal contamination of the final food product: mean viable counts of a U288 isolate were up to two orders of magnitude lower in the faeces, but 10-fold greater in mesenteric lymph nodes, compared to an ST34 isolate. The reason for the difference in colonisation of pigs is not known. However, ST34 lacks the pSLT virulence plasmid, encoding the spv locus that is required for invasive disease in mice and severe gastroenteritis in cattle, 54, 55, 56 and polymorphisms in some U288 strains also affect this locus. Acquisition of the IS26 element appears to have been accompanied by deletion of the spvR and spvA genes, likely also affecting expression of spvB and spvC, as they are under transcriptional control by SpvR. Further analysis of the impact of genotypic variation on pig colonisation by U288 and ST34 will reveal the key determinants. Taken together, our data contribute to a better understanding of the evolutionary history and phenotypes associated with the emergence of a new Salmonella pathovariant. In the case of S. Typhimurium U288, adaptation to pigs appears to have been accompanied by a decreased risk to food safety from the consumption of pork or cross-contaminated food products by the human population.
However, the consequences to the health and productivity of pigs of a more invasive disease are not known, but may be an important consideration for the pork production industry. Methods Bacterial strains and culture Salmonella Typhimurium U288 and ST34 isolates used in this study were isolated from human clinical infections during routine diagnostic testing by Public Health England (PHE), or from animals during routine surveillance or epidemiological investigation by the Animal and Plant Health Agency (APHA) (Supplementary Data 1). All sequences from samples taken from clinical infections were published previously 14, 57 and informed consent was not required as part of this study. All sequence data generated in this study are available in the SRA database under BioProject accession number PRJNA641292. In total, we analysed the whole genome sequences of 2085 S. Typhimurium strains (Supplementary Data 1) isolated during routine surveillance by APHA and PHE. These included 166 U288 pig strains isolated by APHA, composed of 89 from 2014–2015 and an additional 77 from 2005–2016 included to facilitate ancestral state reconstruction and calculation of the molecular clock rate. Additional strains isolated from animals by the APHA were included, along with a collection of well-characterised S. Typhimurium isolates described previously, 2, 14 to provide phylogenetic context. S. Typhimurium isolated from human clinical infections during PHE diagnostics and surveillance, described previously, were also investigated. 57 Bacterial isolates were stored at −80 °C in 25% glycerol and routinely cultured overnight in 5 mL LB broth at 37 °C with shaking at 200 rpm, or on solid medium consisting of Luria-Bertani (LB) broth or MacConkey medium containing 5% agar, supplemented with chloramphenicol (30 mg/l) or kanamycin (50 mg/l) as appropriate. Preparation of genomic DNA and sequencing of bacterial isolates Genomic DNA for the short-read sequencing reported in this study was extracted using the Wizard Genomic DNA Purification kit (Promega) from a culture inoculated from a single colony and incubated for 18 hours at 37 °C. Low Input Transposase Enabled (LITE) Illumina libraries were constructed using a modified protocol based on the Illumina Nextera kit (Illumina, California, USA). A total of 1 ng of DNA was combined with 0.9 µl of Nextera reaction buffer and 0.1 µl of Nextera enzyme in a reaction volume of 5 µl and incubated for 10 minutes at 55 °C. To this 5 µl mixture containing the DNA, we added 2.5 µl of 2 µM custom barcoded P5- and P7-compatible primers, 5 µl of 5x Kapa Robust 2G reaction buffer, 0.5 µl of 10 mM dNTPs, 0.1 µl of Kapa Robust 2G enzyme and 10.4 µl of water; the DNA was then amplified by incubating the sample at 72 °C for 3 minutes, followed by 14 PCR cycles of 95 °C for 1 minute, 65 °C for 20 seconds and 72 °C for 3 minutes. 20 µl of amplified DNA was added to 20 µl of Kapa beads and incubated at room temperature for 5 minutes to precipitate DNA molecules >200 bp onto the beads. The beads were then pelleted on a magnetic particle concentrator (MPC), the supernatant removed, and two 70% ethanol washes performed. Beads were left to dry for 5 minutes at room temperature before being re-suspended in 20 µl of 10 mM Tris-HCl, pH 8. This was then incubated at room temperature for 5 minutes to elute the DNA molecules. Beads were harvested with the MPC and the aqueous phase containing the size-selected DNA molecules was transferred to a new tube.
The size distribution of each purified library was determined on a PerkinElmer GX by diluting 3 µl of the size-selected library in 18 µl of 10 mM Tris-HCl, pH 8. Purified libraries were then pooled in equimolar amounts and subjected to size selection on a Sage Science 1.5% BluePippin cassette, recovering molecules between 400 and 600 bp. QC of the size-selected pool was performed by running 1 µl aliquots on a Life Technologies Qubit high sensitivity assay and an Agilent DNA High Sense BioAnalyser chip, and the concentration of viable library molecules was measured using qPCR. 10 pM library pools were loaded on a HiSeq4000 (Illumina, California, USA) based on an average of the Qubit and qPCR concentrations, using a mean molecule size of 425 bp. Phylogenetic reconstruction and time-scaled inference Paired-end raw sequence data for each isolate were mapped to the SL1344 reference genome (FQ312003) 58 or S01960-05 (PRJEB34597) 12 using SNIPPY (version 3.0). The size of the core genome was determined using snp-sites (version 2.3.3), 59 outputting monomorphic as well as variant sites, and only sites containing A, C, T or G. A multifasta alignment of variant sites was used to generate a maximum likelihood phylogenetic tree with RAxML using the GTRCAT model, implemented with an extended majority-rule consensus tree criterion. 60 The genome sequence of S. Heidelberg strain SL476 (NC_011083.1) was used as an outgroup in the analysis to identify the root and common ancestor of all S. Typhimurium strains. To identify S. Typhimurium genomes in the Enterobase database in the same hierarchical clustering level as the main U288 clade, 34 we determined the lowest genetic-difference cluster level (HC20-201) that contained all isolates reported as U288 (accessed December 2020). The relationship of core genome sequence types was visualised using GrapeTree implemented within Enterobase. 61 To infer the time of nodes on the phylogeny, we used the BactDating software package implemented in R, 62 with sequence variation in the core genome purged of recombination using Gubbins with five iterations. 63 The resulting sequence alignments were used to construct a maximum likelihood phylogenetic tree using RAxML, rooted on the S. Typhimurium SL1344 genome as the outgroup. The Markov chain Monte Carlo was run for 1 million iterations, and the convergence and mixing of chains (effective sample sizes of 113.3, 129.5 and 145.8 for μ, σ and α, respectively) were assessed using the R package coda. 64 Pangenome analysis and in silico genotyping The pangenome of 132 U288-clade, 90 ST34-clade and 114 additional S. Typhimurium isolates from a representative collection described previously 14 (Supplementary Data 1) was determined using Roary software. 12 The presence of antibiotic resistance, virulence and plasmid replicon genes in short-read data was determined by mapping short-read sequence data to the ResFinder, 65 VFDB 66 and PlasmidFinder 67 databases of candidate genes, with local assembly using ARIBA with a 90% minimum alignment identity. 68 This tool was also used to determine the presence of specific genes or gene allelic variants. The results of the ARIBA determination of the presence or absence of specific genes were confirmed using SRST2, 69 setting each alternative form of the gene as a potential allele. SRST2 was also used to verify the ARIBA findings for the VFDB data set, as the presence of orthologous genes in the genome was found to confound the interpretation of results.
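A minimal sketch of the dating step in R, assuming the BactDating package's documented interface (function and argument names should be checked against the installed version); the tree file name and the tip-label convention for sampling years are hypothetical.

# Time-scaled tree with BactDating (sketch; arguments are assumptions).
library(ape)
library(BactDating)
tr  <- read.tree("u288_gubbins_purged.tree")      # recombination-purged ML tree
yrs <- as.numeric(sub(".*_", "", tr$tip.label))   # sampling year per tip
res <- bactdate(tr, yrs, nbIts = 1e6)             # 1 million MCMC iterations
plot(res, "treeCI")                               # dated tree with node CIs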
Construction of wild-type isogenic tagged strains (WITS) and single knockouts A modified recombineering method based on the Lambda Red system was used to construct knockout mutations in S. Typhimurium SL1344 and WITS. 70 WITS were constructed to provide a kanamycin-resistance selectable marker ( aphI ) and a unique sequence tag inserted in the genome, to distinguish and quantify pooled populations of strains in mixed-inoculum experiments in pigs by sequencing. Briefly, primers were used to amplify the aphI gene from pKD4, and recombination into the genome was directed by the inclusion of 50 nucleotides of sequence flanking the insertion site (Supplementary Table 1 ). For the construction of WITS of S04698-09, S00065-06, S01569-10, S01960-05, S07292-07 and 11020-1996, insertion was directed to the intergenic region between iciA and yggE at orthologous position 3,247,245 in SL1344. 58 Determination of growth rate, biofilm formation and desiccation survival For determination of the growth rate of randomly selected U288 strains (S01960-09, S07292-07 and H09152-0230) and ST34 strains (S04698-09, S00065-06 and S01569-10), bacterial cultures in LB broth were diluted to approximately 1 × 10 5 CFU per ml and incubated at 37 °C in aerobic or anaerobic (85% N 2 , 10% H 2 and 5% CO 2 ) environments, and viable bacteria in colony-forming units (CFU) were enumerated by serial dilution and culture on LB agar at 1, 3, 5 and 7 h post-inoculation. Doubling time was calculated in the exponential range of growth using the mean from three biological replicates. Determination of survival after desiccation of strains SL1344, S04698-09, S00065-06, S01569-10, S01960-05, H09152-0230, S02724-05, 10584-1997, 12005-1995, S05968-02, 11020-1996 and 3203-1997 was based on a method previously described. 71 Briefly, bacteria were cultured in LB broth at 37 °C with shaking for 18 h, harvested by centrifugation, washed with phosphate-buffered saline pH7.4 (PBS), re-suspended in PBS and adjusted to an OD 600nm of 1.0. Aliquots of 0.05 ml of cell suspension were added to a polystyrene 96-well plate (Nunc) and desiccated at 22 °C, 36% relative humidity (RH). Desiccated plates were stored in a sealed vessel containing saturated potassium acetate solution to maintain RH at 36%, and incubated at 22 °C for 24 h. Cells were re-suspended in 0.2 ml PBS and viable counts were determined by culture of serial 10-fold dilutions on LB agar. The percentage survival was calculated from at least three biological replicates. To study biofilm formation of the same strains as for desiccation, liquid cultures were incubated statically in polystyrene 96-well plates at 22 °C for 24 hours, washed once with PBS pH7.4, and attached bacteria were stained with crystal violet and the absorbance read at 340 nm. Data points represent the mean of at least three biological replicates. Metabolic profiling using the OMNIlog microarray system To assess utilisation of carbon sources, eight strains (ST4/74, S04698-09, S00065-06, S01569-10, S05968-02, S01960-05, 11020-1996 and S07292-07) were cultured on LB agar overnight at 37 °C, inoculated into IF-0 medium containing tetrazolium dye, and added to a PM-1 plate (with 95 different carbon sources), according to the manufacturer's instructions (Biolog). Accumulation of purple indicator dye as a consequence of redox activity was measured every 15 minutes for 48 hours. Raw absorbance data were processed using R software and the opm package. 72
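The growth-rate calculation above and the area-under-the-curve summary applied to the OMNIlog data (described next) are simple to compute. The study performed this processing in R with the opm package; the numpy sketch below, with invented example values, is an illustration only.

```python
import numpy as np

def doubling_time_h(hours, cfu_per_ml):
    """Doubling time (h) from viable counts in the exponential phase:
    regress log2(CFU) on time; doubling time is 1 / slope."""
    slope, _intercept = np.polyfit(hours, np.log2(cfu_per_ml), 1)
    return 1.0 / slope

def respiration_auc(minutes, absorbance):
    """Total respiration for one carbon source as the area under the
    dye-accumulation curve (trapezoidal rule)."""
    return np.trapz(absorbance, minutes)

# Hypothetical exponential-phase counts at 1, 3, 5 and 7 h post-inoculation.
t = np.array([1.0, 3.0, 5.0, 7.0])
cfu = np.array([2e5, 1.6e6, 1.3e7, 1.0e8])
print(f"doubling time ~ {doubling_time_h(t, cfu):.2f} h")

# Hypothetical OMNIlog readings every 15 min for 48 h (toy sigmoid curve).
minutes = np.arange(0, 48 * 60 + 1, 15)
od = 1.0 / (1.0 + np.exp(-(minutes - 900) / 200.0))
print(f"respiration AUC ~ {respiration_auc(minutes, od):.0f}")
```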
The area under the curve was used as a metric for total respiration of the indicated carbon sources and plotted as a heatmap using R software. Epithelial cell invasion assays IPEC-J2 porcine epithelial cells and T84 human epithelial cells were cultured and routinely passaged in Dulbecco's Modified Eagle's Medium (DMEM) containing high glucose or low glucose for each cell line, respectively. 24-well tissue culture plates were seeded with 1 × 10 5 cells/ml of each epithelial cell line and incubated overnight at 37 °C in 5% CO 2 . Stationary-phase cultures of S. Typhimurium strains SL1344, SL1344 Δ invA :: aph , S04698-09, S00065-06, S01569-10, S01960-05, H09152-0230, S02724-05, 10584-1997, 12005-1995, S05968-02, 11020-1996 and 3203-1997 were adjusted to OD 600nm ~1.0 in PBS and used to inoculate epithelial cells at a multiplicity of infection (MOI) of 20. Cells were incubated for 30 minutes before washing 5 times with PBS, re-suspending in DMEM + gentamicin (100 mg/l) and incubating for a further 30 minutes at 37 °C in a 5% CO 2 atmosphere to kill extracellular bacteria. The medium was then replaced with DMEM + gentamicin (10 mg/l) and the plates were incubated for 60 minutes at 37 °C in a 5% CO 2 atmosphere. Plates were then incubated again at 37 °C in a 5% CO 2 atmosphere for 90 minutes, before a final wash with PBS + 0.1% Triton X-100. Plates were left for 2 minutes and cells were disrupted by vigorous pipetting for one minute, followed by serial dilution and culture on LB agar to determine viable counts. Streptomycin pre-treated mouse infections Ethical approval for the experimental mouse infections was granted following review by the Animal Welfare and Ethical Review Body (University of East Anglia, Norwich, UK) under project licence PPL 70/8597. A streptomycin-pretreated model of colitis was used. 43 , 73 Groups of five female 6–9-week-old specific-pathogen-free C57BL/6 mice were randomly chosen and housed together in individually ventilated cages with food and water ad libitum . Mice were administered 20 mg of streptomycin sulphate by oral gavage 24 h prior to inoculation of S. Typhimurium. Five S. Typhimurium strains (S00065-06, S01569-10, S01960-05, 11020-1996 and S07292-07) were cultured for 18 hours in 50 ml LB broth with shaking, and approximately 5 × 10 6 CFU were inoculated orally in 0.2 ml of PBS pH7.4 by gavage. A control group of five mice were administered only the PBS buffer. PBS pH7.4 was used to mimic physiological osmolarity and pH to maintain viability. On day 3 post-inoculation, mice were killed by asphyxiation with a slowly rising CO 2 concentration, and the caecum was aseptically removed. An approximately 3 mm section of the caecum halfway down the organ from the ileal junction was removed and fixed in formalin for histopathological examination of 5 μm thin sections stained with haematoxylin and eosin. Approximately 5 mg of caecum tissue was removed, placed in RNAlater (Thermo Fisher) and stored at −80 °C. The remaining caecum (approximately two thirds) was homogenised in sterile PBS pH7.4, and serial dilutions were plated on LB agar containing 0.05 mg/ml kanamycin, incubated at 37 °C for 18 h and viable counts enumerated. Determination of Nos2 and Cxcl1 expression in mouse caecum tissue RNA from caecum tissue was prepared using guanidinium thiocyanate-phenol-chloroform extraction with Tri Reagent (Merck). Tissue was homogenized in 1 ml of Tri Reagent and disrupted using 1.4 mm zinc oxide beads in a bead beater, and tissue debris was removed by centrifugation.
0.2 ml of nuclease-free water and 0.2 ml of chloroform were added to the supernatant, mixed and centrifuged for 15 minutes at 12,000 × g. 0.5 ml of isopropanol was added to the upper aqueous phase and centrifuged for 15 minutes at 12,000 × g. The resulting RNA pellet was washed twice with 70% ethanol, briefly dried and resuspended in 0.02 ml of RNase-free water. The relative abundance of Nos2 and Cxcl1 mRNA was determined by quantitative RT-PCR using primers specific to the test genes and the Gapdh housekeeping gene as the control (Supplementary Table 1 ), as described previously. 74 Mixed-strain infection of pigs Ethical approval for experimental infections of pigs was granted following review by the Moredun Research Institute Ethical Review Committee under project licence PCD70CB48. This infection model has previously revealed differential virulence and tissue tropism of S. enterica serovars in pigs. 45 , 75 Pigs were confirmed to be Salmonella -free by selective enrichment of faeces as previously described, 76 and therefore ethical considerations precluded the inclusion of negative controls. Four 6-week-old Landrace × Large White × Duroc pigs were challenged orally with a mixed-strain inoculum as previously described. 76 We tested six strains, including the U288 and ST34 reference strains and two additional strains of each variant that exhibited typical pathogenicity in the mouse colitis model (S01960-09, S07292-07, 11020-1996, S04698-09, S00065-06 and S01569-10), in a mixed-inoculum assay. A mixed-strain inoculum was prepared by combining equal volumes of individual cultures of the six strains grown statically at 37 °C for 16 hours in LB broth supplemented with 50 mg/l kanamycin, which were standardized by optical density (OD 600nm ) to contain 8.9 log 10 CFU/ml. The number of CFU in the inoculum was determined by plating 10-fold serial dilutions of the inoculum on MacConkey agar containing 50 μg/ml kanamycin. Aliquots of the inoculum were stored at −20 °C for DNA extraction. Five ml of the mixed-strain inoculum was mixed with 5 ml of antacid [5% Mg(SiO 3 ) 3 , 5% NaHCO 3 , and 5% MgO in sterile distilled water] to promote colonization and administered orally by syringe before the morning feed. Pigs were fed as normal following challenge. Rectal temperatures were recorded every 24 hours and faecal samples were collected at 24 and 48 hours post-infection. The endpoint of the experiment was humane euthanasia at 72 hours post-inoculation. A section of distal ileal mucosa, mesenteric lymph nodes (MLNs) draining the distal ileal loop, a section of spiral colon, colonic lymph nodes (CLNs) and a section of liver were collected. Lymph nodes were trimmed of excess fat and fascia, and the sections of distal ileum and spiral colon were washed gently in PBS to remove nonadherent bacteria. One gram of each tissue was homogenized in 9 ml of PBS in gentleMACS M tubes using the appropriate setting on the gentleMACS dissociator (Miltenyi Biotec). Homogenates were filtered through 40-μm-pore-size filters and an aliquot was used to determine viable counts. The remaining homogenate was spread onto 10 MacConkey agar plates (500 μl per plate) containing 50 μg/ml kanamycin and incubated overnight at 37 °C. The bacterial lawns recovered from each sample were collected by washing with PBS, and the pellets were stored at −20 °C for DNA extraction. Genomic DNA (gDNA) was extracted from the pellets using the NucleoSpin tissue kit (Macherey-Nagel), according to the manufacturer's instructions.
The quality and quantity of DNA were assessed initially by NanoDrop 3300 (Thermo Scientific), and samples with an A 260/280 of ≥1.8 were considered suitable for library preparation. These were confirmed further by using the DNA ScreenTape (Agilent Technologies) and the Qubit double-stranded DNA (dsDNA) BR assay kit (Life Technologies), respectively. One microgram of gDNA with a DNA integrity number (DIN) of ≥6 was used for library preparation using the TruSeq PCR-free library preparation kit (Illumina) according to the manufacturer's protocol. Whole-genome sequencing on the HiSeq system (Illumina), followed by bioinformatics analysis, was performed as previously described, 76 with the exception that strains were quantified by mapping sequence data to the unique WITS tag sequence incorporated chromosomally into each strain. Sequence data were submitted to the NCBI SRA database (Supplementary Table 2 ). For each strain, the percentage in a population was calculated as the average WITS frequency × 100. Data are presented as the mean ± standard error of the mean (SEM). Single-strain infection of pigs From the strain phenotypes identified in the mixed-strain infection, one representative strain of each of the ST34 and U288 clades that exhibited the greatest colonisation was selected for in vivo phenotype validation: S04698-09 and 11020-1996, respectively. The strains were grown statically at 37 °C for 16 hours in LB broth supplemented with 50 mg/l kanamycin, and the optical densities (OD 600nm ) were standardized to contain 9.2 log 10 CFU/ml, which was confirmed retrospectively by plating 10-fold serial dilutions on MacConkey agar. Groups of four Salmonella -free pigs were challenged orally with 5 ml of each strain as described above. Rectal temperatures were recorded every 12 hours and faecal samples were collected every 24 hours post-infection. Clinical scores were calculated for each animal using their temperatures, physiological signs and faecal consistency. At 72 hours post-infection, tissue samples were collected and processed for viable counts as described above. The bacterial load of each strain in each tissue of the infected pigs was determined. Data are presented as the mean ± SEM. Statistics and reproducibility Statistical tests were performed in GraphPad Prism version 8.00 (GraphPad Software). The viable counts of bacteria in each case are presented as mean ± SEM, and differences between strains were analysed using a two-sided Mann-Whitney test of significance. Area under the curve analysis followed by a two-sided Mann-Whitney test was used to analyse the cumulative clinical scores of the infected pigs during single-strain infections. P values of ≤0.05 were considered to be statistically significant. The exact sample sizes for each experimental group are detailed in the text or in the data repository. Data availability All data are freely available in publicly accessible databases under the accession numbers reported in Supplementary Data and previously reported. 14 , 57 Source data for main figures can be accessed from the figshare database. 77 Materials and all other data are available from the authors on reasonable request.
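To make the WITS-based quantification concrete, the sketch below derives strain percentages from counts of reads containing each strain's unique tag (percentage = tag frequency × 100, as above). The tag sequences and reads are invented placeholders; the study's actual quantification mapped sequence data to the chromosomal tag of each strain as described in reference 76.

```python
# Illustrative only: the tags and reads below are invented, not the
# sequences used in the study.
from collections import Counter

WITS_TAGS = {                       # strain -> unique tag (hypothetical)
    "S04698-09": "ACGTACGTAC",
    "11020-1996": "TTGCAGGTCA",
    "S01960-09": "GGATCCAGTT",
}

def strain_percentages(reads):
    """Count reads containing each strain's tag; return percentages."""
    counts = Counter()
    for read in reads:
        for strain, tag in WITS_TAGS.items():
            if tag in read:
                counts[strain] += 1
    total = sum(counts.values())
    if total == 0:
        return {strain: 0.0 for strain in WITS_TAGS}
    return {strain: 100.0 * counts[strain] / total for strain in WITS_TAGS}

# Toy usage with three fake reads: ~66.7% / 33.3% / 0%.
reads = ["NNACGTACGTACNN", "NNTTGCAGGTCANN", "NNACGTACGTACNN"]
print(strain_percentages(reads))
```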
Variants of concern (VOCs) and variants of interest (VOIs) have become familiar terms due to the current pandemic, but variants of familiar pathogens such as salmonella also present a threat to human and animal health. To better understand the different threats these variants pose, a collaboration led by Professor Rob Kingsley from the Quadram Institute and Professor Mark Stevens from the Roslin Institute, working with scientists from the Earlham Institute, has focused on common variants of salmonella present in pigs in the UK. Their findings, published recently in the journal Communications Biology, have shown that despite being extremely closely related, variants can have very different effects on the health of the pig and also on the risks they pose to food safety. Salmonella Typhimurium is one of the most common types of salmonella. It is a major cause of human gastroenteritis, notably from consuming undercooked pork products or as a result of cross-contamination of foods consumed raw. This bacterial pathogen is also a concern to the pork industry as it can affect the health, productivity and welfare of pigs. Salmonella Typhimurium is relatively common in pig herds globally, and processes implemented in abattoirs are designed to prevent contamination of meat destined for the food chain. Bacterial pathogens continually evolve to exploit new ecological niches. Human activity, including agricultural practices and how we use medicines and antibiotics, may drive the emergence of new variants. Understanding exactly how this happens is crucial to countering the consequences of new variants on human and animal health, and the answers lie in the genes of the bacteria. Genome sequencing can read all of an organism's genes and can help by resolving relationships between variants, identifying variants that are evolving as they enter a new niche, and pinpointing potential functional changes that affect their ability to cause disease or survive in the food chain. The team worked with Public Health England and the Animal and Plant Health Agency to examine salmonella isolates from human clinical infections during routine diagnostics or from animals during routine surveillance, with funding from the Biotechnology and Biological Sciences Research Council, part of UKRI. Using whole-genome sequencing, the research team found that two types of S. Typhimurium, dubbed U288 and ST34, have been circulating in UK pigs since 2003. Surprisingly, U288 is rarely associated with human infection, while ST34 accounts for over half of all S. Typhimurium infections from all sources, not just pigs. What is more, the two types of salmonella infected pigs differently, resulting in distinct levels of colonisation of the intestine and surrounding tissue, and in different disease severity in the first few days after infection. The U288 variant grew more slowly in the lab and was more sensitive to stress associated with desiccation. These characteristics may affect its ability to survive in the food chain. Inspection of changes in the genome sequence of U288 indicated that this variant emerged by a unique set of changes that occurred within a short period of time, probably between the years 1980 and 2000. The researchers believe that these changes hold the key to understanding how this variant interacts differently with pigs during infections, in the lab, and potentially in the food chain.
"We have seen these types of changes before in variants of salmonella that have become adapted to specific host species and cause a more invasive disease, including the type of salmonella that causes typhoid fever in people but does not affect other species," said Prof. Rob Kingsley, a group leader at the Quadram Institute and Professor of Microbiology at the University of East Anglia. "One of the interesting findings is just how rapidly pathogens can adapt, and how even a few genomic changes can lead to very different disease outcomes," said Dr. Matt Bawn a researcher involved in the study based at both the Earlham Institute and Quadram Institute. Prof. Stevens, chair of microbial pathogenesis and a deputy director at The Roslin Institute, University of Edinburgh, added "Understanding how variants of salmonella emerge and pinpointing the genetic signatures responsible for adaptation to different hosts and the ability to produce disease will provide opportunities to improve diagnostics and surveillance. In turn this will help to predict the risk that salmonella variants pose to animal health and food safety."
10.1038/s42003-021-02013-4
Medicine
Obesity impairs the brain's response to nutrients, suggests study
Mireille Serlie, Brain responses to nutrients are severely impaired and not reversed by weight loss in humans with obesity: a randomized crossover study, Nature Metabolism (2023). DOI: 10.1038/s42255-023-00816-9. www.nature.com/articles/s42255-023-00816-9 Journal information: Nature Metabolism
https://dx.doi.org/10.1038/s42255-023-00816-9
https://medicalxpress.com/news/2023-06-obesity-impairs-brain-response-nutrients.html
Abstract Post-ingestive nutrient signals to the brain regulate eating behaviour in rodents, and impaired responses to these signals have been associated with pathological feeding behaviour and obesity. To study this in humans, we performed a single-blinded, randomized, controlled, crossover study in 30 humans with a healthy body weight (females N = 12, males N = 18) and 30 humans with obesity (females N = 18, males N = 12). We assessed the effect of intragastric glucose, lipid and water (noncaloric isovolumetric control) infusions on the primary endpoints cerebral neuronal activity and striatal dopamine release, as well as on the secondary endpoints plasma hormones and glucose, hunger scores and caloric intake. To study whether impaired responses in participants with obesity would be partially reversible with diet-induced weight loss, imaging was repeated after 10% diet-induced weight loss. We show that intragastric glucose and lipid infusions induce orosensory-independent and preference-independent, nutrient-specific cerebral neuronal activity and striatal dopamine release in lean participants. In contrast, participants with obesity have severely impaired brain responses to post-ingestive nutrients. Importantly, the impaired neuronal responses are not restored after diet-induced weight loss. Impaired neuronal responses to nutritional signals may contribute to overeating and obesity, and ongoing resistance to post-ingestive nutrient signals after significant weight loss may in part explain the high rate of weight regain after successful weight loss. Main The orosensory effects of food have long been identified as the primary driving force behind food intake beyond homeostatic needs 1 , 2 . In addition to the palatability of nutrients, increasing evidence shows a potent role for signals that arise after the ingestion of food—so-called post-ingestive nutrient signals—in the regulation of eating behaviour 3 . For instance, Trpm5 −/− mice that lack sweet taste transduction still develop a clear preference for sucrose over noncaloric solutions 4 , indicating a calorie-dependent and taste-independent effect on food intake. Other studies show that mice are receptive to appetition, the process of flavour conditioning associated with intragastric infusion of nutrients 5 , 6 . Several post-ingestive nutrient signals convey the presence of nutrients in the gastrointestinal tract to the brain and may contribute to the regulation of eating behaviour via the gut–brain axis. Firstly, gastrointestinal vagal nerve afferents are stimulated upon the presence of nutrients and mediate anorexigenic effects via the brainstem to several downstream brain regions 7 , 8 . Secondly, nutrients in the gastrointestinal lumen induce an endocrine response and facilitate the digestion, absorption and subsequent metabolism of nutrients. The post-ingestive endocrine response includes, but is not limited to, enhanced release of insulin from pancreatic beta cells, glucagon-like peptide-1 (GLP-1) from intestinal L cells 9 , 10 , and suppressed release of ghrelin from gastric X/A-like cells 11 . Gastrointestinal vagal nerve afferents and multiple brain regions express receptors for these hormones, but the exact mechanisms by which insulin and GLP-1 mediate anorexigenic effects and ghrelin mediates orexigenic effects remain to be elucidated 12 . 
Finally, changes in the concentrations of nutrients and/or metabolites in the portal or systemic circulation provide a direct post-ingestive signal of nutrient availability for different brain regions involved in the central regulation of food intake 13 , 14 . While the post-ingestive signals induced by glucose versus lipid consumption differ in several aspects, administration of both carbohydrate and lipid promotes the release of striatal dopamine in rodents 7 , 15 . The striatum is involved in the rewarding and motivational aspects of food intake 7 , 13 . A study in humans without obesity reported a biphasic striatal dopamine response immediately and approximately 20 min following the consumption of a milkshake solution, which was hypothesized to reflect both an immediate orosensory and delayed post-ingestive response 16 . In rodents, striatal dopamine release is positively and proportionally related to intragastric fat infusions, and intact striatal dopamine signalling is required to reduce subsequent caloric intake in proportion to the amount of fat directly infused into the stomach 17 . Interestingly, long-term exposure to a high-fat diet resulted in an impaired striatal dopaminergic response to an intragastric lipid infusion 18 . Taken together, these studies demonstrate that, (i) in addition to direct gustatory effects of nutrients, post-ingestive nutrient signals contribute to the regulation of feeding behaviour; and (ii) impaired striatal dopamine signalling after prolonged exposure to high-calorie nutrients may promote subsequent overeating and obesity. Despite these intriguing mostly preclinical studies, little is known about the role of post-ingestive nutrient signals in human physiology or obesity development. A few studies have assessed the response of the brain to the isolated, orosensory-independent, post-ingestive effects of glucose or dodecanoate (a C12 fatty acid) by means of intragastric infusions using functional magnetic resonance imaging (fMRI) 19 , 20 , 21 . The intragastric glucose infusion decreased blood oxygen level-dependent (BOLD) signal in the striatum and several other brain regions, including the brainstem, hypothalamus and thalamus 19 . The intragastric dodecanoate infusion increased BOLD signal in the brainstem, pons, hypothalamus, cerebellum and motor cortical areas 20 , 21 . However, these studies were conducted in participants without obesity and did not assess the effect of such intragastric infusions on striatal dopamine release. Given the devastating impact of obesity worldwide, it is highly relevant to determine whether post-ingestive nutrient signals and/or the subsequent striatal dopamine response are impaired in humans with obesity. On the basis of the available data, we hypothesized that intragastric infusions of glucose and lipid modulate cerebral neuronal activity and striatal dopamine release in lean humans and that these responses are impaired in humans with obesity. Finally, we hypothesized that an impaired response to post-ingestive nutrient signals is partially reversible with diet-induced weight loss. In this regard, we have previously demonstrated that obesity-associated changes in the striatal dopamine system are partially reversed by bariatric surgery-induced weight loss in women 22 . 
To test our hypotheses, we recruited lean individuals and individuals with obesity and evaluated the effects of direct, intragastric infusions of tap water (isovolumetric and noncaloric control), glucose and lipids on cerebral neuronal activity using fMRI and on striatal dopamine release using single-photon emission computed tomography (SPECT) imaging (Fig. 1 ). Participants with obesity were studied before and after a dietary intervention aimed at reducing body weight by 10%. Fig. 1: Schematics of the study design and main study procedures. a , Overview of the study design and overall timing of procedures in the lean participants and participants with obesity. Each imaging session was performed on a separate study day, and the order of sessions (that is, to assess the effects of intragastric (IG) glucose, lipids or water) was randomized. b , Overview of an fMRI study day. c , Overview of a SPECT imaging study day. Full size image Results Obesity-related insulin resistance improved with weight loss Twenty-eight lean participants with body mass index (BMI) ≤ 25 kg/m 2 and 30 participants with obesity and BMI ≥ 30 kg/m 2 were included in the analysis (Table 1 and Fig. 2 ). In line with our expectations, obesity was associated with increased fasting glucose and insulin concentrations, indicating insulin resistance 23 , and with decreased fasting ghrelin at baseline 24 , 25 . Following the baseline assessments, participants with obesity enrolled in a supervised personalized dietary weight loss programme aimed at reducing body weight by 10% over a period of 12 weeks. Twenty-six participants with obesity completed the dietary intervention. As shown in Table 1 , the intervention was successful at promoting the intended weight and body fat loss. Notably, dietary weight loss was not associated with a decrease in resting energy expenditure (REE), suggesting maintenance of lean body mass 26 , and it was associated with improved insulin sensitivity, as reflected by decreased fasting glucose and insulin 27 . In addition, dietary weight loss was associated with a decrease in fasting GLP-1 to levels below those observed in the lean participants. Table 1 Baseline characteristics of study participants ( n = 58) Full size table Fig. 2: Participant flow diagram. Overview of the number of screened and randomized participants. Full size image Post-ingestive metabolic response to intragastric nutrients To evaluate the metabolic and cerebral effects of post-ingestive nutrient signals, we used direct intragastric infusions of glucose (125 g in 250 ml of water; 500 kcal), lipid (250 ml of 20% Intralipid; 500 kcal) or water (250 ml of tap water; noncaloric isovolumetric control). Participants underwent all studies in a random assignment and crossover design. They were blinded for the type of infusion, and the infusions were administered via nasogastric tube to eliminate all anticipatory and orosensory effects. This design allowed us to specifically isolate the post-ingestive effects of these nutrients. As expected, intragastric glucose infusions rapidly and strongly raised plasma glucose and insulin concentrations in all participants, whereas intragastric lipid infusions only slightly increased circulating insulin (Fig. 3a,b ). Glucose and lipid infusions decreased acylated ghrelin and increased total GLP-1 levels in both groups (Fig. 3c,d ). Overall, these data show an evident metabolic response within the first 30 min after the intragastric infusion, which is when we assessed cerebral responses using fMRI. Fig. 
3: Effect of intragastric glucose and lipid infusions on glucose, insulin and gut hormones. a , Plasma glucose. b , Plasma insulin. c , Plasma acylated ghrelin. d , Total plasma GLP-1. Green symbols indicate lean participants; red and blue symbols indicate participants with obesity before and after weight loss, respectively. Data are the mean ± s.e.m. and compared by analysis of variance (ANOVA). Effect of time was assessed in lean participants and participants in the pre-diet condition. The interaction between time and group was assessed for lean versus pre-diet and pre-diet versus post-diet. Source data Full size image In addition, we evaluated the effects of the intragastric infusions on feelings of hunger (visual analogue scale (VAS) scores) and subsequent ad libitum caloric intake. Independent of the nature of the intragastric infusion, lean participants and participants with obesity in the pre-diet condition did not report decreased hunger scores after the fMRI scan (VAS −0.1 ± 1.8, P = 0.303); in the participants with obesity, this did not change following the dietary intervention (VAS 0.1 ± 1.8, P = 0.649). However, this secondary outcome measure may have been underpowered 28 . The number of calories consumed by lean participants did not depend on the nature of the received intragastric infusion ( P = 0.401). Compared to lean participants, participants with obesity consumed more calories in the pre-diet condition (pre-diet 538 kcal versus lean 404 kcal, P = 0.013) and this did not change following the diet (pre-diet 529 kcal versus post-diet 543 kcal, P = 0.531). Post-ingestive brain activity is blunted in individuals with obesity To evaluate the isolated post-ingestive effects of glucose and lipid on brain activity, participants underwent three separate fMRI scanning sessions following the intragastric infusion of glucose, lipid and water (control; Fig. 1b ). During each fMRI scan, the whole-brain BOLD response was continuously measured for a duration of 40 min. The BOLD signal is the local ratio of deoxyhaemoglobin to oxyhaemoglobin, based on increased cerebral blood flow in response to neuronal activity 29 . Eight minutes after the start of the fMRI scan, participants received the intragastric glucose, lipid or water infusion (250 ml in 5 min). We used exploratory voxel-wise and targeted region-of-interest (ROI) analyses to compare whole-brain and striatal BOLD signal responses to the intragastric infusions, respectively. We first performed a whole-brain voxel-wise analysis to identify the typical cerebral BOLD signal response to intragastric glucose or lipids (both corrected for the BOLD response to intragastric water) in humans with normal body weight. Our data show that glucose and lipid both induce multiple post-ingestive effects on brain activity in lean participants (Table 2 ). We observed decreased BOLD signal in striatal, frontal, insular, limbic, occipital, parietal and temporal regions at 10 to 15 min after the intragastric glucose infusion (that is, time bins T5 and T6); a more prolonged neuronal response was observed in the nucleus accumbens (NAc), putamen and frontal pole (Table 2 and Supplementary Table 1 ). 
In contrast, after the intragastric lipid infusion, we observed decreases in BOLD signal in frontal, insular, limbic, parietal and temporal regions at 20 to 22.5 min (that is, time bin T9); here, a more prolonged response was observed in frontal, insular and parietal regions and a delayed response was observed in the occipital lobe (Table 2 and Supplementary Table 2 ). Importantly, whole-brain voxel-wise analyses revealed no significant nutrient-induced changes in BOLD signal in any region in the participants with obesity, and there were no differences between the pre-diet and post-diet conditions (Table 2 and Supplementary Tables 1 and 2 ). These data indicate that brain regions involved in the regulation of eating behaviour respond in a nutrient-specific manner to the post-ingestive effects of glucose and lipid. Moreover, our observations that this physiological response is absent in participants with obesity and is not restored following diet-induced weight loss suggest that impaired post-ingestive nutrient sensing may play a role in obesity and may also contribute to the high rate of weight regain after diet-induced weight loss. Table 2 The ‘lean brain phenotype’ Full size table Impaired post-ingestive striatal responses are not reversible with weight loss The striatum has an essential role in the regulation of eating behaviour 17 . It has been proposed to also function as a post-ingestive caloric sensor, and it coordinates an appropriate behavioural response to nutrient exposure 17 . Therefore, we complemented the explorative voxel-wise analysis with a targeted ROI analysis to test our hypothesis that obesity is associated with a blunted striatal response to post-ingestive nutrient signals. To this end, we assessed the post-ingestive effects of glucose and lipids on the BOLD signal in striatal subregions in lean participants and participants with obesity before versus after weight loss. The intragastric administration of both glucose and lipid to lean participants induced strong decreases in BOLD signal in both the NAc and the putamen (Fig. 4a,b and Supplementary Table 3 ). We observed no post-ingestive effects on the caudate nucleus (Fig. 4c and Supplementary Table 3 ). This physiological cerebral response to these nutrients was severely impaired in participants with obesity: there were no changes in BOLD signal in any of the striatal subregions, except for an intragastric glucose-induced effect in the putamen (Fig. 4a–c and Supplementary Table 3 ). More importantly, dietary weight loss in the participants with obesity was not associated with a restoration of the cerebral responses to post-ingestive nutrient signals (Fig. 4a–c and Supplementary Table 3 ). A direct (paired) comparison between the BOLD signal responses in the participants before versus after the dietary intervention revealed no significant differences in any striatal region. Taken together, these data show that the striatal response to post-ingestive nutrient signals is impaired and not reversible with significant weight loss in humans with obesity. Fig. 4: BOLD signal following intragastric glucose and lipid infusions (controlled for intragastric water infusions) in lean participants and participants with obesity before and after weight loss. a , b , Changes in NAc BOLD signal over time after intragastric glucose ( a ) or lipid ( b ) administration. c , d , Changes in putamen BOLD signal over time after intragastric glucose ( c ) or lipid ( d ) administration. 
e , f , Changes in caudate nucleus BOLD signal over time after intragastric glucose ( e ) or lipid ( f ) administration. Data are the mean ± s.e.m. Green symbols indicate lean participants; red and blue symbols indicate participants with obesity before and after weight loss, respectively. Grey shaded area indicates the time frame (5 min) of intragastric infusion. Credits for images at left: ROI masks were obtained from the Harvard–Oxford subcortical atlas 64 , 65 , 66 , 67 and are shown overlaid on a MNI152 brain (Copyright (C) 1993–2009 Louis Collins, McConnell Brain Imaging Centre, Montreal Neurological Institute, McGill University) 73 . * P < 0.05 versus baseline (two-sided one-sample t -test). The exact P values are available in Supplementary Table 3 . Source data Full size image Striatal dopamine release is impaired in humans with obesity The neurotransmitter dopamine is involved in the motivational and rewarding aspects of food intake. In the context of obesity, blunted dopaminergic responses to nutrients have been suggested to contribute to energy consumption beyond homeostatic needs. Thus, to test our hypothesis—that post-ingestive glucose and lipid induce striatal dopamine release in lean participants, but not in participants with obesity—we measured striatal dopamine release in response to intragastric glucose and lipids (Fig. 1c ). To this end, we applied SPECT imaging in combination with the radiotracer [ 123 I]-iodobenzamide ([ 123 I]IBZM) to determine striatal dopamine D 2/3 receptor (D 2/3 R) availability at baseline and after nutrient infusions (Fig. 5a ). Given that acute changes in D 2/3 R availability in response to a stimulus are a measure of striatal dopamine release 30 , this design allowed us to determine the post-ingestive effects of glucose and lipids on striatal dopamine release in humans in vivo. Fig. 5: Dopamine release following intragastric glucose and lipid infusions in lean participants and participants with obesity before and after weight loss. a , Representative example of a T1-weighted anatomical brain MRI overlaid with the co-registered SPECT image of a lean participant showing the distribution of radiotracer uptake, with strongest uptake of [ 123 I]IBZM in the bilateral striata. b , Post-ingestive effects of glucose on striatal dopamine release. c , Post-ingestive effects of lipids on striatal dopamine release. Nutrient-induced striatal dopamine release (percentage) was calculated as: the percentage change in striatal [ 123 I]IBZM BP ND × −1. Data are individual participants with mean ± s.e.m. * P < 0.05 versus baseline (one-sided one-sample t -tests). Source data Full size image Intragastric infusion of glucose induced striatal dopamine release in all groups, with no group differences between the lean participants and those with obesity (Fig. 5b ). In contrast, intragastric infusion of lipid induced striatal dopamine release in lean participants only. In the participants with obesity, this impaired dopaminergic response to lipid was not reversible with diet-induced weight loss (Fig. 5c ). These data suggest that striatal dopamine release is not nutrient specific in lean participants, supporting the hypothesis that the striatum functions as a general calorie sensor. However, the blunted striatal dopamine response to intragastric lipid, but not to glucose, in participants with obesity, points to impaired lipid sensing in nutrient-specific pathways that are involved in dopamine release. 
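The dopamine-release metric defined in the figure legend is a simple transform of the change in tracer binding potential. A minimal sketch with invented example values:

```python
def dopamine_release_percent(bp_baseline, bp_post):
    """Nutrient-induced striatal dopamine release (%): the percentage
    change in [123I]IBZM BP_ND multiplied by -1, so that a post-infusion
    decrease in D2/3R availability reads as positive release."""
    return -1.0 * 100.0 * (bp_post - bp_baseline) / bp_baseline

# Hypothetical example: BP_ND falls from 0.80 at baseline to 0.74 after
# the intragastric infusion, i.e. 7.5% dopamine release.
print(dopamine_release_percent(0.80, 0.74))  # 7.5
```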
Post-ingestive brain responses are not related to circulating hormones We observed substantial interindividual variation in the BOLD signal responses to intragastric glucose and lipid infusion (Fig. 4 ). Moreover, because it is largely unknown how nutrient-derived signals are relayed to the brain, we assessed whether the variation in brain activity can be explained by variation in postprandial levels of circulating nutrients or hormones. We found that changes in the striatal BOLD signal could not be predicted by changes in plasma glucose, insulin or ghrelin following the glucose or lipid infusion (Supplementary Table 4 ). In the lean participants, plasma GLP-1 excursions correlated negatively with changes in BOLD signal after the lipid infusion in the putamen and caudate nucleus (Fig. 6a,b ), indicating that GLP-1 release following intestinal exposure to lipids is associated with a decrease in striatal BOLD signal in lean individuals. We did not observe these correlations in the participants with obesity (Supplementary Table 4 ). Fig. 6: Strong GLP-1 release following intragastric lipids is associated with more pronounced changes in the striatal BOLD signal, only in lean participants. a , b , Scatterplots of the GLP-1 response to intragastric lipids versus changes in the BOLD signal in the putamen ( a ) and the caudate nucleus ( b ). Data are individual participants and were evaluated using Pearson’s coefficient ( n = 27). After Bonferroni correction, a two-tailed P value of 0.0125 was considered significant. The solid green line represents the regression line for the lean participants. Green symbols indicate lean participants. AUC, area under the curve; iAUC, incremental area under the curve. Source data Full size image Discussion In this study, we demonstrate the differential post-ingestive (that is, orosensory and preference-independent) effects of isocaloric intragastric glucose and lipids on neuronal activity in brain regions involved in the regulation of eating behaviour, as well as on striatal dopamine release in lean adults. Moreover, we show that most of these physiological responses to intragastric nutrients are impaired in humans with obesity, with no signs of reversibility after 12 weeks of dietary weight loss. Taken together, these findings support the hypotheses that: (i) glucose and lipid differentially affect brain regions involved in the regulation of eating behaviour through post-ingestive signals; (ii) impaired post-ingestive nutrient signalling may contribute to pathological eating behaviour, overeating and obesity; and (iii) the persistence of these disturbances after diet-induced weight loss may contribute to the high incidence of weight regain after dietary interventions. Physiological brain responses to intragastric nutrients Our unique study design, including the blinded administration of nutrients via nasogastric tube and the combination of state-of-the-art functional and molecular neuroimaging techniques, allowed us to evaluate the isolated post-ingestive effects of glucose and lipids on whole-brain neuronal activity and striatal dopamine release. By studying both lean participants and patients with obesity, we were able to first identify the physiological brain responses to intragastric glucose and lipids and subsequently determine that these responses are impaired in humans with obesity. 
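As an aside on the correlation analysis described earlier in this section, the reported significance threshold of 0.0125 equals 0.05/4, presumably a Bonferroni correction over four candidate predictors; that interpretation, and all values in the sketch below, are assumptions for illustration only.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Simulated stand-ins for n = 27 lean participants: GLP-1 iAUC after the
# lipid infusion versus the change in putamen BOLD signal (invented data).
glp1_iauc = rng.normal(100.0, 30.0, 27)
bold_change = -0.01 * glp1_iauc + rng.normal(0.0, 1.0, 27)

r, p = stats.pearsonr(glp1_iauc, bold_change)

# Bonferroni-adjusted two-tailed threshold: 0.05 / 4 = 0.0125 (assuming
# four candidate predictors, as the reported threshold suggests).
alpha = 0.05 / 4
print(f"r = {r:.2f}, p = {p:.4f}, significant: {p < alpha}")
```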
Intragastric glucose and lipids swiftly and strongly alter brain activity in regions that have previously been implicated in the physiological regulation of eating behaviour 31 , 32 , 33 . Intriguingly, the spatial and temporal distribution of the responses differed substantially between the glucose and lipid infusions. Glucose administration had the most pronounced effects in the striatum and the frontal pole, regions involved in important aspects of eating behaviour, including reward expectancy and calculation, executive control and decision-making 31 . The effects of the lipid infusion were most notable in the insula and frontal cortex, regions involved in the integration of internal and external stimuli, the encoding of reinforcing stimuli and the regulation of reward-related behaviour 32 , 33 . In addition, we observed important changes in brain activity within the first 10 min following intragastric glucose, whereas most of the brain responses to intragastric lipids occurred after 20 min or more. As a study in lean humans showed that early gastric emptying (<45 min) did not differ between intragastric glucose and lipid 34 , this temporal difference is unlikely to be explained by nutrient-specific differences in gastric emptying. We thus demonstrate that glucose and lipids have orosensory and preference-independent, yet nutrient-specific physiological effects on the central nervous system, suggesting that the signalling pathways by which the brain is informed about ingested nutrients may also be nutrient specific 3 , 7 , 13 , 35 . In addition to these nutrient-specific BOLD responses, we observed a similar striatal response to intragastric glucose and lipids. We were specifically interested in this region as it has been proposed to function as a post-ingestive caloric sensor and to play an important role in the adaptation of feeding behaviour to changes in the caloric value of energy intake 17 . In the lean individuals, intragastric glucose and lipids both decreased BOLD signals in the NAc and putamen and both resulted in striatal dopamine release. We thus extend the findings of rodent studies, showing striatal dopamine release following intragastric glucose or lipids 7 , 15 , to healthy adults with normal body weight. Interestingly, in these rodent studies, the amount of striatal dopamine release correlated positively with the caloric load of the infusion, and the authors subsequently hypothesized the striatum to function as a general calorie sensor 17 . Although it goes beyond the scope of the present study to determine whether striatal dopamine release is directly related to caloric intake in humans, the similarity in the striatal response to isocaloric glucose or lipid infusions, but not to isovolumetric noncaloric water, points to a calorie-driven, rather than a nutrient-specific or volume-driven mechanism in this brain region. We found that most of the post-ingestive cerebral changes were not associated with post-ingestive changes in circulating glucose or hormones, suggesting that cerebral post-ingestive nutrient signalling is not directly modified by these metabolic signals. However, linear correlation analyses might be less suited to studying the complex dynamics between post-ingestive nutritional signals and activity of specific neuronal circuits. In addition, postprandial nutrient availability might be sensed in the portal vein instead of in the systemic circulation 13 . 
Nevertheless, some of the interindividual variation in the lipid-induced striatal BOLD response could be explained by variation in GLP-1 release upon lipid infusion. The exact mechanisms underlying the anorexigenic effects of GLP-1 remain to be elucidated 36 : possible ways by which this gut hormone may indirectly affect the striatum include paracrine stimulation of intestinal vagal nerve afferents that then project to the brainstem and onwards to the striatum 37 , endocrine stimulation of GLP-1 receptors in brain regions that are connected with the striatum 38 , 39 or direct action on striatal GLP-1 receptors 40 . Central nutrient resistance in people with obesity In contrast to the physiological effects we observed in the lean participants, in the participants with obesity the whole-brain voxel-wise analysis and the NAc ROI analysis showed no functional responses to intragastric glucose or lipids. These findings are important, because the NAc is involved in the incentive, reinforcing and motivational aspects of (anticipatory) food cues and closely interacts with the homeostatic regulatory circuitry 41 , 42 , 43 , 44 . In humans, increases in NAc activity provoked by visual food stimuli predict greater subsequent ad libitum food intake 45 , while a decrease in NAc activity correlates with reduced food palatability ratings 46 . In the present study, we therefore speculate that the observed nutrient-induced decrease in NAc BOLD signal in lean participants reflects a homeostatic response intended to devalue, and thus discourage, further food intake, because homeostatic needs have already been met. We further suggest that an absence of such a response, as observed in the participants with obesity, may lead to an inability to devalue the rewarding aspects of additional food intake, thereby promoting the consumption of food beyond homeostatic needs. Whereas the absence of whole-brain and NAc functional responses to intragastric glucose and lipid in participants with obesity suggests a general defect in the sensing of post-ingestive nutrient signals, the observation that the functional response of the putamen and the striatal dopamine response were only impaired for intragastric lipids, but not glucose, indicates a specific defect in post-ingestive lipid sensing/signalling in these hedonic brain areas. In this regard, post-ingestive glucose versus lipids likely signals to the striatum via different pathways 3 . In rodents, the lipid-induced striatal response depended on a functioning vagal nerve 7 . When fed a high-fat diet, mice exhibited impaired striatal dopamine release in response to intragastric lipids 18 , similarly to our participants with obesity. This was attributed to reduced intestinal synthesis of oleoylethanolamine 18 , a bioactive lipid metabolite that interacts with vagal sensory afferents 47 , 48 , and it will be highly relevant to determine if reduced intestinal oleoylethanolamine synthesis plays a role in human obesity development. Furthermore, a high-fat diet affected Agouti-related peptide (AgRP) neurons in mice 49 . This neuronal population in the hypothalamus has a critical role in the regulation of energy homeostasis and the transmission of homeostatic energy signalling to striatal dopamine signalling 50 , 51 . A negative energy status activates AgRP neurons to increase caloric intake 52 , 53 , whereas intragastric nutrients inhibit these neurons and stimulate striatal dopamine release 51 , 54 . 
However, in mice fed a high-fat diet, the inhibition of AgRP neurons specifically by intragastric fat was blunted, and it would be interesting to study whether this mechanism translates to human obesity 49 . The glucose-induced response, on the other hand, primarily relied on a vagal-independent mechanism (that is, portal-mesenteric sensing of glucose) 13 . Finally, a response within the putamen has been linked to the caloric value of nutrients and likely contributes to homeostatic feedback after food intake 17 , 55 . Thus, in the participants with obesity, an impaired putamen response to intragastric lipids may facilitate continued energy consumption after a lipid-rich meal. To summarize, human obesity is associated with both global and nutrient-specific defects in post-ingestive nutrient sensing. These impairments may contribute to overeating (and subsequent weight gain) and provide future targets for the development of therapies against obesity. Effects of diet-induced weight loss on brain responses As we have previously shown that bariatric weight loss partially reversed obesity-associated alterations in the striatal dopamine system 22 , we hypothesized that dietary weight loss also reverses such impairments. Here, however, we show that a 12-week supervised dietary intervention promotes significant weight loss and metabolic improvement, but does not restore the physiological brain response to post-ingestive glucose or lipids in humans with obesity, at least during the time course of this study. We reported earlier on a significant increase in striatal D 2/3 R availability upon weight loss 22 , but in that study weight loss was greater and induced by bariatric surgery, and the interval between baseline and follow-up SPECT was 2 years, compared to 12 weeks in the current study. Whether these systems are restored with long-term weight loss remains to be determined. Unfortunately, in practice, it may never come to this, because most patients regain weight within a few years of dieting: one meta-analysis of long-term weight loss trials showed that 50% of original weight loss was regained after 2 years and 80% after 5 years 56 . On the basis of the present study, we now postulate that persistent defects in post-ingestive nutrient signalling to the brain contribute to weight regain after dietary weight loss. If shown to be true, this would make these impairments even more appealing therapeutic targets, not only for weight loss but also for weight maintenance. Limitations Some nuances with respect to our findings should be noted. Firstly, due to radiation exposure, we only studied individuals over 40 years of age, and we cannot extrapolate our findings to younger adults. Secondly, we performed continued fMRI scanning up to 32 min after the start of the intragastric infusions, so our conclusions are limited to this timeframe. It is possible that any of the observed obesity-associated defects are not entirely absent but merely delayed. However, if the cerebral response to food intake is indeed delayed for more than 30 min, these results are still highly relevant, because defects within this timeframe may result in delayed meal termination and thus more energy consumption. Thirdly, a recent paper reported on the stimulatory effect of systemic rehydration on dopaminergic neuron activity in the ventral tegmental area (dopaminergic neurons in this brain area project to the ventral striatum) in water-deprived mice 57 .
This suggests that the control isovolumetric and noncaloric intragastric water infusion in the participants of the current study might have triggered neuronal activity affecting the BOLD signal and/or dopamine release. This would imply that the water condition cannot serve as an optimal neutral control condition. Since all participants were allowed to drink water and were not dehydrated when the functional imaging was performed, we expect the effect of water on the imaging outcomes to be small. Of note, the described effects of water on neuronal activation in the mice were smaller when the mice were not water-deprived and the volume of intragastric water used in that study was higher (approximately 30% of daily requirements versus 12.5% in our study). Fourthly, we standardized the volume and caloric load of the intragastric glucose and lipid infusion for all participants. The infused 500 kcal represented 33.3% versus 29.3% of caloric need in the resting state (measured using indirect calorimetry) for lean participants and participants with obesity, respectively. Although this difference of 4% is small, we cannot exclude that this may have contributed to the outcome. Fifthly, we cannot rule out an effect of the passage of time between the pre-diet and post-diet intervention scans in the participants with obesity. It was not feasible to re-scan the lean participants. Finally, our conclusions on reversibility remain limited to this specific dietary weight loss intervention. It is possible that (some of) the defects in post-ingestive nutrient signalling may be (partially) reversible by altering the macronutrient content, meal timing or duration of the intervention. Conclusion Data from the current study demonstrate that glucose and lipids have distinct post-ingestive effects on brain activity in regions involved in the regulation of eating behaviour as well as on striatal dopamine release in lean, healthy adults. Moreover, we found that humans with obesity have severely impaired functional and neurochemical responses to post-ingestive nutrients, with no signs of reversibility after 12 weeks of (successful) dietary weight loss. Taken together, these observations provide insights into the physiology of human eating behaviour, and the pathophysiology of obesity. The lack of reversibility after significant weight loss suggests that the high rate of weight regain after successful weight loss is in part explained by ongoing resistance to post-ingestive nutrient signals. Methods Participants Thirty participants with a healthy body weight and 30 participants with obesity were recruited from the general population in the Amsterdam metropolitan area (Fig. 2 ). Participants were eligible to participate if they: (i) were men aged 40–70 years or post-menopausal women aged 50–70 years, (ii) had either BMI ≤ 25 kg/m 2 (participants with a healthy body weight) or BMI ≥ 30 kg/m 2 (participants with obesity) and (iii) had stable weight (<10% weight change) for at least 3 months before the study assessments. 
Exclusion criteria were: (i) use of any medication, except for thyroid hormone, antihypertensive and/or lipid-lowering drugs; (ii) any somatic disorder, except for dyslipidaemia, hypertension and/or treated hypothyroidism; (iii) history of any psychiatric or eating disorder; (iv) lactose, gluten, soybean oil, egg or peanut intolerance; (v) shift work; (vi) irregular sleep habits; (vii) regular vigorous exercise (>3 h/week); (viii) restrained eating; (ix) childhood onset of obesity (at age <4 years); (x) substance abuse (smoking, alcohol >3 units/day, recreational drugs); (xi) occupational radiation exposure; or (xii) any contraindication for MRI. All participants completed a medical evaluation, including history, physical examination and blood tests. Study design In this single-blinded, randomized, controlled, crossover study (Fig. 1a ), lean participants underwent three fMRI sessions on three separate study days to assess the effects of glucose, lipids and tap water on cerebral neuronal activity. Participants with obesity underwent three fMRI study days before the start of a 12-week hypocaloric diet intervention and three fMRI study days after completion of this diet intervention to assess the effect of diet-induced body weight loss. A stratified randomization scheme was used to determine the order of the infusions for the fMRI study days. During the three fMRI study days after the diet, the same nutrient infusion order was used as the one before the diet. In addition, all participants underwent two [ 123 I]IBZM SPECT study days to assess the post-ingestive nutrient effects of glucose and lipids on the striatal dopamine system. In lean participants, in random order, the effect of glucose was assessed during one SPECT study day and the effect of lipids was assessed during the other SPECT day. Due to radiation exposure, the number of SPECT study days a participant is allowed to undergo is limited to two. Therefore, in participants with obesity, the effect of either glucose ( n = 15) or lipids ( n = 15) on the striatal dopamine system was assessed both before and after the diet. All randomization and allocation procedures were performed using the GraphPad QuickCalcs website. Primary study outcomes were the effects of the intragastric nutrient infusions on cerebral neuronal activity and striatal dopamine release. Secondary study outcomes were the effects of the intragastric nutrient infusions on glucoregulatory and gut hormone release, and on objective and subjective hunger scores. The protocol was approved by the Academic Medical Center medical ethics committee. All participants provided written informed consent in accordance with the Declaration of Helsinki. The study was prospectively registered in the Netherlands Trial Registry (NTR7042). The trial was registered on 22 February 2018. One pilot fMRI was performed in one participant who was enrolled on 13 December 2017. The data of this one participant were only used to optimize the fMRI protocol and are not included in the paper. The data of participants included in this clinical trial and paper were collected from March 2018 onwards. Experimental procedures Anthropometric measurements Body weight and fat percentage were measured using a BOD POD (COSMED USA). REE was measured by indirect calorimetry (Vmax Encore 29, Carefusion). Daily energy expenditure was estimated as 1.3 × REE.
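As a worked example of the energy arithmetic above and of the diet described next (daily energy expenditure estimated as 1.3 × REE; a deficit sized to lose 10% of body weight over 12 weeks), consider the sketch below. The ~7,700 kcal per kg of body tissue is a common rule of thumb that the paper does not state, so it is an explicit assumption here.

```python
KCAL_PER_KG = 7700.0  # rule-of-thumb energy density of body tissue
                      # (assumption; not specified in the paper)

def daily_caloric_budget(ree_kcal, body_weight_kg, weeks=12,
                         target_loss=0.10):
    """Daily intake target: estimated expenditure (1.3 x REE) minus the
    average daily deficit needed to lose `target_loss` of body weight."""
    expenditure = 1.3 * ree_kcal
    deficit = target_loss * body_weight_kg * KCAL_PER_KG / (weeks * 7)
    return expenditure - deficit

# Hypothetical participant: REE 1,800 kcal/day, 100 kg body weight.
# Deficit ~917 kcal/day, giving a budget of ~1,423 kcal/day.
print(round(daily_caloric_budget(1800.0, 100.0)))
```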
Diet intervention After the baseline study days, participants with obesity enrolled in a supervised, personalized dietary weight loss programme aimed at reducing body weight by 10% over 12 weeks. The individual daily caloric budget was calculated on the basis of REE, estimated daily activity and the caloric deficit required to lose 10% of body weight in 12 weeks. The diet adhered to a minimum intake of 1 g protein per kg body weight and consisted of approximately 45% carbohydrates, 20% lipids and 35% proteins. Participants received weekly supervision by telephone and visited the clinic for consultation and measurement of body weight and REE after the first and second month of the diet. If weight loss deviated from an average of 0.8% per week, the diet was adjusted accordingly. Participants with <5% body weight loss at the end of the diet intervention were excluded from participation in the post-diet study procedures. Intragastric infusions To bypass the orosensory and preference-dependent effects of nutrient ingestion, participants received intragastric infusions via a nasogastric tube (Levin, 12FR). Participants were first familiarized with the tube placement during a training session. Then, during the fMRI and SPECT study days, the intragastric infusions consisted of 250 ml glucose 50% (125 g of glucose, 500 kcal), 250 ml Intralipid 20% (50 g of soybean oil, 500 kcal) or 250 ml tap water (noncaloric, isovolumetric control). The participants were blinded to the order of the infusions. fMRI Protocol The effects of the intragastric infusions of glucose, lipids and tap water on cerebral neuronal activity were assessed using fMRI on three separate study days (Fig. 1b). A pilot fMRI was performed in one participant who underwent nasogastric tube placement with infusion of lipids to assess feasibility and optimize the protocol; the data of this participant were not included in the analysis. Participants were instructed to consume the same dinner (40% of daily energy expenditure) the evening before each fMRI study day. After an overnight fast, participants came to the imaging facility of the Amsterdam UMC, location AMC, in the morning between 6:00 and 10:00 am (time of arrival was consistent for each participant). Thirty minutes before the intragastric infusion, the nasogastric tube was inserted. A cannula was inserted in the antecubital vein to enable repeated blood sampling during fMRI acquisition. An anatomical brain scan was acquired on the first fMRI study day. During the functional brain scans, baseline activity was measured for 8 min, after which the intragastric infusion of glucose, lipids or tap water was administered over 5 min. Imaging continued for another 27 min. Participants were instructed to lie as still as possible, to stay awake and to keep their eyes open. Blood was drawn at select time points during the functional brain scans. Participants were asked to rate their feeling of hunger on a visual analogue scale (VAS) ranging from 0 to 10 before the start of the fMRI and shortly after the fMRI was finished. Twenty minutes after the removal of the nasogastric tube and intravenous cannula, participants received a meal, consisting of a bowl of yoghurt (isocaloric vanilla or natural) mixed with muesli, and were asked to eat until satiated. The food was weighed before and after the meal, and caloric intake was calculated as the difference between the caloric content of the meal before and after eating. Acquisition Data were acquired using a 3.0T Philips MR Scanner (Philips Medical Systems) with a 32-channel receive-only head coil.
An anatomical T1-weighted scan was obtained with the following scan parameters: TR/TE = 7.0/3.2 ms; FOV = 256 × 240 × 180 mm; voxel size = 1 × 1 × 1 mm. fMRI was acquired using a gradient echo planar imaging (EPI) sequence with the following scan parameters: TR/TE = 1,700/35 ms; FOV = 216 × 216 × 124 mm; voxel size = 2.7 × 2.7 × 2.7 mm; 1,410 dynamics; MB factor = 2; SENSE = 1.7; scan duration = 40 min. Data processing and analysis fMRI data were preprocessed using FMRIPREP v1.2.3 (RRID: SCR_016216) 58,59. Anatomical T1-weighted scans were normalized to MNI space. Preprocessing of the functional scans included motion correction (FLIRT), distortion correction (3dQwarp) and co-registration to the anatomical T1-weighted scans. The functional scans were then nonaggressively denoised using independent component analysis-based Automatic Removal Of Motion Artifacts (AROMA) and spatially smoothed (6 mm FWHM). For details, see the Supplementary Information ('fMRI preprocessing'). Participant-level analysis was performed using the Functional Magnetic Resonance Imaging of the Brain (FMRIB) Software Library (FSL 6.0). The first three volumes from each functional scan were removed. The remaining 1,407 volumes were divided over 14 consecutive time bins: T0 (baseline, that is, the 5 min before the start of the intragastric infusion) and T1–T13 (each comprising a consecutive 2–2.5-min interval, with T1 beginning directly at the start of the intragastric infusion). First-level analysis was applied with the FMRI Expert Analysis Tool (FEAT) to compare T1–T13 with baseline (T0) within every functional brain scan 60. The time course of the BOLD signal of the cerebrospinal fluid of each scan was extracted and included as a covariate to adjust for general, non-infusion-related changes in BOLD signal. Next, for each participant, the maps of T1–T13 of the water infusion session were subtracted from the maps of T1–T13 of the glucose and lipid infusion sessions. For each participant, this resulted in 13 maps for the glucose infusion and 13 for the lipid infusion, reflecting the percentage change in BOLD signal from baseline (T0) for each time bin (T1–T13), corrected for the effects of the infusion of tap water. These values were used as input for the group-level analyses: the explorative whole-brain voxel-wise analysis and the ROI analysis. For the whole-brain voxel-wise analysis, clusters of grey-matter voxels showing a significant increase or decrease from baseline (T0) for each time bin (T1–T13) were identified using Threshold-Free Cluster Enhancement and permutation testing using Permutation Analysis of Linear Models v.alpha116 (ref. 61). Faster permutation inference was applied by fitting a generalized Pareto distribution to the tail of the permutation distribution (P value threshold 0.10) 62, with 5,000 permutations. The Permutation Analysis of Linear Models options --corrcon and --corrmod were applied to perform FWER correction for multiple testing over the multiple contrasts and time bins, respectively 63. A FWER-corrected P value (p_FWER) < 0.05 was considered significant. For clusters with a significant change in BOLD signal, the locations of the peak and up to five local maxima (minimum distance 20 mm) were interpreted using the Harvard–Oxford (sub)cortical atlas 64,65,66,67.
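To make the time-bin bookkeeping concrete, here is a minimal sketch of the percentage-BOLD-change maps and the water-session correction described above. It is an illustration under stated assumptions, not the study's pipeline: array names and the exact volume indexing are hypothetical, and the published analysis was run in FEAT with a cerebrospinal fluid covariate rather than with raw NumPy arithmetic.

import numpy as np

TR = 1.7                                    # s per volume (EPI sequence above)
INFUSION_START = round(8 * 60 / TR)         # infusion begins ~8 min into the scan
T0 = slice(INFUSION_START - round(5 * 60 / TR), INFUSION_START)  # 5 min pre-infusion

def timebin_maps(bold: np.ndarray, n_bins: int = 13) -> np.ndarray:
    """bold: (n_volumes, n_voxels) array after discarding the first 3 volumes.
    Returns (13, n_voxels) percentage change from baseline (T0) for T1-T13."""
    baseline = bold[T0].mean(axis=0)
    post = np.array_split(bold[INFUSION_START:], n_bins)   # ~2.4 min per bin
    return np.stack([100.0 * (b.mean(axis=0) - baseline) / baseline for b in post])

# Water-corrected nutrient response for one participant, as described above:
# subtract the water-session maps from the glucose (or lipid) session maps.
def water_corrected(nutrient_bold: np.ndarray, water_bold: np.ndarray) -> np.ndarray:
    return timebin_maps(nutrient_bold) - timebin_maps(water_bold)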
As within-participant designs are more powerful than between-participant designs 68, this analysis was performed to assess the within-group effects of intragastric glucose and lipids separately in the lean participants and in the participants with obesity in the pre-diet condition. To evaluate the effect of the diet intervention, the voxel-wise analysis was performed on the post-diet data after subtraction of the pre-diet data for each participant. For the ROI analysis, the effects of the nutrient infusions on the striatal subregions (NAc, caudate nucleus and putamen) were assessed. Masks of these regions were obtained from the Harvard–Oxford subcortical atlas 64,65,66,67 with a threshold of 30%. The mean change (percentage) in BOLD signal from baseline (T0) was extracted for each time bin (T1–T13) for each ROI. To limit the number of comparisons, the significance of the change from baseline was evaluated only at T3 (pre-absorption), T8 (early absorption) and T12 (late absorption), by one-sample t-tests in lean participants and in participants with obesity pre-diet. In addition, the effect of the diet intervention was evaluated by comparing the change from baseline for these time bins (T3, T8 and T12) between the pre-diet and post-diet conditions by paired t-tests. SPECT Protocol The effects of the intragastric infusions of glucose and lipids on the striatal dopamine system were assessed using SPECT imaging with the radiotracer [123I]IBZM (produced in accordance with GMP guidelines; GE Healthcare). [123I]IBZM binds to the D2/3R and competes with intrasynaptic dopamine for binding to the D2/3R. An acute decrease in the binding of [123I]IBZM in the striatum indicates dopamine release 30. After an overnight fast, participants came to the nuclear imaging facility of the Amsterdam UMC, location AMC, between 8:00 and 9:00 am. The nasogastric tube was placed in accordance with the procedure of the training session. Via a cannula in the antecubital vein, an [123I]IBZM bolus of 64 MBq was administered, and a continuous infusion of 16 MBq h⁻¹ was started for the duration of the experiment (5 h). The first SPECT scan started 2 h after the start of the [123I]IBZM infusion and 45 min after the intragastric infusion of tap water (noncaloric, isovolumetric control). The second SPECT scan started 4 h after the start of the [123I]IBZM infusion and 45 min after the intragastric infusion of either glucose or lipids (Fig. 1c). A decrease in striatal [123I]IBZM binding between the first and second SPECT scans is an indication of nutrient-induced striatal dopamine release. To limit thyroid uptake of free radioactive iodide, all participants were pretreated with potassium iodide. Acquisition SPECT imaging was performed using the InSPira HD system, a brain-dedicated SPECT camera (Neurologica), with the following parameters: matrix = 128 × 128; energy window = 135–190 keV; slice thickness = 4 mm; acquisition time per slice = 4 min; axial slices were acquired upward from and parallel to the orbitomeatal line until the whole striatum was covered. Data processing and analysis SPECT images were reconstructed with an iterative expectation maximization algorithm and corrected for attenuation by manually aligning an adult head template 69. Binding of the radiotracer (that is, D2/3R availability) was quantified for the striatum by an ROI analysis. Freesurfer (version 5.3.0) was used to obtain striatal masks from individual T1-weighted MRI scans.
The occipital cortex was used as the reference region to quantify non-specific radiotracer activity. Occipital cortex masks were obtained by warping the occipital cortex from the Harvard–Oxford cortical atlas to the individual T1-weighted MRI using FSL. For the striatum, D2/3R availability (that is, the non-displaceable binding potential (BP_ND) of [123I]IBZM) was calculated as the specific-to-non-specific binding ratio: striatal BP_ND = (mean striatal binding − mean occipital cortex binding)/mean occipital cortex binding. Nutrient-induced changes in striatal D2/3R availability were defined as BP_ND after the nutrient infusion (second SPECT scan) relative to BP_ND after the tap water infusion (first SPECT scan). Registration of adverse events (Serious) adverse events were registered during the study. There were no serious adverse events; three adverse events occurred. Two participants experienced nausea after nasogastric tube placement and nutrient infusion (one lean participant and one participant with obesity), and one participant with obesity had transient worsening of pre-existent tinnitus. All adverse events resolved. The MRI scans of these participants were excluded from the analyses; these missing data are described in the Supplementary Information ('missing data'). Plasma nutrient and hormone measurements During the fMRI session, blood was sampled at baseline and at 5, 10, 15, 20 and 30 min after the start of the intragastric infusion. Samples were centrifuged at 4 °C and stored at −80 °C. At all time points, plasma glucose was determined with the glucose oxidase method using a Biosen C-line plus glucose analyser (EKF Diagnostics). At baseline, t = 15 and t = 30, plasma insulin concentrations were determined by immunoassay (Luminescence, Atellica IM, Siemens Medical Solutions Diagnostics) with an intra-assay variation of 3%, an inter-assay variation of 7% and a lower limit of quantitation of 10 pmol l⁻¹. 4-(2-aminoethyl) benzenesulfonyl fluoride hydrochloride (AEBSF; Pefabloc SC; Roche) was added to the EDTA tubes at 2 mg ml⁻¹ to prevent breakdown of acylated ghrelin. At baseline and t = 30, concentrations of plasma acylated ghrelin were determined by immunoassay (SPI-Bio A05106, SPI-Bio) with an intra-assay and inter-assay variation of 6% and a lower limit of quantification of 4 pg l⁻¹. At baseline, t = 15 and t = 30, concentrations of plasma total GLP-1 were determined by radioimmunoassay (Merck Millipore) with an intra-assay and inter-assay variation of 9% and a lower limit of quantitation of 5 pmol l⁻¹. The timing of the measurements was chosen according to the known effects of nutrients on glucose, insulin and gut hormones 19,70,71. Statistical analyses Within-group post-ingestive nutrient-induced changes from baseline were evaluated by paired or one-sample t-tests. Between-group differences were evaluated by t-test, Mann–Whitney U test or chi-square test, or by two-way mixed ANOVA. The effects of the diet intervention were evaluated by paired t-tests or Wilcoxon signed-rank tests, or by two-way repeated-measures ANOVA. BOLD signal time bins and time points of the glucose/hormone measurements were evaluated by calculating AUC and iAUC values using the trapezoidal method. Correlations between BOLD signal AUC and glucose/hormone iAUC were evaluated using Pearson's correlation coefficient. Findings were considered significant if P < 0.05. Assumptions of the statistical tests were met, and normality and equal variances were tested.
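As a concrete reading of two quantities defined above, the sketch below implements the BP_ND ratio, its nutrient-induced change and a trapezoidal iAUC. Expressing the BP_ND change as a percentage is an assumption (the text says only "relative to" the water scan), and all numeric values are illustrative.

import numpy as np

def bp_nd(striatal_mean: float, occipital_mean: float) -> float:
    """Non-displaceable binding potential of [123I]IBZM, as defined above:
    (mean striatal binding - mean occipital binding) / mean occipital binding."""
    return (striatal_mean - occipital_mean) / occipital_mean

def nutrient_induced_change(bp_nutrient: float, bp_water: float) -> float:
    """Change in striatal D2/3R availability: second (nutrient) SPECT scan
    relative to the first (water) scan; a decrease indicates dopamine release.
    Percent change is one plausible reading of 'relative to' in the text."""
    return 100.0 * (bp_nutrient - bp_water) / bp_water

def iauc(t_min: np.ndarray, conc: np.ndarray) -> float:
    """Incremental AUC by the trapezoidal method: area above baseline (t = 0)."""
    y = conc - conc[0]
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(t_min)))

# Illustrative use with the sampling grid above (0, 5, 10, 15, 20, 30 min):
t = np.array([0.0, 5.0, 10.0, 15.0, 20.0, 30.0])
glucose = np.array([5.0, 5.6, 6.9, 7.8, 8.1, 7.2])   # mmol/l, made-up values
print(iauc(t, glucose))                              # iAUC in mmol/l x min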
Handling of missing data is described in the Supplementary Information. No statistical methods were used to predetermine sample sizes, but our sample sizes are similar to those reported to be sufficient for within-subject comparisons in previous publications 72. Statistical analyses were performed using IBM SPSS Statistics v26 and R v3.6.1. Reporting summary Further information on research design is available in the Nature Portfolio Reporting Summary linked to this article. Data availability Free access to individual data is restricted owing to ethical/legal concerns. However, upon request, the data may be made available for scientific collaborations after the execution of appropriate data-sharing agreements and after review and approval of requests by the medical ethics committee, participants and investigators, in line with existing local/national regulations and data-sharing agreements. Requests can be sent to the corresponding author. A first response to requests will follow within 4 weeks. Source data are provided with this paper. Code availability Computer code used for the data analyses will be published in a repository on GitHub.
Brain responses to specific nutrients are diminished in individuals with obesity and are not improved after weight loss, according to a study led by Amsterdam UMC and Yale University, published today in Nature Metabolism. "Our findings suggest that long-lasting brain adaptations occur in individuals with obesity, which could affect eating behavior. We found that those with obesity released less dopamine in an area of the brain important for the motivational aspect of food intake compared to people with a healthy bodyweight. Dopamine is involved in the rewarding feelings of food intake," says Mireille Serlie, lead researcher and Professor of Endocrinology at Amsterdam UMC. "The subjects with obesity also showed reduced responsivity in brain activity upon infusion of nutrients into the stomach. Overall, these findings suggest that sensing of nutrients in the stomach and gut and/or of nutritional signals is reduced in obesity and this might have profound consequences for food intake." Food intake depends on the integration of complex metabolic and neuronal signals between the brain and several organs, including the gut, as well as nutritional signals in the blood. This network triggers sensations of hunger and satiation and regulates food intake as well as the motivation to look for food. While these processes are increasingly well understood in animals, including in the context of metabolic diseases such as obesity, much less is known about what happens in humans, partly because of the difficulty of designing experimental setups in the clinic that could shed light on these mechanisms. In order to address this lack of knowledge, Serlie, who is also a professor at Yale, and colleagues from both institutions designed a controlled trial. The trial consisted of infusing specific nutrients directly into the stomach of 30 participants with a healthy bodyweight and 30 individuals with obesity, while simultaneously measuring their brain activity with MRI and dopamine release with SPECT scans. While the participants with a healthy bodyweight displayed specific patterns of brain activity and dopamine release after nutrient infusion, these responses were severely blunted in participants with obesity. Moreover, 10% body weight loss (following a 12-week diet) was not sufficient to restore these brain responses in individuals with obesity, suggesting that long-lasting brain adaptations occur in the context of obesity and remain even after weight loss is achieved. "The fact that these responses in the brain are not restored after weight loss may explain why most people regain weight after initially successful weight loss," concludes Serlie.
10.1038/s42255-023-00816-9
Biology
Study shows human ancestors could have consumed hard plant tissues without damaging their teeth
Scientific Reports (2020). DOI: 10.1038/s41598-019-57403-w Journal information: Scientific Reports
http://dx.doi.org/10.1038/s41598-019-57403-w
https://phys.org/news/2020-01-human-ancestors-consumed-hard-tissues.html
Abstract Reconstructing diet is critical to understanding hominin adaptations. Isotopic and functional morphological analyses of early hominins are compatible with consumption of hard foods, such as mechanically-protected seeds, but dental microwear analyses are not. The protective shells surrounding seeds are thought to induce complex enamel surface textures characterized by heavy pitting, but these are absent on the teeth of most early hominins. Here we report nanowear experiments showing that the hardest woody shells – the hardest tissues made by dicotyledonous plants – cause very minor damage to enamel but are themselves heavily abraded (worn) in the process. Thus, hard plant tissues do not regularly create pits on enamel surfaces despite high forces clearly being associated with their oral processing. We conclude that hard plant tissues barely influence microwear textures and that the exploitation of seeds from graminoid plants such as grasses and sedges could have formed a critical element in the dietary ecology of hominins. Introduction Early hominin craniodental morphologies, evolving before cooking or sophisticated extra-oral food processing, represent adaptations to diet, but profound disagreement persists about the specific foods that drove evolutionary change. Isotopic evidence demonstrates that, starting in the mid-Pliocene (circa 3.5 million years ago) and continuing into the Pleistocene, the composition of hominin diets broadened. In most hominin species, it shifted over this period from consumption almost exclusively of C3 vegetation (circa 85% of diet) to encompassing a moderate-to-large proportion of C4 plant material (35–77% of diet) 1,2. Although isotopic evidence indicates the photosynthetic pathway of carbon fixation, these data do not directly indicate the exact dietary source of such a signal. Enriched carbon implies early hominins ate either C4 grasses, sedges or the animals that consumed these same graminoid plants 1,3. However, predictions of what plant part may have contributed to such a signal vary: some authors suggest leaves 1,3, while others focus on energy-rich plant storage organs such as corms or bulbs 4. Here we advocate the case for seeds. These various plant parts differ in their mechanical properties and thus promote contrasting selection pressures on tooth morphology. Mechanical analyses of australopith teeth and jaws indicate that they were capable of generating high bite forces 5,6, with their very thick tooth enamel both strengthening their teeth and prolonging their functional life 7,8,9. In particular, the low blunt cusps of australopith molars would be more resistant to fracture against hard foods, as exemplified by the woody casings of 'mechanically protected' plant embryos (Fig. 1) 10. Whether this casing is derived from the seed integuments or the fruit endocarp, we call this a 'mechanically-protected seed' here. This feeding association contrasts with primates that typically have long sharp crests on their teeth as adaptations to eat tough compliant foods like leaves 7,8,11,12. However, the morphological signal in australopiths seems to be at odds with the microwear signal conveyed in the surface texture of wear facets in hominin teeth. Figure 1 A schematic drawing of seeds mechanically protected by lignified woody tissue. (a) Large seeds of some dicotyledonous plants are protected by a woody seed shell. (b) Even small seeds of monocotyledons have lignified pericarps protecting the seed within.
Conventional interpretations of dental microwear in primates suggest that a diet consisting of a large proportion of hard objects would produce a surface texture with high complexity. Complexity is essentially a measure of surface roughness, with wear facets demonstrating high complexity often associated with deep and elaborate scars 13. Although there is variation in the microwear signals of Plio-Pleistocene hominins, in general the wear facets of most early hominin teeth exhibit low to moderate complexity 1,14. One notable exception to this trend is Paranthropus robustus, which in some cases exhibits high surface texture complexity 1. However, most australopiths tend to exhibit light surface striations that, in several species, are not strongly oriented in parallel 1. This lack of surface texture complexity is more in keeping with what one would expect from a primate that eats a considerable amount of tough compliant material, such as leaves, although extant primate folivores tend to exhibit surface textures with parallel oriented striations 15. The apparent mismatch between morphology and microwear has fuelled a continual and at times heated debate about early hominin diets. Nowhere is this disparity more salient than for Paranthropus boisei, whose highly derived, robust morphology has earned it the epithet "nutcracker man", stemming from a predicted diet laden with hard objects 5. Yet microwear studies have indicated that this same species did not routinely eat hard objects and that its dentition was used for the processing of softer, tougher foods 3. Resolution of these radically different interpretations requires an evaluation of the mechanics of microwear formation. Currently, there are little to no experimental data on sliding contacts between particles of woody plant material, such as pieces of seed shell, and enamel. Such particles represent the hardest plant tissues and, based on mechanical models of wear 16, should not impart much damage to teeth. However, if particles of lignified plant tissue are unable to produce deep or elaborate scars on enamel, it seems unlikely that feeding on dietary items such as mechanically protected seeds would produce the complex surface textures predicted by traditional interpretations of dental microwear. It is plausible, then, that the presence or absence of complex surface textures measured in microwear analysis of tooth facets may not directly reflect the consumption of hard foods, but instead echo levels and types of dietary abrasives 17. Here we present data from nanowear experiments investigating the interaction between heavily lignified plant tissue and enamel. We demonstrate that although the densest woody tissue can mark enamel surfaces, it cannot produce deep elaborate features on them. Further, by combining these data with compressive tests of some of the smallest hard C4 seeds, we show that high forces can be generated during the oral milling of large quantities of them. Such orofacial forces may have provided a selective pressure driving the evolution of robust craniodental morphology in early hominins. Results Sliding experiments We performed single-slide nanowear experiments 16 (total slides n = 16), making contacts between fragments of three woody seed shells (Fig. 2a–c) present in primate diets (Elaeis guineensis, Arecaceae; Sacoglottis gabonensis, Humiriaceae; Mezzettia parviflora, Annonaceae) and enamel at forces between 0.4 and 1.2 mN, varied at 0.2 mN intervals.
The nanohardness measured for these woody seed shells is typical of other protective endocarps and the pericarp of grasses and sedges, but very high in relation to plant tissues generally 12. However, these shells are an order of magnitude lower in hardness than either dental enamel or phytoliths (Table 1). Figure 2 AFM topography traces of a tooth surface (shading indicates depth in nm) around detectable damage following sliding contacts against seed shell fragments. (a) Elaeis guineensis; the accompanying graph is a 3D longitudinal profile of the enamel mark, which is just 5 nm deep with a length of ~15 µm. (b) Sacoglottis gabonensis, with accompanying cross-sectional profile. This was the most pronounced mark recorded during the experiments. (c) Mezzettia parviflora, where the accompanying longitudinal profile highlights deposits of material in the damage zone. (d) Using the AFM as a nanomechanical force microscope, the deposit in c is shown to have the elastic modulus of the seed shell of M. parviflora. (e) SEM micrograph of a piece of Sacoglottis gabonensis seed shell on the end of a flat-head indenter, post-test. (f) Energy dispersive spectroscopy (EDS) maps of this shell fragment, post-test; small amounts of calcium are present. (g) Example of the extremely small enamel chips found adhering to the woody tissue. Table 1 Comparison of mechanical properties pertinent to wear of woody plant tissue versus tooth enamel, phytoliths and quartz grit. There was no evidence of large pits or scratches/fractures of the enamel (Fig. 2a–c), such as produced by some extraneous grit/dust particles 16. There was also no evidence on the enamel of the 'prow' produced by contact with phytoliths 16. Shallow grooves in the enamel, maximally 150 nm deep and less than a micron wide, were sometimes observed (31% of contact events). One groove was observed for each of E. guineensis and S. gabonensis, and three grooves for M. parviflora. These markings were similar in length to the slide displacement and in the same direction, and thus were undoubtedly caused by the sliding contacts (Fig. 2a–c). They were much less pronounced than marks produced by phytoliths in similar sliding experiments 16 and would barely register in dental microwear texture analyses as conventionally performed 15. We expect that even dozens of these marks on a standard-scale dental microwear surface would manifest as gentle ripples rather than a highly complex texture. In experiments performed on M. parviflora seed shell, mounds of material were deposited on the enamel (Fig. 2c) that were large enough to investigate. Using an atomic force microscope (AFM) as a bimodal nanomechanical force microscope 18, the elastic modulus of these deposits was shown to be ≈11 GPa (Fig. 2d), similar to seed shell values 19 and much lower than that of enamel (median 75 GPa). Thus, rather than woody plant material abrading enamel, the converse occurs, with enamel escaping relatively unscathed. Results from the EDS mapping of the piece of seed shell after the sliding experiment did reveal very small (submicron) enamel chips on its exterior (Fig. 2f,g). Technically, enamel tissue loss of this kind is defined mechanically as abrasion, but the scale of tissue removal was much smaller than that caused by mineral particles of similar size 16.
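One hedged way to see why woody tissue rubs rather than abrades enamel is the textbook tribological hardness criterion: a particle can cut a surface only if it is appreciably harder than that surface, with a ratio of roughly 1.2 often quoted as the rule of thumb. The sketch below applies that criterion using order-of-magnitude hardness values assumed for illustration only (the measured comparisons are in Table 1), and it deliberately ignores the attack-angle condition discussed below.

# Illustrative nanohardness values in GPa (assumed, order-of-magnitude only;
# see Table 1 of the paper for the measured comparisons).
HARDNESS_GPA = {
    "quartz grit": 12.0,
    "tooth enamel": 4.5,
    "phytolith": 3.0,
    "woody seed shell": 0.3,
}

def predicted_wear_mode(particle: str, surface: str = "tooth enamel",
                        hardness_ratio: float = 1.2) -> str:
    """Rule-of-thumb classifier: a particle abrades (cuts) a surface only when
    substantially harder than it; otherwise it can only rub (plastic rearrangement)."""
    if HARDNESS_GPA[particle] > hardness_ratio * HARDNESS_GPA[surface]:
        return "abrasion possible"
    return "rubbing only"

for p in ("quartz grit", "phytolith", "woody seed shell"):
    print(p, "->", predicted_wear_mode(p))
# quartz grit -> abrasion possible; both plant-derived particles -> rubbing only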
Seed compression and CT scanning We compressed samples of sedge nutlets ('nutlet' being the formal botanical term for the fertilized fruit of sedges), showing that high forces can be reached during mastication of even a small number of them (Fig. 3a). Initial fracture of their pericarp starts at low force (possibly lower still with an additional lateral force), yet persists as further loading opens the pericarp completely to facilitate chemical access to the nutritious interior (Fig. 3b). The enclosed endosperm densifies as the load increases, and the gradient of the force-displacement curve rises dramatically. Simple mechanics predict that these forces rise with an increase in the number of nutlets processed. Natural variation in individual nutlets complicates experimental results, but in our experiments the trial with the greatest number of nutlets registered the highest forces. Figure 3 Results from sedge nutlet compression tests. (a) Force-displacement plots for compression of differing numbers of nutlets of a sedge, Carex monostachya. Initial fracture occurs at a small force, but further compression releases the endosperm at far higher forces, a pattern amplified as more nutlets are compressed. (b) Images demonstrating that the catastrophic nutlet failure needed to access the nutrient-rich tissues is a function of both force magnitude and the number of nutlets. High forces are needed for nutlet densification, yet for any given force some nutlets may not fail catastrophically if many nutlets are consumed at once. This implies that processing many nutlets at once requires high forces or repetitive loading. Micro-CT scans of a Carex monostachya nutlet revealed that it is populated by numerous phytoliths (Fig. 4 and Video S1, green flecks). When segmented, it was apparent that phytoliths are found abundantly in the outer pericarp, but some are also located on the inner endosperm. Figure 4 Micro-CT scans of a Carex monostachya nutlet. (a) A transverse slice and (b) a longitudinal slice through the nutlet; phytoliths show up as green flecks. When segmented (c), large numbers of phytoliths are seen in both the seed coat and the outer regions of the endosperm, highlighting their position and distribution within the nutlet. Discussion Our experiments demonstrate that contact between woody tissue and teeth did not directly produce the deep and substantial pitting that leads to complex microwear textures on primate teeth. Such limited damage is consistent with mechanical models of tooth wear, which predict that when sliding contacts between a "hard" particle and a tooth surface occur, there can be two main resulting actions: "rubbing" or "abrasion" (also sometimes termed "cutting") 16. During rubbing, no material is instantly lost during contact between a surface and a sliding particle. Instead, the material on the surface is merely rearranged, producing a shallow groove with noticeable pile-up of displaced material at the edges. In contrast to rubbing, abrasive actions, defined here as the removal of dental material in a single tribological event via cutting or chipping from the surface, produce a deep v-shaped scratch mark 16,20. Whether sliding contacts between a particle and a surface produce rubbing or abrasion is dictated by the particle's mechanical properties and the critical angle of attack.
If a particle has sufficient hardness, and the attack angle is above the critical angle (dictated by the toughness of the surface), then material will be removed, leaving behind a scar in the form of an irregular pit or an angular scratch. If, however, the particle is of lower hardness, and/or the angle of attack is below the critical value, then material will be plastically rearranged 16, leaving behind a furrow with a smooth cross-section. Previous experiments with enamel have shown that hard geologically derived particles like quartz can easily produce substantial abrasive scratches on teeth. However, small "hard" plant-derived particles like phytoliths lack the mechanical hardness to produce abrasive marks and instead only rub enamel. This produces visible damage to the surface but does not instantly remove material from the tooth 16. Whilst repeated rubbing might cause eventual material loss, this will be at a much-reduced rate compared with harder particles like quartz. Lignified plant tissue, being considerably softer than either quartz or phytoliths (Table 1), would by mechanical prediction impart limited damage to enamel. Our experiments confirmed this at the scale of microwear, with such contacts generally producing no large identifiable marks on enamel. We do not doubt that occasional contacts between seeds and irregular spicules of enamel could result in the latter being fractured, but over time this process should result in a decrease, rather than an increase, in texture complexity. As a generalization, seeds cannot be the source of the complex textures conventionally attributed to hard-object feeding. It has been proposed 3 that food material properties may indirectly influence microwear patterns because of how the jaw movements of a primate are modulated when eating hard versus tough, softer foods. According to this kinematic hypothesis, as hard food items are compressed between teeth moving vertically towards each other, hard particles are driven vertically into the occlusal surface to produce pits. Similarly, as tough, softer foods are trapped between tooth surfaces that are sliding past each other during large transverse jaw displacements, hard particles should be dragged across the tooth surface, producing linear scratches aligned in the direction of the jaw movement. Mechanical experiments in which grit is processed by flat tooth surfaces that either slide transversely across or compress vertically towards each other seem to corroborate this hypothesis 21. However, jaw movements in living primates do not vary so much as to be purely transverse or purely vertical. Moreover, in vivo chewing experiments on humans and capuchin monkeys show that the consumption of hard foods is associated with greater transverse jaw movements than the eating of tough, softer foods 22,23. These data thus contradict a key premise of this kinematic hypothesis. Further, this hypothesis relies on the assumption that any "hard" particle can cause substantive abrasive damage to enamel. The results we present here confound such assumptions by demonstrating that some of the hardest plant foods are incapable of producing the characteristics of complex surface textures in enamel. We favor an alternative model of interpreting microwear 17.
Namely, mastication of thin, film-like tissues (like leaves or grass blades) ought to produce microgrooves aligned in the same direction, because phytoliths or grit particles will contact tooth surfaces only as opposing dental contact facets slide past one another (which can happen without large jaw excursions). In contrast, mastication of thicker or isodiametric tissues may produce irregular contacts between particle and occlusal surface as the food is rolled between the teeth, or as the food tissues are laterally displaced while the food item is vertically compressed. If mineral grit is present during these contacts, then complex pitting might ensue. If grit is absent but phytoliths are present, then rubbing marks might be produced, but the features would not be aligned. Thus, contra conventional wisdom, microwear analysis of surface textures may not provide direct evidence about food material properties, but rather inform on interactions between particle shapes and sizes in the mouth, as well as the relative proportions of hard, angular abrasive particles (such as quartz and silicates) versus phytoliths that produce initially non-abrasive surface yielding on enamel. If lignified plant tissues cannot severely damage enamel at the microwear scale, challenging conventional interpretations of dental microwear, then the lack of complexity in enamel surface textures no longer rules out hard-object feeding as a significant component of australopith diets. Grass or sedge seed consumption is consistent with the moderate-to-high C4 isotopic signal recovered from the teeth of many australopiths, since many African grasses and sedges are C4 plants. Ecological considerations suggest that such seeds could have served as an important, seasonally available food source capable of contributing substantially to the energetic needs of a large-bodied hominin. Previous research into African tropical grasslands suggests that seed production would be seasonal, usually occurring around 3 months after the onset of rain and persisting until the end of a rainy season, delivering a productive period that may span 4–5 months 24. The reproductive effort of grass is linked to rainfall. Grass seed production varies widely between plant species, but in general a crop of 10³–10⁴ seeds/m² has been proposed for tropical grasslands 24. The mass of a grass seed varies greatly and is dependent on species; however, a mean seed weight of 0.00037 g (dry wt. basis) can be calculated from 10 African grass species 24. Taking this as typical, 1 m² of tropical grassland could produce between 0.37 and 3.7 g of seed. Grass seeds are considered energy-rich, with 1 kg of grass seed proposed to deliver an estimated 15 MJ of energy 25, more than enough to support a large-bodied ape and even a modern human 26,27. This being so (assuming a daily energy budget for a pre-Homo hominin of circa 6.3 MJ; ref. 27), a patch of tropical grassland potentially as small as 135 m² could provide enough energy to sustain such a hominin daily (a back-of-envelope check of this estimate is sketched below). There is no living analogue for the diet that we presume australopiths may have been consuming, but the behavior of geladas 28 and yellow baboons 29 shows some similarities, the latter consuming the seeds of two C4 grasses that we have studied.
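A back-of-envelope check of the foraging-area estimate above, using only the figures quoted in the text; the rounding of the daily seed requirement up to ~0.5 kg is an assumption introduced here to reproduce the stated 135 m².

daily_budget_mj = 6.3        # assumed daily energy budget of a pre-Homo hominin (ref. 27)
seed_energy_mj_kg = 15.0     # proposed energy yield of grass seed (ref. 25)
seed_mass_g = 0.00037        # mean dry mass of one grass seed (ref. 24)

seed_needed_kg = daily_budget_mj / seed_energy_mj_kg       # ~0.42 kg of seed per day
for seeds_per_m2 in (1e3, 1e4):                            # proposed grassland crop
    yield_g_m2 = seeds_per_m2 * seed_mass_g                # 0.37-3.7 g of seed per m2
    print(f"{yield_g_m2:.2f} g/m2 -> {1000 * seed_needed_kg / yield_g_m2:.0f} m2/day")
# Roughly 114 m2 at the highest seed density; rounding the daily seed
# requirement up to 0.5 kg gives 500 / 3.7 = ~135 m2, the figure quoted above.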
Yet it is clear that the consumption of large amounts of grass and sedge seeds would require both high-magnitude and highly repetitive loading to break the protective woody exteriors, while daily foraging could be achieved with quite limited ranging. Grass and sedge seed consumption is also consistent with cusp and tooth crown morphology. Still, one might ask whether the consumption of such small seeds would require adaptations to produce high bite forces. Compression tests of sedge nutlets indicate that high forces can be reached during mastication of even a small number of them (Fig. 3a). A large-bodied hominin would be required to orally mill large amounts of sedge and grass seeds to fulfil its daily energetic needs, meaning that the consumption of small mechanically protected seeds in large numbers should reasonably require high-magnitude repetitive forces. This could impart large, continuous stresses to the molar teeth, requiring thickened enamel to resist fractures and maintain functional efficiency for as long as possible. Moreover, phytoliths in the casings of some such seeds (Fig. 4) could explain the presence of light grooves on the dental microwear textures of many early hominins, and the lack of alignment of such grooves could be explained by the transverse displacement of parts of the pericarp away from the seed centroid (in a manner analogous to the lateral displacement of the sides of a solid as it is axially compressed) as the endosperm is densified. Phytoliths in the pericarp would therefore be moving in all directions parallel to the occlusal surface, resulting in relatively isotropic microwear textures. Our analyses show that the hardest plant tissues produce markings on enamel surfaces that cannot be directly responsible for pitting and surface complexity; at most, these tissues produce light rubbing marks. We further show that small mechanically protected grass and sedge seeds can require high forces to process orally. We conclude that consumption of grass and sedge seeds is compatible with the available data on australopith diets and feeding adaptations and hypothesize that such foods were a selectively important component of early hominin diets. Such selection pressures were effectively side-stepped with sophisticated extra-oral processing and cooking practices in later hominins, but prior to this, small-object feeding, once thought to be a driver of hominin adaptations 30, seems entirely plausible. Materials and Methods Nanowear experiments and imaging The enamel sample was taken from a museum specimen of a Bornean orangutan (Pongo pygmaeus) molar tooth. This molar was donated to PWL by the Raffles Museum of Biodiversity Research in Singapore (permission granted by its then director, Mrs Yang Chang Man); it is the same specimen as used in previous nanowear studies 16. The molar was sectioned longitudinally, and the enamel surface was polished down to a 20 nm r.m.s. surface roughness using colloidal silica between each seed shell experiment. The tooth enamel was not fresh and was not maintained in a hydrated state for this experiment. Recent research has indicated that dehydrating enamel reduces the tissue's ability to resist fracture 29. Therefore, our experiments represent a conservative estimate of the conditions needed to induce mechanical damage in enamel, as fresh, hydrated enamel will likely be more durable. The nanohardness of both the enamel and sections of the three seed shells was measured with a Berkovich tip (Hysitron Ubi1, Minneapolis, MN, USA).
Given the irregular shapes of the seed shell fragments, to ensure that scratch damage could be located, a 'landing strip' was created on the enamel surface by indenting a Berkovich diamond tip into the enamel at 8 mN to produce two parallel lines: four indents on one side, roughly 20 µm apart, and three indents on the other, with an 80 µm wide landing strip between them. Searches for marks were made within these strip boundaries. This working area was located not on occlusal enamel but between the occlusal surface and the enamel-dentine junction (EDJ). Although the mechanical properties of dental enamel have been shown to vary in some species from the EDJ to the occlusal surface, in Pongo this difference is limited 31. Additionally, if the initial occlusal surface were somehow adapted to be more wear resistant, then by conducting tests deep to this surface we again ensure that our results are a conservative estimate of the conditions needed to induce mechanical damage in enamel with plant material. Fragments of woody seed shell were made by pressing a large seed section against a serrated blade. From the debris, an appropriately sized particle was chosen for each plant species and fixed to a custom-manufactured flat-headed titanium tip using cyanoacrylate glue (Fig. 2e). Light microscopy was used to verify that the seed shell fragment was properly attached and free from adhesive on the contacting surface. The tip and affixed particle were then placed into the nanoindenter for the sliding experiments to be performed. Contact between particle and enamel consisted of a lateral displacement of 10–15 µm at fixed vertical forces between 0.4 and 1.2 mN, increasing in 0.2 mN intervals. These forces were chosen as they correspond to previous experiments on microwear formation 16, in which it was shown that other dietary abrasives (grit, phytoliths and enamel chips) could inflict markings on enamel surfaces. Such minute forces are far below maximum bite forces for any primate, so marks produced in this experimentation could be reproduced in almost any masticatory action. After each particle of seed shell was slid at the various forces, the landing strip was searched for evidence of damage using an atomic force microscope (5500 AFM, Agilent, Santa Clara, CA, USA) in tapping mode in sequential 80 × 80 µm scans. When damage was identified, higher-magnification scans of the area of interest were generated, allowing high-resolution images and 3D profiles of scratch zones to be analyzed. When there was clear evidence of debris within the strip, the elastic modulus of this debris was determined by an AFM (MFP-3D, Asylum, Oxford Scientific, UK) configured as a bimodal nanomechanical force microscope 18. The elastic moduli of both debris and enamel surface, standardized to soda lime glass (E = 70 GPa), were recorded. Data scatter reflects, in part, the hierarchical composite nature of these materials. EDS mapping Both pre- and post-test, the elemental composition of each seed shell fragment tip was mapped using energy dispersive spectroscopy (EDS, Oxford Instruments, Abingdon, UK) attached to a scanning electron microscope (SEM, Jeol 7001F, Tokyo, Japan), a combination that allows high-resolution imaging and elemental mapping. When calcium was present in post-test scans, it was possible to correlate the calcium signal with the submicron chips on the fragment's exterior, confirming their identity as enamel.
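For concreteness, the sliding-experiment parameter grid described above can be enumerated as follows; this is a trivial bookkeeping sketch (the full grid yields 15 shell-force combinations, while the Results report n = 16 slides; which combination was repeated is not specified in the text).

from itertools import product

seed_shells = ("Elaeis guineensis", "Sacoglottis gabonensis", "Mezzettia parviflora")
forces_mN = [round(0.4 + 0.2 * i, 1) for i in range(5)]   # 0.4, 0.6, 0.8, 1.0, 1.2 mN
lateral_displacement_um = (10, 15)                        # slide length range per contact

slides = list(product(seed_shells, forces_mN))
print(len(slides), "shell-force combinations")            # 15 (n = 16 slides reported)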
Sedge nutlet compression and CT scanning Small sedge nutlets were compressed between flat plates at speeds of 1–2 mm min⁻¹, and the resultant forces and displacements were recorded by a materials tester (FLS-1 tester, Lucas Scientific, New York, USA) fitted with a 2 kN load cell. CT images of intact seeds were made on a GE Phoenix Nanotom M (Wunstorf, Germany). Scans were displayed at a resolution of 0.93 µm/voxel using a voltage of 100 kV at a current of 260 µA. Total scan time was 250 min. Data availability The data that support the findings of this study are available from the corresponding author upon reasonable request.
Go ahead, take a big bite. Hard plant foods may have made up a larger part of early human ancestors' diet than currently presumed, according to a new experimental study of modern tooth enamel from Washington University in St. Louis. Scientists often look at microscopic damage to teeth to infer what an animal was eating. This new research—using experiments looking at microscopic interactions between food particles and enamel—demonstrates that even the hardest plant tissues scarcely wear down primate teeth. The results have implications for reconstructing diet, and potentially for our interpretation of the fossil record of human evolution, researchers said. "We found that hard plant tissues such as the shells of nuts and seeds barely influence microwear textures on teeth," said Adam van Casteren, lecturer in biological anthropology in Arts & Sciences, the first author of the new study in Scientific Reports. David S. Strait, professor of physical anthropology, is a co-author. Traditionally, eating hard foods is thought to damage teeth by producing microscopic pits. "But if teeth don't demonstrate elaborate pits and scars, this doesn't necessarily rule out the consumption of hard food items," van Casteren said. Humans diverged from non-human apes about seven million years ago in Africa. The new study addresses an ongoing debate surrounding what some early human ancestors, the australopiths, were eating. These hominin species had very large teeth and jaws, and likely huge chewing muscles. "All these morphological attributes seem to indicate they had the ability to produce large bite forces, and therefore likely chomped down on a diet of hard or bulky food items such as nuts, seeds or underground resources like tubers," van Casteren said. But most fossil australopith teeth don't show the kind of microscopic wear that would be expected in this scenario. The researchers decided to test it out. Previous mechanical experiments had shown how grit—literally, pieces of quartz rock—produces deep scratches on flat tooth surfaces, using a device that mimicked the microscopic interactions of particles on teeth. But there was little to no experimental data on what happens to tooth enamel when it comes in contact with actual woody plant material. For this study, the researchers attached tiny pieces of seed shells to a probe that they dragged across enamel from a Bornean orangutan molar tooth. They made 16 "slides" representing contacts between the enamel and three different seed shells from woody plants that are part of modern primate diets. The researchers dragged the seeds against enamel at forces comparable to any chewing action. The seed fragments made no large pits, scratches or fractures in the enamel, the researchers found. There were a few shallow grooves, but the scientists saw nothing that indicated that hard plant tissues could contribute meaningfully to dental microwear. The seed fragments themselves, however, showed signs of degradation from being rubbed against the enamel. This information is useful for anthropologists who are left with only fossils to try to reconstruct ancient diets. "Our approach is not to look for correlations between the types of microscopic marks on teeth and foods being eaten—but instead to understand the underlying mechanics of how these scars on tooth surface are formed," van Casteren said. "If we can fathom these fundamental concepts, we can generate more accurate pictures of what fossil hominins were eating." 
So those big australopith jaws could have been put to use chewing on large amounts of seeds, without scarring teeth. "And that makes perfect sense in terms of the shape of their teeth," said Peter Lucas, a co-author at the Smithsonian Tropical Research Institute, "because the blunt low-cusped form of their molars are ideal for that purpose." "When consuming many very small hard seeds, large bite forces are likely to be required to mill all the grains," van Casteren said. "In the light of our new findings, it is plausible that small, hard objects like grass seeds or sedge nutlets were a dietary resource for early hominins."
10.1038/s41598-019-57403-w
Biology
Anatomy of a decision—mapping early development
Antonio Scialdone et al, Resolving early mesoderm diversification through single-cell expression profiling, Nature (2016). DOI: 10.1038/nature18633 Journal information: Nature
http://dx.doi.org/10.1038/nature18633
https://phys.org/news/2016-07-anatomy-decisionmapping-early.html
Abstract In mammals, specification of the three major germ layers occurs during gastrulation, when cells ingressing through the primitive streak differentiate into the precursor cells of major organ systems. However, the molecular mechanisms underlying this process remain unclear, as numbers of gastrulating cells are very limited. In the mouse embryo at embryonic day 6.5, cells located at the junction between the extra-embryonic region and the epiblast on the posterior side of the embryo undergo an epithelial-to-mesenchymal transition and ingress through the primitive streak. Subsequently, cells migrate, either surrounding the prospective ectoderm contributing to the embryo proper, or into the extra-embryonic region to form the yolk sac, umbilical cord and placenta. Fate mapping has shown that mature tissues such as blood and heart originate from specific regions of the pre-gastrula epiblast 1, but the plasticity of cells within the embryo and the function of key cell-type-specific transcription factors remain unclear. Here we analyse 1,205 cells from the epiblast and nascent Flk1+ mesoderm of gastrulating mouse embryos using single-cell RNA sequencing, representing the first transcriptome-wide in vivo view of early mesoderm formation during mammalian gastrulation. Additionally, using knockout mice, we study the function of Tal1, a key haematopoietic transcription factor, and demonstrate, contrary to previous studies performed using retrospective assays 2,3, that Tal1 knockout does not immediately bias precursor cells towards a cardiac fate. Main Traditional experimental approaches for genome-scale analysis rely on large numbers of input cells and therefore cannot be applied to study early lineage diversification directly in the embryo. To address this, we used single-cell transcriptomics to investigate mesodermal lineage diversification towards the haematopoietic system in 1,205 single cells covering a time course from early gastrulation at embryonic day (E)6.5 to the generation of primitive red blood cells at E7.75 (Fig. 1a and Extended Data Figs 1a and 2a). Using previously published metrics (Methods), we observed that the data were of high quality. Five hundred and one single-cell transcriptomes were obtained from cells taken from dissected distal halves of E6.5 embryos sorted for viability only, which contain all of the epiblast cells, including the developing primitive streak, and a limited number of visceral endoderm and extra-embryonic ectoderm cells. From E7.0, embryos were staged according to anatomical features (Methods) as primitive streak, neural plate and head fold. The VEGF receptor Flk1 (Kdr) was used to capture cells as it marks much of the developing mesoderm 4. During subsequent blood development, Flk1 is downregulated and CD41 (Itga2b) is upregulated 5. We therefore also sampled cells expressing both markers, and CD41 alone, at the neural plate and head fold stages (Fig. 1a and Extended Data Figs 1b and 2a), giving a total of 138 cells from E7.0 (primitive streak), 259 from E7.5 (neural plate) and 307 from E7.75 (head fold). Figure 1: Single-cell transcriptomics identifies ten populations relevant to early mesodermal development. a, Whole-mount images and schematics of E6.5–7.75 embryo sections. Colours indicate approximate locations of sorted cells. Anterior, left; posterior, right. Scale bars, 200 μm. b, Heatmap showing key genes distinguishing ten clusters.
Coloured bars indicate assigned cluster (top), stage (middle: turquoise, E6.5; purple, primitive streak (E7.0); green, neural plate (E7.5); red, head fold (E7.75)) and the sorted population (bottom: green, E6.5 epiblast; blue, Flk1+; turquoise, Flk1+CD41+; red, Flk1−CD41+). c, t-SNE of all 1,205 cells coloured by embryonic stage, and (d) according to clusters in b. After rigorous quality control, 2,085 genes were identified as having significantly more heterogeneous expression across the 1,205 cells than expected by chance (Extended Data Fig. 2b–d). Unsupervised hierarchical clustering in conjunction with a dynamic hybrid cut (Methods) yielded ten robust clusters with varying contributions from the different embryonic stages (Fig. 1b, Extended Data Fig. 3, Methods and cell numbers in Extended Data Fig. 3h). Using t-distributed stochastic neighbour embedding (t-SNE) dimensionality reduction to visualize the data, three major groups were observed: one comprising almost all E6.5 cells, another mainly consisting of earlier primitive streak and neural plate stage cells, and a third containing predominantly later head fold stage cells (Fig. 1c). Importantly, clusters were coherent with the t-SNE visualization except for the small cluster 5 (Fig. 1d). The expression of key marker genes allowed us to assign identities to each cluster: visceral endoderm, extra-embryonic ectoderm, epiblast, early mesodermal progenitors, posterior mesoderm, endothelium, blood progenitors, primitive erythrocytes, allantoic mesoderm and pharyngeal mesoderm (Fig. 1b, Extended Data Figs 3h and 4). Because of the limited cell numbers and the lack of markers for their prospective isolation, conventional bulk transcriptome analysis of these key populations has never before been attempted. Since the T-box transcription factor Brachyury (encoded by the T gene) marks the nascent primitive streak 6, we investigated the gene expression programs associated with T induction in the E6.5 cells (cluster 3). T expression was restricted to a distinct subset of epiblast cells found closest to cluster 4 (Fig. 1d and Extended Data Fig. 5b), with rare isolated cells within the bulk of the epiblast population also expressing moderate levels, consistent with priming events for single gastrulation-associated genes. T expression correlated with other gastrulation-associated genes including Mixl1 and Mesp1 (Fig. 2a), with Mesp1 highly expressed only in the small subset of cells situated at the pole of the E6.5 epiblast cluster (association of T and Mesp1 expression: P value 3 × 10⁻¹⁵, Fisher's exact test). We also observed a subset of cells distinct from the T+/Mesp1+ population, which expressed Foxa2, suggestive of endodermal priming 7 (Extended Data Fig. 5d). Figure 2: Transcriptional program associated with T induction in E6.5 epiblast cells. a, t-SNE of the 481 E6.5 cells in cluster 3. Points are coloured by expression of T (Brachyury) and Mixl1, Mesp1 and Frzb. b, Heatmap showing the ten genes most highly positively and negatively correlated with T (Supplementary Information Table 1). c, Forward scatter for the 481 E6.5 epiblast cells in cluster 3, with cells grouped according to T/Mesp1 expression. Boxplots indicate the median and interquartile range. P values were calculated using a two-sided Welch's t-test for samples with unequal variance, with false discovery rate correction for multiple testing.
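The T-Mesp1 association reported above rests on Fisher's exact test applied to a 2 × 2 table of expressing versus non-expressing cells. A minimal sketch follows; the counts are hypothetical placeholders chosen only to sum to the 481 cluster-3 cells, since the actual per-cell counts behind P = 3 × 10⁻¹⁵ are not given in the text.

import numpy as np
from scipy.stats import fisher_exact

# Rows: T+ / T- cells; columns: Mesp1+ / Mesp1- cells (made-up counts,
# chosen only so that the 481 cluster-3 cells add up).
table = np.array([[30, 15],
                  [5, 431]])
odds_ratio, p_value = fisher_exact(table, alternative="two-sided")
print(odds_ratio, p_value)   # a strong positive association, as in the paper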
Genes displaying correlated expression with T included known markers and regulators such as Mixl1 , as well as genes not previously implicated in mammalian gastrulation, such as Slc35d3 , an orphan member of a nucleotide sugar transporter family 8 , and the retrotransposon-derived transcript Cxx1c 9 ( Fig. 2b and Supplementary Information Table 1 ). Genes negatively correlated with T were consistently expressed across the majority of epiblast cells, suggesting that cells outside the primitive streak have not yet committed to a particular fate, consistent with the known plasticity of epiblast cells in transplant experiments 10 . Ingressing epiblast cells undergo an epithelial-to-mesenchymal transition (EMT), turning from pseudo-stratified epithelial cells into individual motile cells, a morphological change associated with alterations in cell size and shape 11 . Our E6.5 epiblast cells were isolated using index sorting, thus providing a forward scatter value for each cell. As shown in Fig. 2c , T + / Mesp1 + co-expressing cells showed a significant reduction in forward scatter values compared with T + / Mesp1 − and T − cells. Since forward scatter correlates positively with cell size, this observation provides a direct link between specific transcriptional programs and characteristic physical changes associated with gastrulation. As T + / Mesp1 + cells also express Mesp2 , this observation was consistent with the known EMT defect in Mesp1 / Mesp2 double knockout embryos 12 . Index sorting therefore linked expression changes with dynamic physical changes similar to those recognized to occur during chicken gastrulation 13 . We next focused on mesodermal lineage divergence during and immediately after gastrulation. We reasoned that approaches analogous to those used to order single cells in developmental pseudotime could be used to infer the location of cells in pseudospace, specifically with respect to the anterior–posterior axis of the primitive streak ( Fig. 3a ). To this end, we used diffusion maps 14 , a dimensionality reduction technique particularly suitable for reconstructing developmental trajectories 15 . We identified the diffusion-space direction that most probably represents true biological effects (see Methods), which we interpreted as the pseudospace coordinate (red line in Fig. 3b and Extended Data Fig. 6a–d ). Hierarchical clustering revealed three groups of genes ( Fig. 3c , Extended Data Fig. 6e and Supplementary Information Table 4 ) showing a gradient of expression along the pseudospace axis. These were assigned as anterior (darker blue, 334 genes) and posterior (lighter blue, 87 genes) owing to the enrichment of genes with known differential expression along the anterior–posterior axis of the primitive streak ( Fig. 3d and Extended Data Figs 6f–h and 7 ). A third cluster was expressed highly at either end of the pseudospace axis (turquoise, 41 genes). Interestingly, the more posterior Flk1 + mesodermal cells are associated with the allantois, blood and endothelial clusters ( Fig. 1d and Extended Data Fig. 5c ), which are known to arise from the posterior primitive streak. Gene ontology analysis revealed that the putative anterior genes were associated with terms relating to somite development, endoderm development and Notch signalling, consistent with a more anterior mesoderm identity 16 ( Supplementary Information Table 2a and Extended Data Fig. 6h ).
Conversely, the putative posterior mesoderm cluster was associated with BMP signalling, hindlimb development and endothelial cell differentiation, consistent with the posterior portion of the streak 17 . Figure 3: Dimensionality reduction reveals transcriptional profiles associated with cell location in the embryo. a , Schematic of tissue emergence along the anterior–posterior primitive streak, derived from ref. 29 . Mesodermally and endodermally derived tissues are marked by a red and green line, respectively; bi, blood island; al, allantois; amn, amnion; ps, primitive streak; n, node. b , Diffusion map of 216 cells in cluster 4 with pseudospace axis in red. Projections onto this axis represent pseudospace coordinates. c , Heatmap for differentially expressed genes along the pseudospace axis, showing genes more highly expressed in the anterior (dark blue) and posterior region (light blue), or highly expressed at either end (aquamarine). d , Expression profiles for example genes (red line, local polynomial fit). Although derived from the same embryonic stages as the mesodermal progenitor cells, cluster 7 lacks expression of genes such as Mesp1 , yet expresses Tal1 , Sox7 , Tek (Tie2) and Fli1 , which are vital for extra-embryonic mesoderm formation ( Fig. 1b and Extended Data Figs 5 and 7 ). Expression of Kdr and Itga2b ( Extended Data Fig. 5b ) further highlights clusters 7 and 8 (brown) as corresponding to the developmental journey towards blood, with a transition to mostly head fold stage cells in cluster 8 and increasing expression of embryonic haemoglobin Hbb-bh1 ( Fig. 1b ). Given the apparent trajectory of blood development from cluster 7 to 8, we used an analogous approach to that described above to recover a pseudotemporal ordering of cells ( Fig. 4a , Extended Data Fig. 8a–d and Methods). Eight hundred and three genes were downregulated, including the haematovascular transcription factor Sox7 , which is known to be downregulated during blood commitment 15 ( Fig. 4c, d and Extended Data Fig. 8e, f ). Sixty-seven genes were upregulated including the erythroid-specific transcription factors Gata1 and Nfe2 , and embryonic globin Hbb-bh1 ( Fig. 4b, d, e and Extended Data Fig. 8 ). Twenty-seven genes were transiently expressed, including the known erythroid regulator Gfi1b ( Supplementary Information Table 5 ). Significant GO terms associated with the upregulated genes were indicative of erythroid development, while downregulated genes were associated with other mesodermal processes including vasculogenesis and osteoblast differentiation ( Supplementary Information Table 2b ). Figure 4: Inferring the transcriptional program underlying primitive erythropoiesis. a , Diffusion map of 271 cells in clusters 7 and 8 displaying the inferred pseudotime axis (blue). b , Expression of Hbb-bh1 ordered by pseudotime (red line, local polynomial fit). c , Heatmap ordered along the pseudotime axis. Horizontal bars indicate cluster and developmental stage. Genes shown were repressed (grey), activated (green) or transiently expressed (blue). d , Examples of activated and repressed genes and ( e ) Gata1 as in b . f , University of California, Santa Cruz Browser tracks for Gata1 ChIP-seq and input in Runx1 + Gata1 + cells; the Nfe2 locus is shown. g , Percentage of genes in each group identified in c overlapping Gata1 targets. Numbers indicate total numbers of genes in each category from c .
Gata1-null embryos die at around E10.5 owing to the arrest of yolk sac erythropoiesis 18 . We generated genome-wide ChIP-seq (chromatin immunoprecipitation followed by sequencing) data for Gata1 in haematopoietic cells derived after 5 days of embryonic stem cell (ESC) in vitro differentiation ( Extended Data Fig. 9a–c ). The group of upregulated genes from the pseudotime analysis showed a pronounced overlap with Gata1 targets ( P < 2.2 × 10⁻¹⁶, Fisher's test) including known targets such as Nfe2 and Zfpm1 ( Fig. 4f, g , Extended Data Fig. 9d, e and Supplementary Information Table 6 ). Integration of single-cell transcriptomics with complementary transcription factor binding data therefore predicts likely in vivo targets of developmental regulators such as Gata1. Two contrasting mechanisms are commonly invoked to explain how drivers of cell fate determination regulate cell type diversification. The first involves fate restriction through a stepwise sequence of binary fate choices and is supported by mechanistic investigations using ESC differentiation 2 , 19 . The alternative invokes acquisition of diverse fates from independent precursor cells and is commonly supported by cell transplantation and lineage tracing analysis ( Fig. 5a ) 1 , 10 , 20 , 21 . In contrast to the retrospective nature of transplantation and lineage tracing experiments, where measurements are typically obtained a day or more after cell fate decisions are made, single-cell transcriptomics allows cellular states to be determined at the moment when fate decisions are executed, since low cell numbers are not a limiting factor. Figure 5: Analysis of Tal1 −/− embryos suggests independent fate acquisition. a , Two cell fate diversification models. b , Tal1 in situ hybridization at head fold stage. Scale bar, 200 μm. c , Flow cytometry of WT and Tal1 −/− mice at head fold and E8.25. d , Blood program genes are differentially expressed between nascent mesoderm (blue) and endothelial (red) and blood cells (brown). Differential expression between 45 Tal1 −/− and 59 WT endothelial cells (lower left t-SNE) identified 50 downregulated genes. Gene set overlap (centre) indicates failure to induce the blood program in Tal1 −/− endothelium ( P < 2.2 × 10⁻¹⁶, Fisher's test). On the right are expression distributions for selected genes in WT (black) or Tal1 −/− (grey) endothelial cells. e , For genes previously reported 3 to be bound and activated (left) or bound and repressed (right) by Tal1, fold change between Tal1 −/− and WT endothelium (defined in d ) is plotted against average expression. Red circles, genes with a fold change >1.5 and a false discovery rate <0.05. The bHLH transcription factor Tal1 (also known as Scl) is essential for the development of all blood cells 22 , 23 with strong expression in posterior mesodermal derivatives ( Fig. 5b ). Tal1 −/− bipotential blood/endothelial progenitors cannot progress to a haemogenic endothelial state 19 , Tal1 overexpression drives transdifferentiation of fibroblasts into blood progenitors 24 and Tal1 −/− mesodermal progenitors from the yolk sac give rise to aberrant cardiomyocyte progenitors when cultured in vitro 2 . However, the precise nature of the molecular defect of Tal1 −/− mesodermal progenitors within the embryo has remained obscure, because cell numbers are too small for conventional analysis.
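As with the Gata1 target overlap above ( P < 2.2 × 10⁻¹⁶, Fisher's test), the significance of an overlap between two gene sets can be computed from the hypergeometric tail, the one-sided equivalent of Fisher's test for set overlap. A sketch follows; the universe, target-set and overlap counts are hypothetical placeholders.

# Hedged sketch: significance of the overlap between a set of upregulated
# genes and ChIP-seq target genes, via the hypergeometric tail. All counts
# except the 67 upregulated genes (stated in the text) are hypothetical.
from scipy.stats import hypergeom

n_universe = 20000   # hypothetical number of tested genes
n_targets = 3000     # hypothetical number of Gata1 ChIP-seq target genes
n_upregulated = 67   # upregulated genes along pseudotime (from the text)
n_overlap = 40       # hypothetical overlap between the two sets

# P(overlap >= n_overlap) under random sampling without replacement
p = hypergeom.sf(n_overlap - 1, n_universe, n_targets, n_upregulated)
print(f"P = {p:.2e}")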
We profiled single Flk1 + cells from 4 wild type (WT) and 4 Tal1 −/− embryos obtained from E7.5 (neural plate) to E8.25 (four-somite stage) (256 WT and 121 Tal1 −/− cells; Fig. 5c and Extended Data Fig. 10 ), and computationally assigned cells to the previously defined 10 clusters (Methods). Cells from WT embryos contributed to all clusters, while Tal1 −/− embryos did not contain any cells corresponding to the blood progenitor and primitive erythroid clusters (yellow and brown, Fig. 5d ) consistent with the known failure of primitive erythropoiesis in Tal1 −/− embryos 23 and their lack of CD41 expression ( Fig. 5c ). Forty-five Tal1 −/− cells were confidently mapped to the endothelial (red) cluster, which therefore allowed us to investigate the early consequences of Tal1 deletion in this key population for definitive haematopoietic development ( Fig. 5d and Supplementary Information Tables 7 and 8 ). Fifty genes were downregulated in Tal1 −/− endothelial cells (fold change < 0.67, 5% false discovery rate). These included known regulators of early blood development ( Itga2b , Lyl1 , Cbfa2t3 , Hhex , Fli1 , Ets2 , Egfl7 , Sox7 , Hoxb5 ), consistent with Tal1 specifying a haematopoietic fate in embryonic endothelial progenitor cells 19 , and in particular Hoxb5 , which has recently emerged as a powerful marker for definitive blood stem cells 25 . Single-cell profiling also identified genes with altered distributions of expression. For example, Sox7 changed from a largely unimodal pattern in WT cells to a bimodal on/off pattern in Tal1 −/− endothelial cells, while Cbfa2t3 showed the opposite pattern ( Fig. 5d ). However, we did not observe upregulation of cardiac markers in Tal1 −/− endothelial cells ( Fig. 5e and Supplementary Information Tables 8 and 9 ). Previously, this upregulation had been observed in yolk sac endothelial cells collected 1–1.5 days later than our data 2 , and had been taken as evidence that Tal1 acts as a gatekeeper controlling the balance between alternative cardiac and blood/endothelial fates within single multipotent mesodermal progenitors 3 . Our results, however, suggest that the primary role of Tal1 is induction of a blood program, and the subsequent ectopic expression of cardiac genes may be the result of secondary induction events acting on a still relatively plastic mesodermal cell blocked from executing its natural developmental program. Here we have used single-cell transcriptomics to obtain a comprehensive view of the transcriptional programs associated with mammalian gastrulation and early mesodermal lineage diversification. Further technological advances to resolve epigenetic processes at single-cell resolution 26 and match single-cell expression profiles with spatial resolution 27 , 28 are probably key drivers of future progress in this field. Finally, our analysis of Tal1 −/− embryos illustrates how the phenotypes of key regulators can be re-evaluated at single-cell resolution to advance our understanding of early mammalian development. Methods No statistical methods were used to predetermine sample size. The experiments were not randomized. The investigators were not blinded to allocation during experiments and outcome assessment. Timed matings and embryo collection All procedures were performed in strict adherence to United Kingdom Home Office regulations (project licence 70/8406). Timed matings were set up between CD1 mice (which produce large litters). 
Embryos were staged according to the morphological criteria of Downs and Davies 30 , and classified broadly as primitive streak, neural plate or head fold stage. Suspensions of cells from individual embryos were prepared by incubating with TrypLE Express dissociation reagent (Life Technologies) at 37 °C for 10 min and quenching with heat-inactivated serum. All cells were stained with DAPI for viability. At E6.5, the distal half of the embryo was dissected and dissociated into a single-cell suspension, and live cells were sorted. For E7.0 and older, suspensions consisted of the whole embryo and were also stained with Flk1-APC (AVAS12 at 1:400 dilution; BD Bioscience) and only Flk1 + cells were collected. For cell sorting of CD41 + Flk1 − and CD41 + Flk1 + cells from neural plate and head fold stages, suspensions were stained with Flk1-APC, PDGFRa-PE (APA5 at 1:200 dilution; Biolegend) and CD41-PEcy7 (MWReg30 at 1:400 dilution; Biolegend) for 20 min at 4 °C as described 31 . Cells were sorted from seven E6.5 embryos. Flk1 + cells were sorted from three primitive streak stage, four neural plate stage and three head fold stage embryos ( Extended Data Fig. 1a ). CD41 + Flk1 − and CD41 + Flk1 + cells were sorted from the same embryos, as well as from an additional eight embryos each at neural plate and head fold stages ( Extended Data Fig. 1b ). Cell sorting was performed with a BD Influx cell sorter in single-cell sort mode with index sorting to confirm the presence of a single event in each well. Additional cells were sorted into tissue culture plates to visually confirm the presence of single events. To obtain Tal1 −/− cells, timed matings were set up between Tal1 LacZ/+ mice 32 . Flk1 + cells were sorted as above from four embryos for each genotype: from one embryo for each genotype at neural plate and four-somite (4S) stages, from two head fold stage embryos for Tal1 LacZ/LacZ (designated Tal1 −/− ), one head fold stage WT embryo and one WT embryo intermediate between neural plate and head fold stages. Genotyping PCR using 1/20 of the cell suspension was performed as described previously 32 . Single-cell RNA sequencing library preparation and mapping of reads scRNA-seq analysis used the Smart-seq2 protocol as previously described 33 . Single cells were sorted by fluorescence-activated cell sorting (FACS) into individual wells of a 96-well plate containing lysis buffer (0.2% (v/v) Triton X-100 and 2 U/μl RNase inhibitor (Clontech)) and stored at −80 °C. Libraries were prepared using the Illumina Nextera XT DNA preparation kit and pooled libraries of 96 cells were sequenced on the Illumina Hi-Seq 2500. Reads were mapped simultaneously to the Mus musculus genome (Ensembl version 38.77) and the ERCC sequences using GSNAP (version 2014-10-07) with default parameters. HTseq-count 34 was used to count the number of reads mapped to each gene (default options). Identification of poor quality cells To assess data quality 35 , five metrics were used: (1) total number of mapped reads, (2) fraction of total reads mapped to endogenous genes, (3) fraction of reads mapped to endogenous genes that are allocated to mitochondrial genes, (4) fraction of total reads mapped to ERCC spike-ins and (5) level of sequence duplication (as estimated by FastQC, version 0.11.4).
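A minimal pandas sketch of such a five-metric filter follows, with the thresholds passed in as parameters (the values actually used are given in the next paragraph); the per-cell table and its column names are hypothetical placeholders.

# Hedged sketch: retain cells passing five QC criteria. The default
# thresholds mirror those stated in the following paragraph; the column
# names of the per-cell QC table are hypothetical placeholders.
import pandas as pd

def filter_cells(qc: pd.DataFrame, min_reads=200_000, min_gene_frac=0.20,
                 max_mito_frac=0.20, max_ercc_frac=0.20, max_dup_frac=0.80):
    keep = (
        (qc["mapped_reads"] > min_reads) &          # criterion 1
        (qc["frac_reads_in_genes"] > min_gene_frac) &  # criterion 2
        (qc["frac_mito"] < max_mito_frac) &         # criterion 3
        (qc["frac_ercc"] < max_ercc_frac) &         # criterion 4
        (qc["frac_duplicated"] < max_dup_frac)      # criterion 5
    )
    return qc[keep]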
For all downstream analyses, we only retained samples that had (1) more than 200,000 reads mapped (either to ERCC spike-ins or endogenous mRNA), (2) more than 20% of total reads mapped to mRNA, (3) less than 20% of mapped reads allocated to mitochondrial genes, (4) less than 20% of reads mapped to ERCC spike-ins and (5) less than 80% of duplicated sequences. Out of the 2,208 cells that were captured across the two experiments, 1,582 (that is, ~72% of the total) passed our quality check. A t-SNE projection 36 of the values of these five metrics ( Extended Data Fig. 2b ) shows that most of the discarded cells tend to cluster together and fail at least two criteria. All metrics were standardized before applying t-SNE with the 'RtSNE' function (default parameters) from the R package 'RtSNE' (version 0.1) 37 . Normalization of read counts The data were normalized for sequencing depth using size factors 38 calculated on endogenous genes. By doing so, we also normalized for the amount of RNA obtained from each cell 39 , which is itself highly correlated with cell cycle stage 40 . Highly variable genes and GO enrichment analysis Highly variable genes were identified by using the method described in Brennecke et al . 39 . In brief, we fitted the squared coefficient of variation as a function of the mean normalized counts 39 . In the fitting procedure, to minimize the skewing effect due to the lowly expressed genes 39 , only genes with a mean normalized count greater than 10 were used. Genes with an adjusted P value (Benjamini–Hochberg method) less than 0.1 were considered significant (red circles in Extended Data Fig. 2c ). This set of highly variable genes was used for the clustering analysis discussed below. The GO enrichment analysis was performed using TopGO in its 'elimination mode' with Fisher's exact test; we considered GO categories with an unadjusted P value below 10⁻⁴ to be significant. Differentially expressed genes To find genes differentially expressed between two groups of cells we used edgeR 41 (version 3.12). Before running edgeR, we excluded genes annotated as pseudogenes in Ensembl, sex-related genes (Xist and genes on the Y chromosome) and genes that were not detected or were expressed at very low levels (we considered only genes that had more than ten reads per million in at least n cells, n being equal to 10% of the cells in the smaller group being compared). The function 'glmTreat' was then used to identify the genes having a fold change significantly greater than 1.5 at a false discovery rate threshold equal to 0.05. Clustering analysis Clustering analysis was performed on the 1,205 WT cells from the first experiment that passed the QC. The Spearman correlation coefficient, ρ , was computed between the expression levels of highly variable genes in each pair of cells, which was then used to build a dissimilarity matrix defined as (1 − ρ )/2. Hierarchical clustering was performed ('hclust' R function with the 'average' method) on the dissimilarity matrix and clusters were identified by means of the dynamic hybrid cut algorithm 42 . The R function 'cutreeDynamic' with the 'hybrid' method and a minimum cluster size equal to ten cells was used ('dynamicTreeCut' package, version 1.62). This function allows the user to specify the 'deepSplit' parameter that controls the sensitivity of the method: higher values of this parameter correspond to higher sensitivity and can result in more clusters being identified, but also entail an increased risk of overfitting the data.
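For illustration, a Python sketch of the dissimilarity and clustering computation described above. The authors used R's 'hclust' with the dynamicTreeCut hybrid cut; the fixed cut height below is a simplification standing in for the dynamic hybrid cut, not their procedure.

# Hedged sketch: Spearman-based cell-cell dissimilarity, (1 - rho)/2, with
# average-linkage hierarchical clustering. A fixed distance cut stands in
# for the dynamic hybrid cut, purely for illustration.
import numpy as np
from scipy.stats import spearmanr
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

def cluster_cells(expr: np.ndarray, cut_height: float = 0.4) -> np.ndarray:
    """expr: genes x cells matrix restricted to highly variable genes."""
    rho, _ = spearmanr(expr)            # correlations between columns (cells)
    dissim = (1.0 - rho) / 2.0
    np.fill_diagonal(dissim, 0.0)
    z = linkage(squareform(dissim, checks=False), method="average")
    return fcluster(z, t=cut_height, criterion="distance")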
The optimal trade-off between robustness of clustering and sensitivity was found by analysing the results of the algorithm with all possible values of the deepSplit parameter (that is, integer values from 0 to 4) on 100 subsamples of our data. In particular, in each subsample, we removed 10% of genes randomly selected before computing the dissimilarity matrix and applying the clustering algorithm. The statistics of the Pearson gamma and the average silhouette width (computed with the ‘cluster.stats’ function included in the R package ‘fpc’, version 2.1-10) 43 , 44 of the subsamples (see Extended Data Fig. 3a,b ) suggest that with ‘deepSplit=2’ a good compromise is reached between robustness and sensitivity for our data. We identified ten different clusters as well as two outlier cells that, although similar in gene expression to the mesodermal progenitor cells (cluster 4), were not assigned to any cluster by the algorithm, probably because of their relatively poor quality. We then evaluated how specifically each gene is expressed in any given cluster. First, we found the differentially expressed genes (as described above) between all pairs of clusters. Marker genes for cluster i are expected to be significantly upregulated in i across all pairwise comparisons involving cluster i . The average rank of a marker gene across the pairwise comparisons provides a measure of how specifically the marker is expressed in the cluster. Extended Data Fig. 3c–f shows the expression values of marker genes for four different clusters. We provide the full list of markers in Supplementary Information Table 3 . The clusters were visualized by using t-SNE (as implemented in the ‘RtSNE’ R package) on the dissimilarity matrix. Single-cell trajectories in pseudospace: the anterior/posterior axis of the primitive streak As discussed in the main text, cells allocated to cluster 4 ( Fig. 1b–d ) are cells that have probably exited the primitive streak only recently. We sought to align the cells along a pseudospatial trajectory representing the anterior–posterior axis of the primitive streak, which would allow us to identify the likely original locations of each cell along such an axis. To do this we adopted an unsupervised approach: we did not use any prior information about marker genes, but selected the strongest signal present in this cluster of cells (controlling for potential batch effects) and later verified its biological meaning. We first used a diffusion map-based technique to reduce the dimensionality of the data set. Diffusion maps have recently been successfully applied to identify developmental trajectories in single-cell qPCR and RNA-seq data 14 , 15 . We used the implementation of the ‘destiny’ R package (‘DiffusionMap’ function) developed by Angerer et al . 45 . We restricted the analysis to genes that are highly variable among cells in the blue cluster and have an average expression above ten normalized read counts. The centred cosine similarity was used (‘cosine’ option in the ‘DiffusionMap’ function) and only the first two diffusion components (DC1 and DC2) were retained for downstream analysis. In addition to biologically meaningful signals, batch effects (owing to cells being sorted and processed on different plates) can also be present and induce structure within the data. 
While in our data set the batch effect does not strongly influence the definition of different populations of cells, it might become relevant when finer structures within a single cluster of cells are considered (see Extended Data Fig. 6a ). To tease apart the signals due to biological and batch effects, we computed the fraction of variance attributable to the batch effect along each direction in the diffusion space using a linear regression model. The direction 'orthogonal' to the batch effect, that is, the direction associated with the smallest fraction of variance explained by the batch effect, was considered as mostly driven by a biologically relevant signal. Hence, all cells were projected onto this direction to obtain a 'pseudo-coordinate' representing the state of a cell relative to the biological process captured by the diffusion map. The direction was identified by the angle α that it formed with the DC1 axis ( Extended Data Fig. 6c ). Cells considered here are mostly from two batches including cells from the primitive streak stage (plates SLX-8408 and SLX-8409) and two batches including cells from the neural plate stage (plates SLX-8410 and SLX-8411; Extended Data Fig. 6b ). For each of these two sets of batches, we computed the fraction of variance that can be explained by the batch covariate along any possible direction in the diffusion plot by using a linear regression model. The angles α₁ and α₂ corresponding to the directions orthogonal to the two batch effects are very close to each other ( Extended Data Fig. 6c ); we took the average value of α between these two angles to approximate the direction orthogonal to both batch effects. Cells' coordinates in the diffusion space were projected along the direction identified by the average value of α , and this projection was interpreted as a 'pseudospace' coordinate representing the position of cells along the primitive streak (see main text and Fig. 3 ). We tested the robustness of such a pseudospace coordinate by repeating the same analysis with alternative dimensionality reduction techniques (t-SNE and independent component analysis), which gave highly correlated coordinates (see Extended Data Fig. 6d ). A principal component analysis performed with a set of previously known markers for the anterior and posterior regions of the primitive streak also yielded a first component highly correlated with the pseudospace coordinate (see Extended Data Fig. 6h left panel). Moreover, the pseudospace coordinate had a positive (negative) correlation with the posterior (anterior) markers used (see Extended Data Fig. 6h right panel). These results strongly support the robustness of the signal we identified as well as its biological interpretation. Once the pseudospace trajectory was defined, we selected genes that were differentially expressed along the trajectory. First, we removed all genes that were not detected in any cell. Then, for each gene, we fitted the log₁₀(expression levels) (adding a pseudocount of 1) by using two local polynomial models: one with degree 0 and another with degree 2 ('locfit' function in 'locfit' R package, nearest neighbour component parameter equal to 1). The first, simpler model is better suited for genes that do not change their expression level along the trajectory. The second model has a greater number of parameters and is able to reproduce the more complex dynamics of genes that are differentially expressed.
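An illustrative numpy version of the angle search described above (the study's implementation was in R). It assumes a two-column array of diffusion components and a single binary batch label per cell, whereas the authors handled two batch groupings and averaged their angles.

# Hedged sketch of finding the diffusion-plane direction least explained by
# the batch covariate: project cells onto each candidate direction, regress
# the projection on batch labels, and keep the angle with the smallest R^2.
import numpy as np

def batch_orthogonal_angle(dc: np.ndarray, batch) -> float:
    """dc: cells x 2 array of diffusion components (DC1, DC2);
    batch: binary batch labels per cell. Returns angle in radians."""
    batch = np.asarray(batch, dtype=float)
    x = np.column_stack([np.ones_like(batch), batch])
    best_angle, best_r2 = 0.0, np.inf
    for angle in np.linspace(0.0, np.pi, 180, endpoint=False):
        proj = dc @ np.array([np.cos(angle), np.sin(angle)])
        beta, *_ = np.linalg.lstsq(x, proj, rcond=None)
        resid = proj - x @ beta
        r2 = 1.0 - resid.var() / proj.var()
        if r2 < best_r2:
            best_angle, best_r2 = angle, r2
    return best_angle

# The pseudospace coordinate is then dc @ [cos(best), sin(best)].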
We evaluated these two local polynomial models by using the Akaike information criterion (AIC), a score that measures how well the data are reproduced by the model and includes a penalization for more complex models 46 . Better models according to this criterion correspond to smaller AIC scores. To compute the AIC scores for the two models, we used the 'aic' function available in the 'locfit' R package, and then calculated the difference: ΔAIC = AIC(degree = 2) − AIC(degree = 0). Negative values indicate that the more complex model with degree 2 local polynomials performs better, and therefore correspond to genes that are more likely to be differentially expressed. Genes having a ΔAIC < −2 were considered to be significantly differentially expressed along the trajectory 46 . A hierarchical tree was built with the normalized expression patterns of the 462 differentially expressed genes (function 'hclust' with average linkage method and dissimilarity based on Spearman correlation) and a dynamic hybrid cut algorithm ('cutreeDynamic' function, minimum cluster size equal to 5) split this set of genes into three clusters according to the type of dynamics they have (see Fig. 3 , Extended Data Fig. 6e and Supplementary Information Table 4 ). Single-cell trajectories in pseudotime: the blood developmental trajectory As discussed in the main text, clusters 7 and 8 (yellow and brown clusters in Fig. 1b, d ) include blood progenitors at different stages of differentiation. By using a procedure analogous to the one described above, we aligned these cells along a trajectory representing embryonic blood development. Extended Data Fig. 8a shows the diffusion plot with cells from the yellow and the brown clusters. Most of these cells come from plates SLX-8344 and SLX-8345 that were collected from embryos at neural plate and late head fold stages (see Extended Data Fig. 8b ). With a linear regression model, in which we controlled for biological parameters such as stage and sorting, we found the direction that correlates the least with the batch effect associated with these two plates and projected all cells onto it ( Extended Data Fig. 8c ). Note that the minimum correlation with the batch effect is achieved at a very small value of α (~10°, see Extended Data Fig. 8c ), suggesting that the first diffusion component is mainly driven by a biologically meaningful signal and the batch effect plays a minor role here even at this more detailed scale of analysis. The new cell coordinate obtained from the projection was interpreted as a 'pseudotime' coordinate, which represents the differentiation stage of each cell along its journey towards an erythroid fate. As expected, cells in the yellow cluster have a smaller pseudotime coordinate compared with the brown cluster, which is mainly composed of more differentiated primitive erythroid cells. An analysis with alternative dimensionality reduction techniques yielded highly correlated pseudotime coordinates, suggesting the robustness of the signal ( Extended Data Fig. 8d ). Furthermore, our biological interpretation of the pseudotime coordinate is supported by the expression pattern of genes that are known to be upregulated or downregulated along the blood developmental trajectory, as is clear via principal component analysis (see Extended Data Fig. 8f ).
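A hedged sketch of the ΔAIC screen: the authors fitted local polynomials with the 'locfit' R package, whereas this illustration substitutes ordinary global polynomial fits and a Gaussian-error AIC, so it reproduces the logic rather than the exact numbers.

# Hedged sketch of the delta-AIC model comparison, using ordinary
# least-squares polynomial fits (degree 0 vs degree 2) in place of locfit's
# local polynomials. expr is one gene's log10(counts + 1) along the axis.
import numpy as np

def aic_gaussian(y: np.ndarray, yhat: np.ndarray, n_params: int) -> float:
    rss = np.sum((y - yhat) ** 2)
    n = len(y)
    return n * np.log(rss / n) + 2 * n_params

def delta_aic(pseudospace: np.ndarray, expr: np.ndarray) -> float:
    flat = np.full_like(expr, expr.mean())                 # degree-0 model
    curved = np.polyval(np.polyfit(pseudospace, expr, 2),  # degree-2 model
                        pseudospace)
    return aic_gaussian(expr, curved, 3) - aic_gaussian(expr, flat, 1)

# Genes with delta_aic < -2 would be flagged as varying along the axis.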
By using the filtering and clustering procedure described in the previous section, we were able to detect 897 genes that were differentially expressed along the trajectory, which were divided into three clusters, each displaying a different type of dynamics (see Extended Data Fig. 8e and Supplementary Information Table 5 ). Random Forest to allocate cells to previously identified clusters Cells captured in the Tal1 experiment (testing data set) were allocated to the clusters we previously identified by using a Random Forest algorithm 47 (R package 'randomForest', version 4.6-12) 48 trained on the cells captured in the first experiment (training data set). The rank-normalized expression levels of all highly variable genes in the training data set were used as variables (the R function 'rank' was used for normalization, ties were averaged). The Random Forest algorithm was first used on the training data to assess variable importance with 1,000 classification trees. The 25% most important variables were selected to grow another set of 1,000 trees that were then used for the classification of the testing data set. With this filtered set of variables, the out-of-bag error estimate was ~4.8%. The quality of allocation of each cell in the testing data set was verified by computing the median of pairwise dissimilarities (defined as (1 − ρ )/2, with ρ being the Spearman correlation) of that cell to all other cells in the training data allocated to the same cluster. Cells in the testing data set having a median pairwise dissimilarity larger than the maximum of the medians of pairwise dissimilarities of cells in the training data were considered to be 'unclassified' (~1.8% of all cells from the testing data set). For the identification of differentially expressed genes between clusters in the testing data, only cells that were confidently allocated to the clusters (that is, cells with a minimum difference of 10% probability between the best and the second best cluster allocation) were used (see the illustrative sketch below). Generation, maintenance and haematopoietic differentiation of Runx1–GFP/Gata1–mCherry ESCs Runx1 GFP/+ Gata1 mCherry/Y ESCs were generated from morulae as described previously 49 , 50 . Cells were not tested for mycoplasma contamination. ESCs were grown on gelatinized plates (0.1% gelatin in water) at 37 °C and 5% CO₂ in ESC media (Knockout DMEM (Life Technologies) with 15% FCS (batch-tested for ESC culture; Life Technologies), 2 mM L-glutamine (PAA Laboratories), 0.5% P/S, 0.1 mM β-mercaptoethanol (Life Technologies) and 10³ U/ml recombinant LIF (ORF Genetics)). Cells were passaged with TrypLE Express dissociation reagent (Life Technologies) every 1–3 days. ESCs were differentiated as embryoid bodies as previously described 31 , 51 . Embryoid bodies were harvested into Falcon tubes after 5 days of culture and dissociated with TrypLE Express dissociation reagent and prepared for FACS. ChIP-seq ChIP was performed as described 52 with modifications for low cell numbers 53 . Approximately 7 × 10⁶ FACS-sorted day 5 embryoid body cells (Runx1-ires–GFP + /Gata1–mCherry + ; Extended Data Fig. 9a ) per ChIP were cross-linked using formaldehyde to a final concentration of 1%. As samples were pooled from several sorts, isolated nuclei were frozen in dry ice-cold isopropanol and stored at −80 °C.
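The illustrative sketch referenced above: an analogous scikit-learn version of the Random Forest allocation step (the study used the R 'randomForest' package). The tree counts and the 25% importance cut follow the text; data shapes and everything else are assumptions.

# Hedged sketch of the cluster-allocation step, with per-cell rank
# normalization of expression values (ties averaged) before training.
import numpy as np
from scipy.stats import rankdata
from sklearn.ensemble import RandomForestClassifier

def allocate_cells(train_expr, train_labels, test_expr):
    """train_expr/test_expr: cells x highly-variable-genes matrices."""
    xtr = np.apply_along_axis(rankdata, 1, train_expr)
    xte = np.apply_along_axis(rankdata, 1, test_expr)

    # First forest: assess variable importance with 1,000 trees.
    rf1 = RandomForestClassifier(n_estimators=1000, oob_score=True,
                                 random_state=0).fit(xtr, train_labels)
    k = max(1, xtr.shape[1] // 4)
    top = np.argsort(rf1.feature_importances_)[-k:]   # top 25% of variables

    # Second forest on the filtered variables, used for classification.
    rf2 = RandomForestClassifier(n_estimators=1000, oob_score=True,
                                 random_state=0).fit(xtr[:, top], train_labels)
    print(f"out-of-bag error: {1 - rf2.oob_score_:.3f}")
    return rf2.predict(xte[:, top]), rf2.predict_proba(xte[:, top])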
During the immunoprecipitation step, 4 μl recombinant histone 2B (New England Biolabs) and 1 μl of mouse RNA (Qiagen; diluted 1/5 in IP dilution buffer) were added as carriers, followed by 7 μg of primary antibody (rabbit anti-Gata1, Abcam ab11963). Sequencing libraries were prepared using the TruSeq Kit (Illumina) for high-throughput sequencing on an Illumina HiSeq 2500, according to the manufacturer's instructions, with size selection for fragments of 150–400 bp. ChIP-seq mapping and analysis Alignment of the ChIP-seq reads to the mouse mm10 genome, quality control and peak calling were performed according to the data pipeline set out by Sanchez-Castillo et al . 54 . Peak calling was performed using MACS2 55 with P = 1 × 10⁻⁶. Post-processing using in-house scripts converted the peak coordinates to 400 bp on the basis of peak summits given in the MACS output. Coordinates of genomic regions that lie at the end of chromosomes and/or in repeat regions were discarded from the final high-confidence peak lists. PolyAPeak 56 was run in R to remove abnormally shaped peaks. Peaks were assigned to genes using an in-house script according to whether they overlapped with a known TSS or fell within 50 kb each side of a gene. In situ hybridization Whole-mount in situ hybridization for Tal1 was performed as described previously 57 . An in situ hybridization probe for Tal1 was synthesized using published sequence ( Tal1 860-1428, accession number M59764) with the DIG RNA labelling kit (Roche). Code availability All data were analysed with standard programs and packages, as detailed above. Code is available on request. Accession codes Data deposits ChIP-seq data are available at the NCBI Gene Expression Omnibus portal under accession number GSE74994 . Processed data are also available at . RNAseq data are available at Array Express under accession numbers E-MTAB-4079 and E-MTAB-4026 . Processed RNAseq data are also available at .
In the first genome-scale experiment of its kind, researchers have gained new insights into how a mouse embryo first begins to transform from a ball of unfocussed cells into a small, structured entity. Published in Nature, the single-cell genomics study was led by the European Bioinformatics Institute (EMBL-EBI) and the Wellcome Trust-MRC Cambridge Stem Cell Institute. Gastrulation is the point when an animal's whole body plan is set, just before individual organs start to develop. Understanding this point in very early development is vital to understanding how animals develop and how things go wrong. One of the biggest challenges in studying gastrulation is the very small number of cells that make up an embryo at this stage. "If we want to better understand the natural world around us, one of the fundamental questions is, how do animals develop?" says Bertie Gottgens, Research Group Leader at the Wellcome Trust - Medical Research Council Cambridge Stem Cell Institute. "How do you turn from an egg into an animal, with all sorts of tissues? Many of the things that go wrong, like birth defects, are caused by problems in early development. We need to have an atlas of normal development for comparison when things go wrong." Today, thanks to advances in single-cell sequencing, the team was able to analyse over 1000 individual cells of gastrulating mouse embryos. The result is an atlas of gene expression during very early, healthy mammalian development. "Single-cell technologies are a major change over what we've used before - we can now make direct observations to see what's going on during the earliest stages of development," says John Marioni, Research Group Leader at EMBL-EBI, the Wellcome Trust Sanger Institute and the University of Cambridge. "We can look at individual cells and see the whole set of genes that are active at stages of development, which until now have been very difficult to access. Once we have that, we can take cells from embryos in which some genetic factors are not working properly at a specific developmental stage, and map them to the healthy atlas to better understand what might be happening." To illustrate the usefulness of the atlas, the team studied what happened when a genetic factor essential for the formation of blood cells was removed. "It wasn't what we expected at all. We found that cells which in healthy embryos would commit to becoming blood cells would actually become confused in the embryos lacking the key gene, effectively getting stuck," says John. "What is so exciting about this is that it demonstrates how we can now look at the very small number of cells that are actually making the decision at the precise time point when the decision is being made. It gives us a completely different perspective on development." "What is really exciting for me is that we can look at things that we know are important but were never able to see before - perhaps like people felt when they got hold of a microscope for the first time, suddenly seeing worlds they'd never thought of," says Bertie. "This is just the beginning of how single cell genomics will transform our understanding of early development."
10.1038/nature18633
Space
Webb spots surprisingly massive galaxies in early universe
Ivo Labbe, A population of red candidate massive galaxies ~600 Myr after the Big Bang, Nature (2023). DOI: 10.1038/s41586-023-05786-2. www.nature.com/articles/s41586-023-05786-2 Journal information: Nature
https://dx.doi.org/10.1038/s41586-023-05786-2
https://phys.org/news/2023-02-webb-massive-galaxies-early-universe.html
Abstract Galaxies with stellar masses as high as roughly 10¹¹ solar masses have been identified 1 , 2 , 3 out to redshifts z of roughly 6, around 1 billion years after the Big Bang. It has been difficult to find massive galaxies at even earlier times, as the Balmer break region, which is needed for accurate mass estimates, is redshifted to wavelengths beyond 2.5 μm. Here we make use of the 1–5 μm coverage of the James Webb Space Telescope early release observations to search for intrinsically red galaxies in the first roughly 750 million years of cosmic history. In the survey area, we find six candidate massive galaxies (stellar mass more than 10¹⁰ solar masses) at 7.4 ≤ z ≤ 9.1, 500–700 Myr after the Big Bang, including one galaxy with a possible stellar mass of roughly 10¹¹ solar masses. If verified with spectroscopy, the stellar mass density in massive galaxies would be much higher than anticipated from previous studies on the basis of rest-frame ultraviolet-selected samples. Main The galaxies were identified in the first observations of the James Webb Space Telescope (JWST) Cosmic Evolution Early Release Science (CEERS) program. This program obtained multiband images at 1–5 μm with the Near-Infrared Camera (NIRCam) in a 'blank' field, chosen to overlap with existing Hubble Space Telescope (HST) imaging. The total area covered by these initial data is roughly 40 arcmin². The data were obtained from the Mikulski Archive for Space Telescopes (MAST) and reduced using the Grizli pipeline 4 . A catalogue of sources was created, starting with detection in a deep combined F277W + F356W + F444W image (see Methods for details). A total of 42,729 objects are in this parent catalogue. We selected candidate massive galaxies at high redshifts by identifying objects that have two redshifted breaks in their spectral energy distributions (SEDs): the Lyman break at λ rest = 1,216 Å and the Balmer break at λ rest of roughly 3,600 Å. This selection ensures that the redshift probability distributions are well constrained and have no secondary solutions at lower redshifts, and that we include galaxies with potentially high mass-to-light (M/L) ratios. Specifically, we require that objects are not detected at optical wavelengths, are blue in the near-infrared (F150W − F277W < 0.7), are red at longer wavelengths (F277W − F444W > 1.0) and are brighter than 27 AB magnitude in the F444W band. After visual inspection to remove obvious artefacts (such as diffraction spikes), this selection produced 13 galaxies with the sought-for 'double-break' SEDs. Next, redshifts and stellar masses were determined with three widely used techniques, taking the contribution of strong emission lines to the rest-frame optical photometry explicitly into account 5 , 6 , 7 , 8 , 9 , 10 , 11 , 12 , 13 , 14 , 15 . We use the EAZY code 16 (with extra strong emission-line templates), the Prospector-α framework 17 and five configurations of the Bagpipes SED-fitting code to explore systematics due to modelling assumptions. The seven individual mass and redshift measurements of the 13 galaxies are listed in the Methods section. We adopt fiducial masses and redshifts by taking the median value for each galaxy. We note that these masses and redshifts are not definitive and that all galaxies should be considered candidates. As shown in Fig. 1 , all 13 objects have photometric redshifts 6.5 < z < 9.1.
Six of the 13 have fiducial masses greater than 10¹⁰ M ☉ (Salpeter initial mass function (IMF)) and multiband images and SEDs of these galaxies are shown in Figs. 2 and 3 . Their photometric redshifts range from z = 7.4 to z = 9.1. The model fits are generally excellent, and in several cases clearly demonstrate that rest-frame optical emission lines contribute to the continuum emission. These lines can be so strong in young galaxies that they can dominate the broad-band fluxes redwards of the location of the Balmer break 6 , 7 , 8 , 14 , 18 , and Spitzer/IRAC detections of optical continuum breaks in galaxies at z ≳ 5 have been challenging to interpret 3 , 5 , 19 , 20 , 21 , 22 , 23 , 24 . With JWST, this ambiguity is largely resolved due to the dense wavelength coverage of the NIRCam filters and the inclusion of the relatively narrow emission-line-sensitive filter F410M (ref. 25 ), which falls within the F444W band, although the uncertainties are such that alternative solutions with lower masses may exist 14 . The brightest galaxy in the sample, 38094, is at z = 7.5 and may have a mass that is as high as M * ≅ 1 × 10¹¹ M ☉ , more massive than the present-day Milky Way. It has two nearby companions with a similar break in their optical to near-infrared SEDs, suggesting that the galaxy may be in a group. Fig. 1: Redshifts and tentative stellar masses of double-break selected galaxies. Shown in grey circles are EAZY-determined redshifts and stellar masses using emission-line enhanced templates (Salpeter IMF) for objects with SNR > 8 in the F444W band. Fiducial redshifts and masses of the bright galaxies (F444W < 27 AB) that satisfy our double-break selection are shown by the large red symbols. Uncertainties are the 16th–84th percentiles of the posterior probability distribution. All galaxies have photometric redshifts 6.5 < z < 9.1. Six galaxies are candidate massive galaxies with fiducial M * > 10¹⁰ M ☉ . Fig. 2: Images of the six galaxies with the highest apparent masses as a function of wavelength. The galaxies shown have fiducial stellar masses log( M * / M ☉ ) > 10. Each cut-out has a size of 2.4″ × 2.4″. The filters range from the 0.6 μm F606W filter of HST /ACS to the 4.4 μm F444W JWST/NIRCam filter. The galaxies are undetected in the optical filters, blue in the short-wavelength NIRCam filters and red in the long-wavelength NIRCam filters. The colour stamps show F150W in blue, F277W in green and F444W in red. Fig. 3: SEDs and stellar population model fits. a , Photometry (black squares), best-fitting EAZY models (red lines) and redshift probability distribution P ( z ) (grey filled histograms) of six galaxies with apparent fiducial masses log( M * / M ☉ ) > 10. The flux density units are f ν . Uncertainties and upper limits (triangles) are 1 σ . Fiducial best-fit stellar masses and redshifts are noted. The SEDs are characterized by a double break: a Lyman break and an upturn at more than 3 μm. Emission lines are visible in the longest wavelength bands in several cases. b , Average rest-frame SED of the six candidate massive galaxies (red dots) and the 16th–84th percentile of the running median (shaded area). The red line is the best-fit median EAZY model. Green squares and the green line show average rest-frame UV-selected galaxies at z = 8, 10 from HST + Spitzer 3 , 15 . Grey triangles show two spectroscopically confirmed galaxies at z of roughly 9 (refs. 23 , 34 , 35 ).
The double-break selected galaxies are notably redder than previously identified objects at similar redshifts. This may be due to high M/L ratios or effects that are not included in our modelling, such as active galactic nuclei (AGN) or exotic emission lines. We place these results in context by comparing them to previous studies of the evolution of the galaxy mass function to z of roughly 9. These studies are based on samples that were selected in the rest-frame ultraviolet (UV) using ultra-deep HST images, with Spitzer/IRAC photometry typically acting as a constraint on the rest-frame optical SED 3 , 15 , 26 , 27 , 28 . The bottom panel of Fig. 3 compares the average SED of the six candidate massive galaxies to the SEDs of HST-selected galaxies at similar redshifts. The galaxies we report here are much redder and the differences are not limited to one or two photometric bands: the entire SED is qualitatively different. This is the key result of our study: we show that galaxies can be robustly identified at z > 7 with JWST that are intrinsically redder than previous HST-selected samples at the same redshifts. It is likely that these galaxies also have much higher M/L ratios, but this needs to be confirmed with spectroscopy. We note that the new galaxies are very faint in the rest-frame UV (median F150W of roughly 28 AB), and previous wide-field studies with HST and Spitzer 29 of individual galaxies did not reach the required depths to find this population. The masses that we derive are intriguing in the context of previous studies. No candidate galaxies with log( M * / M ☉ ) > 10.5 had been found before beyond z of roughly 7, and no candidates with log( M * / M ☉ ) > 10 had been found beyond z of roughly 8. Furthermore, Schechter fits to the previous candidates predicted extremely low number densities of such galaxies at the highest redshifts 3 . This is shown by the lines in Fig. 4 : the expected mass density in galaxies with log( M * / M ☉ ) > 10 at z of roughly 9 was roughly 10² M ☉ Mpc⁻³, and the total previously derived stellar mass density, integrated over the range 8 < log( M * / M ☉ ) < 12, is less than 10⁵ M ☉ Mpc⁻³. If confirmed, the JWST-selected objects would fall in a different region of Fig. 4 , in the top right, as the JWST-derived fiducial mass densities are far higher than the expected values on the basis of the UV-selected samples. The mass in galaxies with log( M * / M ☉ ) > 10 would be a factor of roughly 20 higher at z of around 8 and a factor of roughly 1,000 higher at z of roughly 9. The differences are even greater for log( M * / M ☉ ) > 10.5. Fig. 4: Cumulative stellar mass density, if the fiducial masses of the JWST-selected red galaxies are confirmed. The solid symbols show the total mass density in two redshift bins, 7 < z < 8.5 and 8.5 < z < 10, based on the three most massive galaxies in each bin. Uncertainties reflect Poisson statistics and cosmic variance. The dashed lines are derived from Schechter fits to UV-selected samples 3 . The JWST-selected galaxies would greatly exceed the mass densities of massive galaxies that were expected at these redshifts on the basis of previous studies. This indicates that these studies were highly incomplete or that the fiducial masses are overestimated by a large factor. We infer that the possible interpretation of these JWST-identified 'optical break galaxies' falls between two extremes.
If the redshifts and fiducial masses are correct, then the mass density in the most massive galaxies would exceed the total previously estimated mass density (integrated down to M * = 10⁸ M ☉ ) by a factor of about two at z of roughly 8 and by a factor of roughly five at z of roughly 9. Unless the low-mass samples are highly incomplete, the implication would be that most of the total stellar mass at z = 8−9 resides in the most massive galaxies. Although extreme, this is qualitatively consistent with the notion that the central regions of present-day massive elliptical galaxies host the oldest stars in the universe (together with globular clusters), and with the finding that by z of roughly 2 the stars in the central regions of massive galaxies already make up 10–20% of the total stellar mass density at that redshift 30 . A more fundamental issue is that these stellar mass densities are difficult to realize in a standard Λ cold dark matter cosmology, as pointed out by several recent studies 31 , 32 . Our fiducial mass densities push against the limit set by the number of available baryons in the most massive dark matter halos. The other extreme interpretation is that all the fiducial masses are larger than the true masses by factors of more than 10–100. We use standard techniques and multiple methods to estimate the masses. Under certain assumptions for the dust attenuation law and stellar population age sampling (favouring young ages with strong emission lines), low masses can be produced ( Methods ). This only occurs at specific redshifts ( z = 5.6, 6.9 or 7.7; about 10% of the redshift range of the sample) at which line-dominated and continuum-dominated models produce similar F410M–F444W colours. In addition, it is possible that techniques that have been calibrated with lower redshift objects 17 are not applicable. As an example, we do not include effects of exotic emission lines or bright AGN 14 . Part of the sample is reported to be resolved in F200W (ref. 33 ), making a significant contribution from AGN less likely, but faint, red AGN are possible and would be highly interesting in their own right, even if they could lead to changes in the masses. It is perhaps most likely that the situation is in between these extremes, with some of the red colours reflecting exotic effects or AGN and others reflecting high M/L ratios. Future JWST NIRSpec spectroscopy can be used to measure accurate redshifts as well as the precise contributions of emission lines to the observed photometry. With deeper data, the stellar continuum emission can be detected directly for the brightest galaxies. Finally, dynamical masses are needed to test the hypothesis that our description of massive halo assembly in Λ cold dark matter is incomplete. It may be possible to measure the required kinematics with ALMA or from rotation curves with the NIRSpec integral field unit if the ionized gas is spatially extended 30 , 31 . Methods Observations, reduction and photometry This article is based on the first imaging taken with the NIRCam on JWST as part of the CEERS program (Principal Investigator, Finkelstein; Program Identifier, 1345). Four pointings have been obtained, covering roughly 38 arcmin² on the Extended Groth Strip HST legacy field and overlapping fully with the existing HST–Advanced Camera for Surveys (ACS) and WFC3 footprint. NIRCam observations were taken in six broad-band filters, F115W, F150W, F200W, F277W, F356W and F444W, and one medium bandwidth filter, F410M.
The F410M medium band sits within the F444W filter and is a sensitive tracer of emission lines, enabling improved photometric redshifts and stellar mass estimates of high-redshift galaxies 29 . Exposures produced by stage 2 of the JWST calibration pipeline (v.1.5.2) were downloaded from the MAST archive. The data reduction pipeline Grism redshift and line analysis software for space-based spectroscopy (Grizli 4 ) was used to process, align and co-add the exposures. The pipeline mitigates various artefacts, such as 'snow-balls' and 1/ f noise. To correct pixel-to-pixel variations, custom flat-field calibration images were created from on-sky commissioning data (program COM-1063); these images are the median of the source-masked and background-normalized exposures in each NIRCam detector. The pipeline then subtracts a large-scale sky background, aligns the images to stars from the Gaia DR3 catalogue and drizzles the images to a common pixel grid using astrodrizzle. The mosaics are available online as part of the v.3 imaging data release. Existing multi-wavelength ACS and WFC3 archival imaging from HST was also processed with Grizli. For the analysis in this paper all images are projected to a common 40 mas pixel grid. Remaining background structure in the NIRCam mosaics is due to scattered light. The background is generally smooth on small scales and was effectively removed with a 5″ median filter after masking bright sources. We use standard astropy 34 and photutils 36 procedures to detect sources, create segmentation maps and perform photometry. The procedures are like those used in previous ground- and space-based imaging surveys. Briefly, we create an inverse-variance-weighted combined F277W + F356W + F444W image and detect sources after convolution with a Gaussian of 3 pixels full-width at half-maximum (0.12″) to enhance sensitivity for point sources. Point spread functions were matched to the F444W band using photutils procedures. Photometry was performed at the locations of detected sources in all filters using 0.32″-diameter circular apertures (see the illustrative sketch below). The fluxes were corrected to total using the Kron autoscaling aperture measurement on the detection image. A second small correction was applied for light outside the aperture based on the encircled energy provided by the WebbPSF software. The final catalogue contains 42,729 sources and includes all available HST–ACS and JWST–NIRCam filters (ten bands, spanning 0.43 to 4.4 μm). Photometry for HST–WFC3 bands was also derived, but only used for zero-point testing as the HST–WFC3 images are several magnitudes shallower than NIRCam. Photometric zero-points The first JWST images were released with preflight zero-points for the NIRCam filters. The preflight estimates do not match the in-flight performance, with errors up to roughly 20% in the long-wavelength bands. This analysis uses updated in-flight calibrations that were provided by the Space Telescope Science Institute on 29 July 2022 (jwst_0942.pmap) on the basis of observations of two standard stars. The calibrations improved the accuracy of the long-wavelength photometry but introduced errors in the short-wavelength bands, with variations up to 20% between detectors, as determined from comparisons to previous HST–WFC3 photometry and analyses of stars in the Large Magellanic Cloud and the globular cluster M92 (refs. 37 , 38 ). We derived new zero-points for all short-wavelength and long-wavelength bands, for both NIRCam modules, using two independent methods.
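Before turning to the two zero-point methods, here is the aperture-photometry sketch referenced above, assuming PSF-matched images on the 40 mas pixel grid; the image array and source positions are placeholder values.

# Hedged sketch of the fixed-aperture photometry step: 0.32"-diameter
# circular apertures on the 40 mas pixel grid correspond to a 4-pixel
# radius. The image and source positions below are placeholders.
import numpy as np
from photutils.aperture import CircularAperture, aperture_photometry

pixel_scale = 0.040                      # arcsec per pixel (40 mas grid)
radius_pix = (0.32 / 2.0) / pixel_scale  # 0.32" diameter -> 4 pixel radius

image = np.zeros((100, 100))             # placeholder PSF-matched image
positions = [(50.0, 50.0), (20.5, 73.2)] # placeholder (x, y) source positions

apertures = CircularAperture(positions, r=radius_pix)
phot = aperture_photometry(image, apertures)
print(phot["aperture_sum"])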
The first method (‘GB’) uses zero-points that are based on standard stars observed by JWST in the B module and transferred to the A module using overlapping stars in the Large Magellanic Cloud. The second method (‘IL’) uses 5,000–10,000 galaxies at photometric redshifts 0.1 < z < 5 with a signal-to-noise ratio (SNR) greater than 15 from the CEERS parent catalogue and calculates the ratio between observed and EAZY model fluxes for each detector, module and photometric band. As the observed wavelengths sample different rest-frame parts of the SEDs of the galaxies, errors in the model fits can be separated from errors in the zero-points. More information on the methodology and the resulting zero-points is provided on github ( ). The methods agree very well, with differences of 3 ± 3% in all bands except F444W, where we find a difference of 8%. We use the GB values for all bands except F444W, where we take the average of the GB and IL values (multiplicative corrections 1.064 for module A and 1.084 for module B). The photometry is listed in Extended Data Table 1 . Using the fiducial zero-points, Extended Data Fig. 1 shows offsets with respect to EAZY model fluxes, split by detector, module and filter, showing only 0–3% residuals. A third independent method used colour-magnitude diagrams of stars in M92 (refs. 37 , 38 ) in the F090W, F150W, F277W and F444W bands, with reported consistency with the GB values within the uncertainties. Our adopted zero-points agree with the most recent NIRCam flux calibration (jwst_0989.pmap, October 2022) to within 4%. This paper adopts a 5% minimum systematic error (added in quadrature) for all photometric redshift and stellar population fits to account for calibration uncertainties. Finally, we compiled a sample of 450 galaxies with spectroscopic redshifts 0.2 < z < 3.8 from 3D-HST (ref. 39 ) and MOSDEF 40 to test photometric redshift performance, finding a normalized median absolute deviation of ( z phot − z spec )/(1 + z spec ) = 2.5%. Sample selection The JWST–NIRCam imaging in this paper reaches 5σ depths from 28.5 to 29.5 AB, representing an order of magnitude increase in sensitivity and resolution beyond wavelengths of 2.0 μm and allowing us to select galaxies at rest-frame optical wavelengths to z of roughly 10. To enable straightforward, model-independent reproduction of the sample we use a purely empirical selection of high-redshift galaxies based on NIRCam photometry, rather than one based on inferred photometric redshift or stellar mass. We select on a ‘double-break’ SED: no detection in the HST–ACS optical, blue in the NIRCam short-wavelength filters and red in the NIRCam long-wavelength filters, which is expected for sources at z ≳ 7 with a Lyman break and red UV-optical colours. The following colour selection criteria were applied: $${\rm{F150W}}-{\rm{F277W}} < 0.7$$ $${\rm{F277W}}-{\rm{F444W}} > 1.0$$ in addition to a non-detection requirement in the HST–ACS imaging: $${\rm{SNR}}({{\rm{B}}}_{435},{{\rm{V}}}_{606},{{\rm{I}}}_{814}) < 2$$ To ensure good SNR, we limit our sample to F444W < 27 AB magnitude and F150W < 29 AB magnitude and require SNR(F444W) > 8. We manually inspected selected sources and removed a small number of artefacts, such as hot pixels, diffraction spikes and sources affected by residual background issues or bright neighbours. This selection complements the traditional drop-out colour selection techniques based on isolating the strong Lyman 1,216 Å break as it moves through the filters.
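Because the selection is purely empirical, it can be reproduced directly from catalogue photometry. Below is a minimal Python sketch of the cuts above, assuming per-band AB magnitude and SNR arrays; all variable names are illustrative and are not those of the released catalogue.

```python
import numpy as np

# Minimal sketch of the 'double-break' selection described above, assuming
# per-band AB magnitudes and SNR arrays from a reduced catalogue.
def double_break_select(f150w, f277w, f444w, snr_acs_max, snr_f444w):
    """snr_acs_max: the maximum SNR over the B435, V606 and I814 bands."""
    blue_short = (f150w - f277w) < 0.7        # blue NIRCam short-wavelength colour
    red_long   = (f277w - f444w) > 1.0        # red NIRCam long-wavelength colour
    acs_undet  = snr_acs_max < 2              # non-detection in HST-ACS optical
    bright     = (f444w < 27) & (f150w < 29) & (snr_f444w > 8)
    return blue_short & red_long & acs_undet & bright

# Example: a two-source toy catalogue; only the first satisfies all cuts.
mask = double_break_select(np.array([27.9, 26.0]), np.array([27.4, 25.9]),
                           np.array([26.1, 25.5]), np.array([1.2, 6.0]),
                           np.array([30.0, 40.0]))
print(mask)   # [ True False]
```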
Drop-out selection is not feasible here: the HST–ACS data are not deep enough to select drop-out galaxies to limits equivalent to those of the NIRCam imaging. Screening for two breaks has been shown to be an effective redshift selection: a similar technique was used to successfully select bright galaxies at 7 < z < 9 from wide-field HST and Spitzer data 29 . A red F277W–F444W colour can be produced by large amounts of reddening by dust, evolved stellar populations with a Balmer break 24 , strong optical emission lines 10 or a combination of these. This selection produced a total of 13 sources, with a median SNR in the F444W band of roughly 30 (see Fig. 2 and Extended Data Fig. 2 ). The resulting sample is dark at optical wavelengths (2 σ upper limit of I 814 > 30.4 AB) and faint in F115W and F150W, with median magnitudes of roughly 28 AB, beyond the limits reached with HST–WFC3 except in small areas in the Hubble Ultra-Deep Field and the Frontier Fields. The absence of any flux in the ACS optical, the red I 814 − F115W > 2.5 and blue F115W–F150W of roughly 0.3 AB colours are consistent with a strong Lyman break moving beyond the ACS I 814 band at redshifts z > 6. The NIRCam F444W magnitudes are bright at around 26 AB, and the median F150W–F444W of roughly 2 AB colour is redder than any sample previously reported at z > 7 (refs. 3 , 18 , 21 , 29 , 41 ). Fits to the photometry Several methods are used to derive redshifts and stellar masses, all allowing extremely strong emission lines combined with a wide range of continuum slopes: (1) EAZY with extra templates that include strong emission lines, (2) Prospector with a strongly rising star formation history (SFH) prior that favours young ages, (3) Bagpipes to evaluate the dependence on stellar population model assumptions and the minimization algorithm. Finally, we also consider (4) a proposed template set for high-redshift galaxies with blue continua, strong emission lines and a non-standard IMF. Throughout, reported uncertainties are the 16th–84th percentiles of the probability distributions. A Salpeter 42 IMF is assumed throughout, for consistency with previous determinations of the high-redshift galaxy mass function 3 , 28 and constraints on the IMF in the centres of the likely descendants 35 , 43 , 44 , 45 . A summary of the results is presented in Extended Data Figs. 3 and 4 . EAZY The main benefits of EAZY 5 are ease of use, speed and reproducibility. EAZY fits non-negative linear combinations of templates, with redshift and the scaling of each template as free parameters. The allowed redshift range was 0–20 and no luminosity prior was applied. The standard EAZY template set (tweak_fsps_QSF_12_v3) is optimized for lower redshift galaxies. High-redshift stellar populations tend to be younger, less dusty and have stronger emission lines. We create a more appropriate template set by removing the oldest and dustiest templates ( A V > 2.5) from the standard set, keeping templates 1, 2, 7, 8, 9, 10 and 11, and adding two flexible stellar population synthesis (FSPS) templates with strong emission lines. The first has a continuum that is roughly constant in F ν with EW(Hβ+[OIII]) = 650 Å, similar to NIRSpec-confirmed galaxies 46 at z = 7–8. The second has a red continuum that is constant in F λ with EW(Hβ+[OIII]) = 1,100 Å, comparable to line strengths inferred for bright Lyman-break galaxies at z = 7–9 (ref. 29 ). Each template has an associated M/L ratio, so the template weights in the fit can be converted to a total stellar mass.
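At its core, the EAZY step described above is a non-negative least-squares fit of redshifted templates to the observed fluxes, with the best-fit weights converted to mass through per-template M/L ratios. The sketch below illustrates that operation in simplified form; it is not EAZY itself, and `template_fluxes` and `ml_ratios` are hypothetical stand-ins for quantities the code computes internally.

```python
import numpy as np
from scipy.optimize import nnls

# Simplified sketch of the core EAZY operation: at each trial redshift, fit
# the observed fluxes with a non-negative combination of template fluxes,
# then convert the winning template weights to stellar mass via per-template
# M/L ratios. `template_fluxes(z)` and `ml_ratios` are hypothetical stand-ins.
def fit_photoz(flux, err, zgrid, template_fluxes, ml_ratios):
    best = (np.inf, None, None)
    for z in zgrid:
        A = template_fluxes(z) / err[:, None]   # (n_bands, n_templates)
        b = flux / err                          # error-weighted fluxes
        w, rnorm = nnls(A, b)                   # non-negative template weights
        if rnorm**2 < best[0]:
            best = (rnorm**2, z, w)
    chi2, zbest, w = best
    mass = np.dot(w, ml_ratios)                 # weights -> mass (model units)
    return zbest, mass, chi2
```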
We fit all galaxies in the catalogue with the default EAZY template set first and then refit all galaxies at z > 7 using the new template set. The template set is available online with the photometric catalogue ( ). The EAZY redshift distribution of the sample of 13 galaxies is 7.3 < z < 9.4, with no low-redshift interlopers ( z < 6). EAZY masses span 9.2 < log( M * / M ☉ ) < 10.9. Prospector We perform a stellar population fit with more freedom than is possible in EAZY using the Prospector 17 , 47 framework, specifically the Prospector-α settings 48 and the MESA Isochrones and Stellar Tracks (MIST) stellar isochrones 49 , 50 from FSPS 51 , 52 . This model includes non-parametric star formation histories, with a continuity prior that disfavours large changes in the star formation rate between time bins 53 . It uses a two-component, age-dependent dust model, allows full freedom for the gas-phase and stellar metallicity, and includes nebular emission in which the nebulae are self-consistently powered by the stellar ionizing continuum from the model 54 . The sampling was performed using the dynesty 55 nested sampling algorithm. We also adopt two new priors that disfavour high-mass solutions: first, a mass function prior on the stellar mass, adopting the observed z = 3 mass function for z > 3 solutions 56 , and second, a non-parametric SFH prior that favours rising SFHs in the early universe and falling SFHs in the late universe, following expectations from the cosmic star formation rate density. These are described in detail in ref. 57 . The masses from Prospector are consistent within the uncertainties with the EAZY masses, with a mean offset of log( M *Prosp / M *EAZY ) = 0.1 for objects with masses greater than 10 10 M ☉ . The most massive objects as indicated by EAZY are also the most massive in the Prospector fits. Prospector also provides ages and star formation rates. The star formation rates are generally not well constrained in the fits owing to the lack of infrared coverage. The ages are also uncertain and depend strongly on the adopted prior. For a constant SFH prior, Prospector finds typical ages of roughly 0.3 Gyr, with substantial Balmer breaks, whereas for strongly rising SFHs Prospector finds a median mass-weighted age of 34 Myr, with strong emission lines and large amounts of reddening ( A V ≅ 1.5). This is reminiscent of the age-dust degeneracy that is well known at lower redshift. The stellar masses do not vary significantly between these two priors. The red SEDs (Fig. 3 ) require high M/L ratios for a large range of best-fit stellar population ages, as is well known from studies of nearby galaxies 58 . Bagpipes Fits with the Bayesian analysis of galaxies for physical inference and parameter estimation (Bagpipes 59 ) software are also considered. Compared to Prospector, Bagpipes uses the Bruzual and Charlot stellar population models 60 and the MultiNest sampling algorithm 61 . Although Bagpipes does not cover new parameter space compared to Prospector, it allows us to evaluate how sensitive the masses are to the adopted stellar population model and fitting technique. Furthermore, Bagpipes is relatively fast, so we can use it to explore the effect of modelling assumptions and investigate the role of systematic uncertainties in the derived redshifts and stellar masses. We focus on the attenuation law, SFH, age sampling priors and the treatment of the SNR, as detailed in the five model variants below.
(1) Bagpipes_csf_salim: baseline model with constant SFH, redshift between 0 and 20, age_max from 1 Myr to 10 Gyr, metallicity between 0.01 and 2.5 Solar, ionization parameter −4 < log( U ) < −2 and a Salim 62 attenuation law with 0 < A V < 4, adopting log priors in metallicity and ionization parameter and uniform priors in redshift, age and A V . The Salim law varies between a steep Small Magellanic Cloud (SMC)-like extinction law at low optical depth and a flat Calzetti-like dust law at large optical depth, in accordance with empirical studies 62 and theoretical expectations 63 . The Bagpipes masses and redshifts are similar on average to those of EAZY and Prospector, with a mean offset of log( M *A / M *EAZY ) = 0 for the massive sample. (2) Bagpipes_rising_salim: this model is not intended to search for the best fit in a wide parameter space but rather in a restricted space, to increase the emission-line contribution to the reddest filter, F444W, and decrease the stellar masses. The model is restricted to rising star formation rates at high redshift (delayed τ > 0.5 Gyr) and redshifts z < 9.0 to force the Hβ+[OIII] complex to fall within the F444W filter. The fits show strong emission lines, low ages (median roughly 30 Myr) and high dust content (median A V ≅ 1.7). Even with these restrictions, the mean stellar mass agrees well with the baseline (mean log( M *B / M *A ) = −0.1 for objects with masses greater than 10 10 M ☉ ). (3) Bagpipes_csf_salim_logage: like the model in (1) but with a logarithmic age prior, which is heavily weighted towards very young ages. For the five reddest, most massive galaxies in (1) the results are unchanged, whereas six other galaxies are now placed at significantly lower masses (inconsistent with model (1), given the uncertainties), including 14,924 (from log( M * / M ☉ ) = 10.1 to 8.7). The P ( z ) of these lower mass solutions is clustered in narrow spikes at z = 5.6, 6.9 and 7.7, where the F410M filter cannot distinguish between strong lines and continuum SEDs (Extended Data Figs. 5 and 6 ). (4) Bagpipes_csf_salim_logage_snr10: to test whether the fit in (3) is driven by the high SNR in the long-wavelength filters (which would otherwise dominate the fits), we impose an error floor of 10% on the photometry, which roughly balances the SNR across all NIRCam bands. As JWST is still in the early days of calibration, some limit on the SNR is prudent. The SNR-limited fits result in high-mass solutions for 11 out of 13 galaxies. Notably, the uncertainties on the stellar mass do not encompass the low-mass solution from (3), indicating that detailed assumptions on the treatment of the SNR can introduce systematic changes. (5) Bagpipes_csf_smc_logage: SMC extinction is often used in modelling high-redshift galaxies 14 . Our Bagpipes modelling uses Salim-type dust that includes SMC-like extinction at low optical depth, but it is useful to evaluate fits that are restricted to a steep extinction law in combination with a logarithmic age prior favouring young ages. The results are different from any of the modelling above: ten out of 13 galaxies show very low stellar masses (in the range 10 8 M ☉ –10 9 M ☉ ) in combination with extremely young ages (1–5 Myr). Another notable aspect is that these fits do not match the blue part of the SED well (NIRCam short-wavelength F115W, F150W, F200W) and seem driven by the high SNR in the NIRCam long-wavelength filters (Extended Data Fig. 3 ). Most fits have significantly worse χ 2 than the high-mass fits (EAZY, Prospector, Bagpipes (1)–(4)).
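The error floor used in model (4) (and the 5% systematic floor adopted for all fits) amounts to adding a fractional term in quadrature, which caps the effective SNR of the brightest bands. A minimal sketch with hypothetical numbers:

```python
import numpy as np

# Sketch of a fractional error floor added in quadrature, as in Bagpipes
# model (4) (10%) and the global 5% systematic floor. The fluxes and
# uncertainties below are hypothetical values in arbitrary units.
def apply_error_floor(flux, err, floor=0.10):
    return np.sqrt(err**2 + (floor * np.abs(flux))**2)

flux = np.array([0.5, 2.0, 40.0])
err = np.array([0.1, 0.2, 0.4])              # raw SNR = 5, 10, 100
print(flux / apply_error_floor(flux, err))   # effective SNR ~ 4.5, 7.1, 10
```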
In conclusion, the derived masses depend on the assumed attenuation law, the parameterization of ages and the treatment of photometric uncertainties. Together, these aspects can produce lower redshifts and lower masses by up to factors of 100 in ways that are not reflected by the random uncertainties; different assumptions can therefore change the stellar masses and redshifts systematically, and the formal uncertainties are probably underestimated. Although neither the high- nor the low-mass models can be excluded with the data available now, two features suggest the ultra-young, low-mass solutions are less plausible. First, whereas 1–5 Myr ages are formally allowed, the galaxy would not be causally connected: 10 8.5 M ☉ of star formation would have had to start spontaneously on timescales less than a dynamical time (although dynamical times are uncertain until velocity dispersions and corresponding sizes are measured). In addition, the probability of catching most galaxies at that precise moment is low, given the roughly 200 Myr search window at z = 7–9. It would suggest there are more than 40 older and more massive galaxies for every galaxy in our sample. Second, the P ( z ) of the low-mass solutions are extremely narrow and concentrated at nearly discrete redshifts z = 5.6, 6.9 and 7.7 (for example, 38094 at z = 6.93 ± 0.01). Here strong Hα and Hβ+[OIII] transition between the overlapping F356W, F410M and F444W filter edges (Extended Data Fig. 5 ). A single line can contribute to several bands (for example, [OIII]5007 at z = 6.9), with great flexibility due to the rapidly varying transmission at the filter edges. The result is that line- and continuum-dominated models are degenerate due to undersampling of the SED and the resulting aliasing, but only at specific redshifts. Although finding one 5 Myr galaxy exactly in this narrow window could be luck, we find that ten out of 13 galaxies can only be fit with low-mass, ultra-young models at these discrete redshifts z = 5.6, 6.9 and 7.7. Such an age and P ( z ) distribution for the sample, at precisely the redshifts where this fortuitous overlap between filters occurs (roughly less than 8% of the redshift range between z = 5–9), is implausible. To rule out that the spiked nature of the P ( z ) is the result of our double-break selection, we perform simple simulations. We take random draws from the posteriors of line-dominated model E, redshift the models to a uniform distribution between z = 4 and 10, perturb with the observational errors and apply our double-break selection criterion to the simulated photometry (Extended Data Fig. 6 ). These simulations show that even if the sample were line dominated with ages less than 5 Myr, the redshift distribution should be different (not spiked), indicating that the low-mass fits suffer from aliasing. By contrast, the P ( z ) of high-mass model B is broadly self-consistent with the selection function based on the model B fits. The likely reason that this effect primarily occurs with an SMC extinction law is its strong wavelength dependence (steep in the far-ultraviolet, flatter in the optical). For the sample in this paper, fits with SMC extinction have difficulty reproducing the overall (rest-optical) red SED shape. This can be clearly seen in Extended Data Fig. 3 , where the SMC-based fits have strongly ‘curved’ continua, which are generally too steep in the rest-UV and too flat in the rest-optical (F356W, F410M and F444W bands), requiring strong emission lines at specific redshifts to produce the red colours.
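The selection simulation described above reduces to drawing model photometry at uniform redshifts, perturbing it with the observational errors and reapplying the double-break cuts. A simplified sketch follows; `toy_mags` is a hypothetical stand-in for photometry synthesized from the model E posterior draws.

```python
import numpy as np

rng = np.random.default_rng(0)

# Sketch of the aliasing test: draw sources at uniform 4 < z < 10, perturb
# model photometry with errors and reapply the double-break colour cuts.
def simulate_selection(model_mags, n=5000, sig=0.1):
    selected = []
    for zi in rng.uniform(4.0, 10.0, n):
        m = {band: mag + rng.normal(0.0, sig)
             for band, mag in model_mags(zi).items()}
        if (m['F150W'] - m['F277W'] < 0.7) and (m['F277W'] - m['F444W'] > 1.0):
            selected.append(zi)
    return np.array(selected)  # compare its histogram with the spiked P(z)

def toy_mags(z):
    # Flat toy SED with a fixed red long-wavelength colour (illustrative only).
    return {'F150W': 27.5, 'F277W': 27.2, 'F444W': 26.0}

zsel = simulate_selection(toy_mags)  # here the recovered redshifts are not spiked
```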
FSPS-hot model For completeness we also consider the recently proposed ‘fsps-hot’ models 64 , which consist of templates with blue continua, strong emission lines and a modified, extremely bottom-light IMF that produces lower masses. Such an IMF is proposed to be appropriate for the extreme conditions that might be expected in high-redshift galaxies. For ten out of 13 galaxies (including all massive, greater than 10 10 M ☉ sources), the fsps-hot template set provides poorer fits to the photometry than the fiducial template set (median Δ χ 2 = 31), due to the lack of red templates. The fsps-hot set places nine out of 13 galaxies at a nearly discrete redshift z = 7.7 with very small uncertainties σ ( z ) = 0.05, reminiscent of the spiked distribution found earlier for Bagpipes model (5). The blue template set can only produce red colours if strong emission lines are placed at specific redshifts. Because the fits are poor overall and no further insight is gained, we do not consider these masses further, to avoid confusion due to adopting vastly different IMFs. The extremely bottom-light IMF, with suppression of (invisible) low-mass stars, is untestable with photometric data. Fiducial redshifts and stellar masses The results of all methods are shown graphically in Extended Data Fig. 4 . Most methods explored produce good fits and consistent masses and redshifts. Rather than favour one method over the others, we derive fiducial masses and redshifts for each object by taking the median of the EAZY (1), Prospector (2) and five Bagpipes (3–7) results for each galaxy. As discussed in the main text, the consistency between the various methods may largely indicate a consistency in underlying assumptions. Different assumptions can change the stellar masses and redshifts systematically in ways that are not reflected by the random uncertainties. The fiducial redshifts and masses are listed in Extended Data Table 2 . Furthermore, we do not consider contributions from exotic emission-line species nor include AGN templates in the fits 14 . All objects in this paper should be considered candidate massive galaxies, to be confirmed with spectroscopy. Lensing A potential concern is that the fluxes (and therefore the masses) of some or all of the galaxies are boosted by gravitational lensing. No galaxy is close to the expected Einstein radius of another object. The bright galaxy that is 1.2 ′′ to the southwest of 38094 has z grism ≅ 1.15 and log( M * / M ☉ ) ≅ 10.63 (object number 28717 in the 3D-HST AEGIS catalogue 23 ), and an Einstein radius (roughly 0.4 ′′ ) that is 0.3× the distance to 38094. If we assume that the mass profile of the lensing galaxy is an isothermal sphere, then the magnification is 1/(1 − θ E / θ ), where θ is the separation from the foreground lens and θ E is the Einstein radius. This would indicate a relatively modest −0.15 dex correction to the stellar mass. We apply this correction when calculating densities in Fig. 4 . Volume Stellar mass densities for galaxies with M * > 10 10 M ☉ are calculated by grouping the galaxies in two broad redshift bins (7 < z < 8.5 and 8.5 < z < 10). At z of roughly 8.5, the Lyman break moves through the F115W filter, allowing galaxies to be separated into the two bins. The cosmic volume is estimated by integrating between the redshift limits over 38 arcmin 2 , making no corrections for contamination or incompleteness. The key result is driven by the most massive galaxies.
Any incompleteness would increase the derived stellar mass densities, whereas contamination would decrease them. Cosmic variance is about 30%, calculated using a web calculator 5 , 65 . The error bars on the densities are the quadratic sum of the Poisson uncertainty and cosmic variance, with the Poisson error dominant. The volume estimate is obviously simplistic, but the colour selection function (Extended Data Fig. 6 ) suggests that most of the sample should lie between 7 < z < 10. A more refined treatment does not seem warranted given that the main (orders of magnitude) uncertainty in our study is the interpretation of the red colours of the galaxies. Data availability The JWST data are available in the MAST ( ), under program ID 1345. Photometry, the EAZY template set, and the fiducial redshifts and stellar masses of the sources presented here are available at . Code availability Publicly available codes and standard data reduction tools in the Python environment were used: Grizli 4 , EAZY 5 , astropy 63 , photutils 64 and Prospector 17 , 36 , 37 .
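For readers who wish to reproduce the volume bookkeeping in Methods, a minimal sketch follows. It assumes an astropy Planck18 cosmology (the paper's exact cosmological parameters are not restated here) and includes the isothermal-sphere magnification correction applied to 38094.

```python
import numpy as np
import astropy.units as u
from astropy.cosmology import Planck18

# Minimal sketch of the volume estimate in Methods, assuming a Planck18
# cosmology; the 8.5 < z < 10 bin and 38 arcmin^2 area are from the text.
area = (38 * u.arcmin**2).to(u.sr)
fsky = (area / (4 * np.pi * u.sr)).decompose()
vol = fsky * (Planck18.comoving_volume(10.0) - Planck18.comoving_volume(8.5))
print(vol.to(u.Mpc**3))

# Isothermal-sphere magnification for 38094 (theta_E/theta of roughly 0.3):
mu = 1.0 / (1.0 - 0.3)          # magnification 1/(1 - theta_E/theta) ~ 1.43
print(round(-np.log10(mu), 2))  # ~ -0.15 dex correction to log stellar mass
```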
The James Webb Space Telescope has spotted six massive galaxies that emerged not long after the Big Bang, a study said Wednesday, surprising scientists by forming at a speed that contradicts our current understanding of the universe. Since becoming operational last July, the Webb telescope has been peering farther than ever before into the universe's distant reaches—which also means it is looking back in time. For its latest discovery, the telescope spied galaxies from between 500 and 700 million years after the Big Bang 13.8 billion years ago, meaning the universe was under five percent of its current age. Webb's NIRCam instrument, which operates in near-infrared wavelengths invisible to the naked eye, observed the six galaxies in a little-known region of the sky, according to a study published in the journal Nature. Two of the galaxies had previously been spotted by the Hubble Space Telescope but were so faint in those images that they went unnoticed. These six new "candidate galaxies", so-called because their discovery still needs to be confirmed by other measurements, contain many more stars than scientists expected. One galaxy is even believed to have around 100 billion stars. That would make it around the size of the Milky Way, which is "crazy," the study's first author Ivo Labbe told AFP. 'Off a cliff' It took our home galaxy the entire life of the universe for all its stars to assemble. For this young galaxy to achieve the same growth in just 700 million years, it would have had to grow around 20 times faster than the Milky Way, said Labbe, a researcher at Australia's Swinburne University of Technology. For there to be such massive galaxies so soon after the Big Bang goes against the current cosmological model, which represents science's best understanding of how the universe works. "According to theory, galaxies grow slowly from very small beginnings at early times," Labbe said, adding that such galaxies were expected to be between 10 and 100 times smaller. But the size of these galaxies "really go off a cliff," he said. What could be going on? One suspect is mysterious dark matter, which makes up a sizeable part of the universe. While much about dark matter remains unknown, scientists believe it plays a key role in the formation of galaxies. When dark matter "clumps" together into a halo, it attracts gas from the surrounding universe which in turn forms a galaxy and its stars, Labbe said. But this process is supposed to take a long time, and "in the early universe, there's just not that many clumps of dark matter," he said. 'Model is cracking' The newly discovered galaxies could indicate that things sped up far faster in the early universe than previously thought, allowing stars to form "much more efficiently," said David Elbaz, an astrophysicist at the French Atomic Energy Commission not involved in the research. This could be linked to recent signs that the universe itself is expanding faster than we once believed, he added. This subject sparks fierce debate among cosmologists, making this latest discovery "all the more exciting, because it is one more indication that the model is cracking," Elbaz said. Elbaz is one of many scientists working on the European Space Agency's Euclid space telescope, which is scheduled to launch in July to join Webb in space.
Euclid's mission is to uncover the secrets of dark matter and dark energy—and it could also help solve this latest mystery, Elbaz said. Labbe referred to the "black swan theory", under which just one unexpected event can overturn our previous understanding—such as when Europeans saw the first black swans in Australia. He called the galaxies "six black swans—if even one of them turns out to be true, then it means we have to change our theories."
10.1038/s41586-023-05786-2
Medicine
Study identifies therapeutic target for Alzheimer's disease, revealing strategy for slowing disease progression
Yuanyuan Zhao et al, ATAD3A oligomerization promotes neuropathology and cognitive deficits in Alzheimer's disease models, Nature Communications (2022). DOI: 10.1038/s41467-022-28769-9 Journal information: Nature Communications
https://dx.doi.org/10.1038/s41467-022-28769-9
https://medicalxpress.com/news/2022-04-therapeutic-alzheimer-disease-revealing-strategy.html
Abstract Predisposition to Alzheimer’s disease (AD) may arise from lipid metabolism perturbation; however, the underlying mechanism remains elusive. Here, we identify ATPase family AAA-domain containing protein 3A (ATAD3A), a mitochondrial AAA-ATPase, as a molecular switch that links cholesterol metabolism impairment to AD phenotypes. In neuronal models of AD, the 5XFAD mouse model and post-mortem AD brains, ATAD3A is oligomerized and accumulated at the mitochondria-associated ER membranes (MAMs), where it induces cholesterol accumulation by inhibiting gene expression of CYP46A1, an enzyme governing brain cholesterol clearance. ATAD3A and CYP46A1 cooperate to promote APP processing and synaptic loss. Suppressing ATAD3A oligomerization by heterozygous ATAD3A knockout or pharmacological inhibition with DA1 restores neuronal CYP46A1 levels, normalizes brain cholesterol turnover and MAM integrity, suppresses APP processing and synaptic loss, and consequently reduces AD neuropathology and cognitive deficits in AD transgenic mice. These findings reveal a role for ATAD3A oligomerization in AD pathogenesis and suggest ATAD3A as a potential therapeutic target for AD. Introduction Alzheimer’s disease (AD) is the most common age-dependent neurodegenerative disease with unknown etiology. AD is characterized by the accumulation of amyloid deposition, neurofibrillary tangles, synaptic loss, and progressive cognitive decline 1 . Since the discovery of AD over 100 years ago, the underlying mechanisms of cellular damage and cognitive deficits have remained elusive. As a result, current AD therapies are poorly effective and limited to acetylcholine or N-methyl-D-aspartate glutamatergic mechanisms that provide only mild symptomatic benefit and fail to slow disease progression 2 . Thus, the identification of novel therapeutic targets is imperative for developing disease-modifying therapies. Mitochondria and the endoplasmic reticulum (ER) are highly interconnected. They physically interact to form specific microdomains called mitochondria-associated ER membranes (MAMs), where the outer mitochondrial membrane is close to the ER (i.e., within 10–30 nm) 3 . The MAMs are involved in many key metabolic functions 4 , including cholesterol metabolism 5 , lipid synthesis and trafficking 6 , mitochondrial dynamics 7 , calcium homeostasis 8 , and autophagy 9 . All these functions are altered in neurodegenerative diseases, including AD. Indeed, the integrity of the MAMs is significantly impaired in AD animal models and patients, manifesting as a hyperconnectivity of the MAMs 8 , 10 . The MAM-resident proteins inositol 1,4,5‐trisphosphate receptor (IP3R) and long-chain acyl-CoA synthetase (FACL4) increase in various AD experimental models and the postmortem brains of AD patients 11 , 12 . Polymorphisms in mitofusin 2 and sigma non-opioid intracellular receptor 1 (SigmaR1), two MAM proteins, correlate with the risk of developing AD 13 , 14 . Moreover, the amyloid precursor protein (APP) processing γ-secretases, presenilin-1 and presenilin-2, are highly enriched in the MAMs relative to other cell compartments, such as the plasma membrane, mitochondria, and ER 10 . These findings highlight the role of MAMs in amyloidogenesis. In addition, the ɛ4 allele of apolipoprotein E, the most common genetic risk factor of late-onset AD, upregulates MAM activity 15 . Thus, perturbed MAMs are a key event in AD pathogenesis and may serve as a common convergent neurodegenerative mechanism 16 .
However, the factors that induce MAM hyperconnectivity in AD are poorly understood, and whether manipulation of impaired MAMs affects AD progression has not been explored. Perturbations in lipid homeostasis are another feature of AD. Accumulation of cholesterol has been observed in senile plaques and affected brain areas of AD patients 17 , and is associated with region-specific loss of synapses 18 . A growing number of animal studies have consistently demonstrated that hypercholesterolemia leads to dysfunction of the cholinergic system, cognitive deficits, and amyloid and tau pathology 19 , 20 , all of which strongly support a role for cholesterol disturbance in AD. In familial and sporadic AD subjects, increased cholesterol esters can be detected in the lipid raft-like MAMs 10 . Hyperactivity of MAM tethering causes cholesterol accumulation and synaptic loss and is associated with cognitive deficits 21 . In addition, the cleaved product of APP (i.e., C99) accumulates at MAMs, where it impairs mitochondrial bioenergetics, disrupts cellular lipid homeostasis, and causes alterations in membrane lipid composition commonly observed during AD pathogenesis 22 , 23 . Despite these findings, the mechanism that links MAM impairment, cholesterol accumulation, and amyloidogenesis in AD remains elusive. ATPase family AAA-domain containing protein 3A (ATAD3A) is a nuclear-encoded mitochondrial membrane protein that belongs to the AAA + -ATPase protein family. ATAD3A has a unique structure, with a C-terminus that includes a conserved ATPase domain and is located in the mitochondrial matrix, and an N-terminus associated with the MAMs via its proline-rich motif 24 , 25 . ATAD3A can regulate mitochondrial dynamics and maintain mitochondrial DNA (mtDNA) stability 25 , 26 , 27 . MAMs are a specialized subdomain of the ER with lipid raft features, rich in cholesterol and sphingomyelin 28 . Because of its unique localization on the MAMs, ATAD3A may regulate cholesterol trafficking through an unknown mechanism 26 . While global knockout of ATAD3A is embryonic lethal 29 , selective loss of ATAD3A in mouse skeletal muscle disrupts mtDNA integrity and impairs cholesterol trafficking 30 . Thus, by connecting two subcellular organelles (the mitochondria and ER) via the MAMs, ATAD3A simultaneously regulates mitochondrial structural integrity and cholesterol metabolism. The dysregulation of both these processes is observed in the early stage of AD. Patients deficient in ATAD3A develop neurodegenerative conditions associated with axonal neuropathy 31 , elevated free cholesterol, decreased expression of genes involved in cholesterol metabolism 26 , and spastic paraplegia 32 . More recently, we reported that in the fatal and inherited neurodegenerative condition of Huntington’s disease (HD), ATAD3A oligomerizes and accumulates at the contact sites of mitochondria and induces mitochondrial fragmentation, mitochondrial genome instability, and bioenergetic failure 27 . Moreover, blocking ATAD3A oligomerization by DA1, a peptide inhibitor, reduces HD pathology in various HD models 27 . Thus, ATAD3A may play an important role in the initiation and progression of neurodegeneration. However, whether ATAD3A is activated in AD and its exact roles in MAM hyperconnectivity and cholesterol disturbance underlying AD are unknown. In this study, we report that ATAD3A oligomerization increased at the MAMs in various AD disease models and the postmortem brains of AD patients.
This aberrant oligomerization of ATAD3A induced AD-like hyperconnectivity of MAMs and impaired neuronal cholesterol turnover by inhibiting CYP46A1 (Cytochrome P450 Family 46 Subfamily A Member 1) gene expression, which, in turn, promoted APP processing and synaptic loss. Notably, suppression of ATAD3A oligomerization by either heterozygous knockout or pharmacological inhibition in AD mice enhanced MAM integrity and cholesterol metabolism, suppressed APP processing, mitigated synaptic loss, and ultimately reduced AD-associated neuropathology and cognitive deficits. Thus, our results revealed that ATAD3A acts as a signaling node regulating MAM integrity to maintain cholesterol homeostasis and neuronal functions. Our findings also highlighted a potential therapeutic strategy for slowing AD progression by manipulating aberrant ATAD3A oligomerization. Results ATAD3A oligomerization increases in AD models To investigate the molecular involvement of ATAD3A in AD, we first carried out a computational analysis of the priority of ATAD3A in AD phenotypes, genes, and pathways by performing a virtual screening of a total of 10,072 prioritized disease phenotypes and 23,499 prioritized genes. We prioritized biomedical entities using a context-sensitive network-based ranking algorithm. The data mining showed that ATAD3A was closely associated with AD-specific phenotypes and AD-associated genes, ranking in the top 20.82% and 14.49%, respectively, both significantly higher than random ranking ( p < 0.0001; Supplementary Fig. 1a–c ). The top-ranked pathways for ATAD3A were related to protein metabolism, gene transcription, immune response, and neurodegeneration (Supplementary Fig. 1d ). These data suggested that ATAD3A could be involved in the development of AD pathology. We recently reported that ATAD3A forms oligomers (mainly dimers) under pathological conditions, showing a gain-of-function that promotes neuropathology in HD models 27 . To determine the change in ATAD3A in AD, we first assessed ATAD3A oligomerization in various AD experimental models. Under non-reducing conditions (i.e., in the absence of β-mercaptoethanol, β-ME), the levels of ATAD3A oligomers increased in a time- and dose-dependent manner in immortalized mouse hippocampal HT-22 neurons and Neuro2a neuroblastoma cells exposed to oligomeric Aβ 1–42 peptide (Fig. 1a , Supplementary Fig. 2a ), and in toxic Aβ-treated mouse primary cortical neurons (Fig. 1b ). In parallel, we confirmed an enhancement of ATAD3A oligomers in stable APP wildtype (APP wt )- and APP Swedish mutant (APP swe )-expressing Neuro2a cells in the presence of a chemical cross-linker (bismaleimidohexane, BMH), with a greater increase observed in the APP swe -expressing cells (Fig. 1c ). Consistently, ATAD3A oligomers increased in the total protein lysates from the postmortem hippocampus of AD patients under non-reducing conditions (Fig. 1d ). ATAD3A oligomers were also elevated in the cortex, hippocampus, and thalamus of 5XFAD AD mice, but not in other brain regions (Fig. 1e , Supplementary Fig. 2c ), consistent with region-specific Aβ aggregation and human APP expression 33 . There was no change in ATAD3A ATPase activity in the cortex of 5XFAD mice relative to wildtype (WT) mice (Supplementary Fig. 2d ). Overexpression of the ATAD3A ATPase-dead mutant ATAD3A-K358E-Flag 32 also had no effect on ATAD3A oligomerization (Supplementary Fig. 2e ). Thus, ATAD3A oligomer formation is not associated with the enzymatic activity of the protein. Fig.
1: Aberrant ATAD3A oligomerization in AD models. a HT-22 neuronal cells were treated with oligomeric Aβ 1–42 peptides (1 µM). n = 6 independent experiments, One-way ANOVA with Tukey’s multiple comparisons test. b Primary mouse cortical neurons were treated with Aβ 1–42 peptides (1 µM, 12 h). ATAD3A protein levels were determined by western blotting (WB) with anti-ATAD3A antibody in the presence or absence of β-mercaptoethanol (βME). n = 3 independent experiments. c Stable APP wt - and APP swe -expressing Neuro2a cells were treated with the crosslinker BMH (1 mM) or DMSO control for 20 mins and then subjected to WB with an anti-ATAD3A antibody. n = 4 independent experiments. d ATAD3A oligomers were analyzed under non-reducing conditions in total lysates from postmortem hippocampus from AD patients by WB. Normal subjects (Nor): n = 6; AD patients: n = 10. e Total lysates from the cortex of 3-month-old 5XFAD mice or age-matched WT mice were analyzed by WB in the presence or absence of β-ME ( n = 7 mice/group). Representative blots from at least three independent experiments are shown in a – e . The histograms in a – e show the density of ATAD3A oligomers relative to total ATAD3A levels in the presence of β-ME. f Postmortem hippocampus sections from normal subjects and AD patients were stained with anti-ATAD3A antibodies. The intensity of the ATAD3A staining was quantified ( n = 5 individuals/group). g Postmortem cortex sections from normal subjects and AD patients were stained with anti-ATAD3A and anti-NeuN antibodies ( n = 5 individuals/group). The intensity of ATAD3A staining in NeuN + cells was quantified. DAPI was used to label nuclei. Brain sections from 3-month-old WT and 5XFAD mice were stained with anti-ATAD3A and anti-NeuN antibodies. The ATAD3A immunodensity in NeuN + cells in the h cortex ( n = 6 mice/group) or i subiculum ( n = 5 mice/group) was quantified. j Postmortem cortex sections from normal subjects and AD patients (left) and brain sections from 3-month-old WT and 5XFAD mice (right) were stained with anti-ATAD3A and anti-APP antibodies. Human subject information is shown in Supplementary Fig. 2b . The data are presented as the mean ± SEM. Data in b – i were compared with the unpaired Student’s t test (two-tailed). Full size image Immunohistochemical analysis revealed a higher ATAD3A staining in the postmortem hippocampus of AD patients than in normal subjects (Fig. 1f , Supplementary Fig. 2g ). Moreover, we observed a significant increase in ATAD3A immunodensity in neurons immunopositive for anti-NeuN antibodies in the postmortem cortex of AD patients compared to normal subjects (Fig. 1g ). The increased ATAD3A immunodensity in NeuN-immunopositive cells was consistently observed in cortical layer IV–V, the subiculum, and the hippocampus of 3-month-old 5XFAD AD mouse brains (Fig. 1h, i , Supplementary Fig. 2h ). In addition, ATAD3A was enriched in APP-immunopositive cells of the postmortem cortex of AD patients and mice (Fig. 1j ). The mRNA and total protein levels of ATAD3A were comparable in 3-month-old WT and 5XFAD mouse brains (Supplementary Fig. 2i, j ). Thus, the elevated immunodensity of ATAD3A in AD patients and mouse brains is likely due to increased ATAD3A oligomerization, consistent with our previous observation 27 . Collectively, our data demonstrate an aberrant increase in ATAD3A oligomerization during the manifestation of AD, which supports our computational analysis results. 
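For reference, the statistics quoted in the figure legends (one-way ANOVA followed by Tukey's multiple-comparisons test) can be reproduced with standard Python tools. A minimal sketch follows, using hypothetical densitometry values rather than data from the study.

```python
import numpy as np
from scipy.stats import f_oneway
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# Minimal sketch of one-way ANOVA with Tukey's multiple-comparisons test,
# as cited in the figure legends. Group values are hypothetical stand-ins.
rng = np.random.default_rng(1)
groups = {
    'WT': rng.normal(1.0, 0.10, 7),        # e.g. relative band densities
    '5XFAD': rng.normal(1.8, 0.20, 7),
    '5XFAD_KO': rng.normal(1.1, 0.15, 7),  # hypothetical rescue group
}

F, p = f_oneway(*groups.values())          # omnibus test across the groups
print(f"ANOVA: F = {F:.2f}, p = {p:.2g}")

values = np.concatenate(list(groups.values()))
labels = np.repeat(list(groups.keys()), [len(v) for v in groups.values()])
print(pairwise_tukeyhsd(values, labels))   # pairwise comparisons with 95% CIs
```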
ATAD3A accumulates at the MAMs and impairs MAM integrity in AD models We and others previously showed that ATAD3A localizes to mitochondria-associated contact sites and may be enriched in the MAMs 27 , 34 . In the present study, mitochondrial sub-compartmental fractionation from mouse brains revealed ATAD3A enrichment in the MAM fractions; ATAD3A was present in the same mitochondrial fractions as VDAC and SigmaR1, two proteins that have been localized to the MAMs 35 (Supplementary Fig. 3a ). Notably, the distribution of ATAD3A to the MAM fraction was significantly enhanced in 5XFAD mice compared with WT mice (Supplementary Fig. 3a ). There was also a significant increase in ATAD3A in the MAM fractions of Neuro2a cells treated with oligomeric Aβ 1–42 (Supplementary Fig. 3a ). The physiological contact distance between the ER and mitochondria ranges between 10 and 30 nm 3 , 36 , which allows the use of an in situ proximity ligation assay (PLA) to assess ER-mitochondria tethering and the localization of proteins on the MAMs. We observed a twofold increase in the number of PLA-positive puncta in 5XFAD mouse cortex (Fig. 2a ) and postmortem AD patient cortex (Fig. 2b ) relative to control samples after staining brain sections with anti-SigmaR1 and anti-VDAC antibodies. The sizes of the PLA-positive puncta in these 5XFAD mice and AD patient brains were also larger than those of the control groups (Fig. 2a, b ). In addition, high-resolution microscopy demonstrated a higher co-localization between IP3R3 and VDAC in HT-22 cells exposed to oligomeric Aβ 1–42 peptide (Supplementary Fig. 3b ). These data suggest enhanced MAM tethering in AD models, which is in agreement with the hyperconnectivity of MAMs in AD 10 , 35 . Notably, we observed an approximately twofold increase in the number of PLA-positive puncta in the postmortem cortex of both 5XFAD mice and AD patients after staining with anti-ATAD3A and anti-FACL4 (a MAM marker) antibodies (Fig. 2a, b ). The size of the PLA-positive puncta between ATAD3A and FACL4 in the 5XFAD AD mice and postmortem brains from AD patients was also significantly increased compared to the controls (Fig. 2a, b ). These results demonstrated ATAD3A accumulation at the MAMs in the brains of AD patients and mice. Similarly, in HT-22 cells, treatment with oligomeric Aβ 1–42 peptide increased both the number and size of PLA-positive puncta when the cells were stained with anti-ATAD3A and anti-FACL4 antibodies (Fig. 2c ). The PLA-positive signals were not observed in HT-22 cells stained for ATAD3A and cytochrome c (a mitochondrial intermembrane space protein), IP3R3 (a MAM protein) or SigmaR1 and mtCO1 (a mitochondrial inner membrane protein), or FACL4 and mtCO2 (a mitochondrial inner membrane protein) (Supplementary Fig. 3c ), indicating the specificity of the PLA-positive puncta associated with ATAD3A and FACL4. Fig. 2: ATAD3A oligomerization impairs MAM integrity under AD conditions. a Brain sections from 3-month-old WT and 5XFAD mice ( n = 4 mice/group) and b postmortem cortex sections from normal subjects (Nor) and AD patients ( n = 3 subjects/group) were stained with anti-SigmaR1 and anti-VDAC antibodies or anti-ATAD3A and anti-FACL4 antibodies. Human subject information is presented in Supplementary Fig. 2b . An in situ Duolink proximity ligation assay (PLA) was performed. Histogram: the number and size of PLA-positive puncta (red). The number and size of PLA puncta signals were quantified from six separate fields of each sample.
c HT-22 cells were treated with oligomeric Aβ 1–42 peptides (5 µM) for 18 h. Control (Con): n = 6; Aβ 1–42 : n = 5. d HT-22 cells were infected with control or ATAD3A shRNA lentivirus and then treated with oligomeric Aβ 1–42 peptides (5 µM, 18 h). Cells were stained with the indicated antibodies and subjected to PLA analysis. Histogram: the number and size of PLA-positive puncta (red). n = 10 for the sh ATAD3A -1/Aβ 1–42 group, n = 8 for the other three groups. At least 200 cells/group were analyzed. e Total protein lysates from the indicated groups were analyzed by WB. Histograms: the densities of FACL4 and IP3R3 relative to actin. n = 4 for the control group (NC) and n = 7 for the sh ATAD3A group. f HT-22 cells were transfected with the indicated plasmids for 48 h. Cells were stained with the indicated antibodies and then subjected to PLA analysis. Histogram: the number and size of PLA-positive puncta (red). n = 10 for the SigmaR1/VDAC group and n = 7 for the IP3R3/VDAC group. At least 200 cells/group were analyzed. Scale bar: 10 µm. The data are presented as the mean ± SEM. Representative images and blots from at least three independent experiments are shown. The data in a – c were compared by the unpaired Student’s t -test (two-tailed), and the data in d – f were compared by one-way ANOVA with Tukey’s multiple comparisons test. Full size image Based on our observations of ATAD3A oligomerization and accumulation at MAMs in various AD models, we determined the impact of aberrant ATAD3A oligomerization on ER-mitochondria tethering, a marker of MAM integrity and activity 11 . We knocked down ATAD3A in HT-22 cells using lentiviral ATAD3A shRNAs or treated the cells with the DA1 peptide that we developed to block ATAD3A oligomerization 27 . In the presence of oligomeric Aβ 1–42 peptides, either ATAD3A downregulation or DA1 treatment significantly reduced the number of PLA-positive puncta in Aβ-treated HT-22 cells stained with anti-IP3R3 and anti-VDAC antibodies or anti-SigmaR1 and anti-VDAC antibodies compared to control groups (Fig. 2d , Supplementary Fig. 3d ). DA1 treatment also reduced Aβ-induced mitochondrial fragmentation (Supplementary Fig. 3e ). Consistent with previous studies 11 , oligomeric Aβ 1–42 increased IP3R3 and FACL4 protein levels, which was abolished by ATAD3A knockdown (Fig. 2e ). Thus, increased ATAD3A oligomerization is required for AD-associated MAM hyperconnectivity. We previously demonstrated that a truncated ATAD3A mutant in which the first 50 amino acids are removed (ATAD3A ΔN50-Flag) enhanced ATAD3A oligomerization 27 . Here, we transfected HT-22 cells with ATAD3A-WT-Flag or the truncated mutant ATAD3A-ΔN50-Flag and evaluated MAM tethering. Overexpression of ATAD3A-WT-Flag and ATAD3A-ΔN50-Flag significantly enhanced the number of PLA-positive puncta when HT-22 cells were stained with anti-SigmaR1 and anti-VDAC antibodies or anti-IP3R3 and anti-VDAC antibodies, with a higher number of PLA-positive puncta observed in cells expressing the ATAD3A-ΔN50-Flag mutant (Fig. 2f ). The expression of ATAD3A-WT-Flag or ATAD3A-ΔN50-Flag did not alter the levels of the MAM-related proteins (Supplementary Fig. 3f ), nor ATAD3A ATPase activity (Supplementary Fig. 2f ). In contrast, the expression of ATAD3A-K358E-Flag, an ATAD3A ATPase-dead mutant, did not affect MAM tethering (Supplementary Fig. 3g ), further supporting the notion that the biological effects of ATAD3A oligomers do not result from its enzymatic activity.
Collectively, our results indicate that ATAD3A oligomerization and accumulation at MAMs could induce the hyperconnectivity of ER-mitochondria tethering, reminiscent of AD-like pathology. ATAD3A haploinsufficiency reduces cognitive deficits and AD pathology in 5XFAD AD mice Next, we determined whether suppressing aberrant ATAD3A oligomerization affected AD-associated neuropathology and behavioral deficits in mice. Homozygous ATAD3A knockout is embryonically lethal in mice, but heterozygous knockout mice are normal and fertile 29 , 37 . Thus, we knocked out one ATAD3A allele from ATAD3A fl/fl mice by expressing CMV recombinase, which deleted loxP-flanked genes in all tissues (hereafter referred to as CMV; ATAD3A fl/+ ). We then generated double-mutant 5XFAD het ; CMV; ATAD3A fl/+ mice by crossing 5XFAD heterozygous mice with CMV; ATAD3A fl/+ mice (Supplementary Fig. 4a ). CMV; ATAD3A fl/+ mice and 5XFAD het ; CMV; ATAD3A fl/+ double-mutant mice were born at the expected Mendelian ratio and were indistinguishable from WT and 5XFAD littermates, suggesting a lack of overt developmental deficits. No significant differences in body weight were observed between the four genotypes (Supplementary Fig. 4b ). Western blot analysis revealed that total ATAD3A protein levels were lower in both the cortex and hippocampus of CMV; ATAD3A fl/+ and 5XFAD het ; CMV; ATAD3A fl/+ mice than in WT littermates and 5XFAD het ; ATAD3A +/+ mice (Fig. 3a ). The levels of proteins from the mitochondrial subcompartments (VDAC and Tom20, outer membrane proteins; ClpP, a matrix protein; ATPB, an inner membrane protein) were comparable in all four mouse genotypes (Fig. 3a ), suggesting that heterozygous knockout of ATAD3A did not alter mitochondrial mass. In addition, immunofluorescence staining confirmed ATAD3A downregulation in CMV; ATAD3A fl/+ and 5XFAD het ; CMV; ATAD3A fl/+ mice (Supplementary Fig. 4a ). These data confirmed specific, heterozygous ATAD3A knockout in mice. Consistent with our findings, ATAD3A oligomer levels in 3-month-old 5XFAD het ; ATAD3A +/+ mice were significantly higher than those in age-matched WT littermates. Notably, the level of ATAD3A oligomers in age-matched 5XFAD het ; CMV; ATAD3A fl/+ mice returned to the levels observed in WT littermates (Fig. 3b ). Thus, removing one copy of the ATAD3A gene reduced ATAD3A oligomerization in AD mice. Fig. 3: ATAD3A heterozygous knockout is neuroprotective in 5XFAD mice. Total protein lysates were extracted from the cortex and hippocampus of 3-month-old WT, CMV; ATAD3A fl/+ , 5XFAD het ; ATAD3A +/+ , and 5XFAD het ; CMV; ATAD3A fl/+ mice. a WB was performed. Histogram: the density of ATAD3A relative to actin ( n = 8 mice for the cortex group, n = 7 mice for the hippocampus group). b ATAD3A oligomers were analyzed by WB under non-reducing conditions (βME: β-mercaptoethanol) (ATAD3A oligomer/monomer: n = 6 mice/group; ATAD3A oligomer/actin: n = 7 mice/group). Histogram: the density of ATAD3A oligomers relative to total ATAD3A protein levels or actin. c Short-term cognitive activity was assessed in 6-month-old mice of the indicated genotypes using the Y-maze ( n = 18 mice for WT, n = 18 mice for CMV; ATAD3A fl/+ , n = 33 mice for 5XFAD het ; ATAD3A +/+ , and n = 21 mice for the 5XFAD het ;CMV; ATAD3A fl/+ group).
d Long-term cognitive activity was evaluated in 8-month-old mice of the indicated genotypes using the Barnes maze test ( n = 14 mice for WT, n = 16 mice for CMV; ATAD3A fl/+ , n = 19/18 mice for 5XFAD het ; ATAD3A +/+ in the day 5/day 12 group, and n = 21 mice for the 5XFAD het ; CMV; ATAD3A fl/+ group). Brain sections were prepared from 8-month-old WT, CMV; ATAD3A fl/+ , 5XFAD het ; ATAD3A +/+ , and 5XFAD het ;CMV; ATAD3A fl/+ mice ( n = 4 mice/group). e Brain sections were stained with anti-SigmaR1 and anti-VDAC or anti-ATAD3A and anti-FACL4 antibodies and then subjected to PLA analysis ( n = 4 mice/group). Histogram: the number of PLA-positive puncta (red). f Brain sections were stained with anti-6E10 antibody (red) to label amyloid deposits ( n = 4 mice/group). The area covered by 6E10 + Aβ plaques and the immunodensity of 6E10 in the cortex, hippocampus, and subiculum were quantified from three separate fields of each mouse. g Brain sections were stained with anti-Iba1 (green) and anti-GFAP (red) antibodies ( n = 4 mice/group). The intensities of Iba1 and GFAP were quantified from three separate fields of each mouse. Scale bar: 100 µm. Representative images and blots from at least three independent experiments are shown. All data are presented as the mean ± SEM and were compared by one-way ANOVA with Tukey’s multiple comparisons test. Full size image To assess the spatial learning and memory of 5XFAD het ;CMV; ATAD3A fl/+ mice, we performed Y-maze and Barnes maze tests with mice of all four genotypes. 5XFAD het ; ATAD3A +/+ mice had decreased short-term cognitive ability as assessed by the Y-maze at 6 months of age. In contrast, age-matched 5XFAD het ; CMV; ATAD3A fl/+ mice had an improved spontaneous alternation ratio in the Y-maze test, reaching levels similar to those observed with WT mice (Fig. 3c ). During the day 5 and day 12 assessments of the Barnes maze test, 8-month-old 5XFAD het ; ATAD3A +/+ mice took a longer time and made more errors finding the target escape box than WT and CMV; ATAD3A fl/+ mice, indicating a cognitive decline. Age-matched 5XFAD het ;CMV; ATAD3A fl/+ mice exhibited a significant reduction in the latency and number of errors on day 5, and the improved cognitive performance persisted at day 12 (Fig. 3d ; Supplementary Fig. 4c ). Consistent with a previous report 38 , 5XFAD mice were hyperactive during the open field test. 5XFAD het ;CMV; ATAD3A fl/+ mice showed a normalized total distance traveled in the open field test similar to that of the WT mice (Supplementary Fig. 4d ). These data suggest that reduced ATAD3A oligomerization improved the spatial learning and long-term memory of 5XFAD AD mice. We stained brain sections of mice from the four genotypes with anti-SigmaR1 and anti-VDAC or anti-ATAD3A and anti-FACL4 antibodies and then performed PLA to assess MAM integrity in vivo. We observed an increased number of PLA-positive puncta in the cortex of 8-month-old 5XFAD het ; ATAD3A +/+ mice, which was reduced in the age-matched 5XFAD het ;CMV; ATAD3A fl/+ mice to levels similar to those observed in WT mice (Fig. 3e , Supplementary Fig. 4e ), demonstrating normalization of MAM hyperconnectivity by the ATAD3A heterozygous knockout. To assess the amyloid aggregation featured in 5XFAD AD mice, we stained brain sections of 8-month-old mice representing the four genotypes with anti-6E10 antibody to label amyloid deposits.
There was an increased number of 6E10 + amyloid deposits and a larger area covered by amyloid plaques in the CA1 and subiculum regions of the hippocampus and cortex of 5XFAD het ; ATAD3A +/+ mice. This abnormal amyloid accumulation was reduced in age-matched 5XFAD het ;CMV; ATAD3A fl/+ mice (Fig. 3f ). Neuroinflammation is another pathological marker of AD. We showed that the immunodensities of Iba1 (a marker of microglia) and GFAP (a marker of astrocytes) were significantly reduced in the cortex of 5XFAD het ;CMV; ATAD3A fl/+ mice compared to 5XFAD het ; ATAD3A +/+ mice (Fig. 3g ), indicating a reduction of AD-associated gliosis. These data demonstrate that genetic reduction of enhanced ATAD3A oligomerization reduced neuropathology in 5XFAD AD mice. Inhibition of ATAD3A oligomerization by DA1 is neuroprotective in AD models We recently developed a peptide-based ATAD3A inhibitor, DA1, which specifically binds to the ATAD3A protein and suppresses its oligomerization under stress conditions and in experimental models of HD 27 . DA1 can pass through the blood–brain barrier of mice and is tolerated by mice during long-term treatment (ref. 27 ; Supplementary Fig. 5a–d ). In addition, FITC-conjugated DA1 fluorescence signals significantly accumulated in the brains of WT mice one day after osmotic pump implantation (Supplementary Fig. 5e ), confirming that DA1 administered subcutaneously can enter mouse brains. In cultured HT-22 cells exposed to oligomeric Aβ 1–42 peptides, the DA1 peptide abolished ATAD3A oligomerization and the elevated immunodensity signals (Supplementary Fig. 6a ), validating the target. To test the in vivo efficacy of the DA1 peptide, we subcutaneously treated homozygous 5XFAD (5XFAD homo ) mice with DA1 or the control peptide TAT (1 mg/kg/day) using an Alzet mini-pump from the age of 1.5 to 9 months (Supplementary Fig. 6b ). Compared with 5XFAD het , 5XFAD homo mice develop amyloid pathology much more rapidly, together with broader neurological phenotypes, and lack gene-dosage effects 39 , making them more suitable for assessing the efficacy of DA1 treatment. Treatment with DA1 not only abolished the aberrant increase in ATAD3A oligomerization but also reduced the enhanced ATAD3A immunodensity in 5XFAD homo mice (Fig. 4a , Supplementary Fig. 6a ), confirming the inhibitory effect of DA1 in vivo. Moreover, treatment with DA1 suppressed MAM hyperconnectivity in the AD mice, as demonstrated by the reduced number of PLA-positive puncta in 6-month-old 5XFAD AD mouse brains following staining with anti-SigmaR1 and anti-VDAC or anti-ATAD3A and anti-FACL4 antibodies (Fig. 4b , Supplementary Fig. 6c ). Notably, sustained DA1 treatment significantly improved the performance of 5XFAD mice in the Y-maze test at 6 months of age (Fig. 4c ) and the Barnes maze test at 8 months of age (Fig. 4d ; Supplementary Fig. 6d ) compared to age-matched 5XFAD mice treated with the control peptide. In addition, DA1 treatment enhanced the nest-building ability of 5XFAD homo mice at 8.5 months of age, which was used as a complementary behavioral assay because it is sensitive to spatial memory and hippocampal neuronal lesions in AD (Supplementary Fig. 6e ). DA1 treatment also normalized the total traveled distance of 6-month-old 5XFAD homo mice relative to WT mice (Supplementary Fig. 6f ). The body weight of mice was comparable between the TAT- and DA1-treated WT or 5XFAD mice (Supplementary Fig. 6g ).
Importantly, sustained treatment with DA1 (seven months) had no observable effects on the behavioral status or body weight of WT mice (Fig. 4, Supplementary Fig. 6), suggesting a lack of long-term toxicity. Fig. 4: Suppression of ATAD3A oligomerization by DA1 is neuroprotective in AD mice. 5XFAD and age-matched WT mice were subcutaneously treated with either TAT or DA1 beginning at six weeks of age (1 mg/kg/day) using an Alzet pump (treatment timeline shown in Supplementary Fig. 5b). a Total protein lysates were harvested from 6-month-old mouse brains, and ATAD3A oligomerization was analyzed by WB under non-reducing conditions (βME: β-mercaptoethanol). Histogram: the density of ATAD3A oligomers relative to total ATAD3A protein levels (n = 8 mice/group). b Brain sections from 6-month-old mice were stained with anti-SigmaR1 and anti-VDAC or anti-ATAD3A and anti-FACL4 antibodies and then subjected to PLA analysis. Histogram: the number of PLA-positive puncta (red) (n = 4 mice/group). c The Y-maze test was administered to 6-month-old mice (WT/TAT and 5XFAD homo/TAT: n = 26 mice/group; WT/DA1: n = 23 mice; 5XFAD homo/DA1: n = 24 mice). d The Barnes maze test was administered to 8-month-old mice (WT/TAT: n = 16 mice; WT/DA1: n = 14 mice; 5XFAD homo/TAT: n = 13 mice; 5XFAD homo/DA1: n = 13/12 mice in the day 5/day 12 tests). e Brain sections from 6-month-old mice of the indicated groups were stained with an anti-6E10 antibody. The area covered by 6E10+ Aβ plaques and the immunodensity of 6E10 in the cortex were quantified from three separate fields per mouse (n = 4 mice/group). f Brain sections from 6-month-old mice of the indicated groups were stained with anti-Iba1 (green) and anti-GFAP (red) antibodies. The intensities of Iba1 and GFAP were quantified from four separate fields per mouse (n = 4 mice/group). Scale bar: 50 µm. g Brain sections from 6-month-old mice were stained with the Fluoro-Jade C (FJC) probe (green) and anti-NeuN antibody (red). The FJC intensity in NeuN+ cells in the cortex and hippocampus was quantified and is shown in the histograms (n = 3 mice/group). Scale bar: 20 µm. Representative images and blots from at least three independent experiments are shown. All data are presented as the mean ± SEM and were compared by one-way ANOVA with Tukey's multiple comparisons test. Immunohistochemistry showed that treatment with DA1 reduced the Aβ-covered area and Aβ immunodensity in the cortex of 6-month-old 5XFAD homo mice (Fig. 4e). The treatment also suppressed the enhanced Iba1+ and GFAP+ immunoreactivity in the cortex and hippocampus of 5XFAD homo mice (Fig. 4f, Supplementary Fig. 6h), indicating inhibition of neuroinflammation. Neuronal loss has been observed in the hippocampus and cortical layer V of 5XFAD homo AD mice, concomitant with amyloid aggregation and neuroinflammation 40, 41. We stained brain sections of 6-month-old 5XFAD homo mice with the Fluoro-Jade C (FJC) fluorescent probe, which selectively labels degenerating neurons 42, and anti-NeuN antibody. We observed FJC-positive fluorescence signals in NeuN+ cells in the hippocampal CA1 region and cortex, indicating ongoing neuronal loss (Fig. 4g). Treatment of 5XFAD homo mice with DA1 reduced the extent of neuronal degeneration: the density of FJC fluorescence signals was decreased by more than 70% compared to 5XFAD homo mice treated with control peptide TAT (Fig. 4g).
Furthermore, sustained DA1 treatment did not elicit neuroinflammatory or neurodegenerative responses in WT mice (Fig. 4; Supplementary Fig. 6). Therefore, inhibition of ATAD3A oligomerization by DA1 reduced the AD neuropathology and cognitive deficits manifested in 5XFAD mice, consistent with our observations in heterozygous ATAD3A knockout 5XFAD mice (Fig. 3; Supplementary Fig. 4). Aberrant ATAD3A oligomerization suppresses CYP46A1-mediated brain cholesterol turnover in AD models To investigate the mechanisms by which ATAD3A oligomerization mediates AD-associated neuropathology, we carried out unbiased label-free proteomic analysis on brain tissue from 5XFAD het;CMV;ATAD3A fl/+ mice and the control genotypes. We harvested the brain cortex from mice at 3 months of age, when the mice showed enhanced ATAD3A oligomerization but no obvious amyloid accumulation or cognitive deficits. Among the 2639 proteins identified in the cortical tissue, we focused on proteins that were altered in 5XFAD het;ATAD3A +/+ mice (i.e., >1.5-fold downregulation or upregulation relative to WT mice) and simultaneously modified by heterozygous knockout of ATAD3A (i.e., >1.5-fold upregulation or downregulation relative to 5XFAD het;ATAD3A +/+; Fig. 5a). A total of 774 proteins met these criteria (Fig. 5, marked with *) and were subsequently used for pathway enrichment analysis. A graphical comparison of the KEGG analysis showed that the categories "metabolic pathway" and "Alzheimer's disease pathway" ranked as the top two enriched pathways, within which the proteins assigned to the "lipid metabolic process" and "cholesterol metabolic process" were most affected (Fig. 5a, Supplementary Fig. 7a). Subsequent GO biological pathway analysis revealed that CYP46A1 overlapped between the "lipid metabolic process" and "cholesterol metabolic process" categories (Fig. 5a, Supplementary Fig. 7a). Moreover, CYP46A1 ranked as the top candidate regulated by heterozygous ATAD3A knockout in AD mice. In particular, CYP46A1 was downregulated in 5XFAD het;ATAD3A +/+ mice relative to WT mice and restored in 5XFAD het;CMV;ATAD3A fl/+ mice (Fig. 5a, Supplementary Fig. 7a). CYP46A1 is a brain-specific enzyme that catalyzes cholesterol 24-hydroxylation, the main mechanism for cholesterol removal from the brain 43. We therefore hypothesized that ATAD3A oligomerization causes AD-associated neuropathology and cognitive deficits by suppressing CYP46A1-mediated brain cholesterol metabolism.
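To make the selection criteria concrete, here is a minimal pandas sketch of this two-way fold-change filter (toy intensities and column names are hypothetical; the actual quantification used PEAKS, per the Methods):

```python
import pandas as pd

# Hypothetical label-free intensities averaged per genotype.
df = pd.DataFrame({
    "protein": ["Cyp46a1", "ProtX", "ProtY"],
    "WT": [100.0, 50.0, 80.0],
    "AD": [40.0, 45.0, 200.0],       # 5XFAD het;ATAD3A +/+
    "AD_het": [95.0, 48.0, 120.0],   # 5XFAD het;CMV;ATAD3A fl/+
})

fold = lambda a, b: a / b
# >1.5-fold up or down in AD relative to WT ...
altered_in_ad = (fold(df.AD, df.WT) > 1.5) | (fold(df.WT, df.AD) > 1.5)
# ... and >1.5-fold shifted back by the heterozygous knockout.
reversed_by_ko = (fold(df.AD_het, df.AD) > 1.5) | (fold(df.AD, df.AD_het) > 1.5)

candidates = df[altered_in_ad & reversed_by_ko]
print(candidates)  # proteins meeting both criteria, e.g. Cyp46a1 here
```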
Fig. 5: ATAD3A oligomerization inhibits cholesterol turnover in AD models. a The cortex of 3-month-old mice (n = 3 mice/group) was subjected to label-free tandem mass spectrometry. Left: the number of proteins changed in mice. Middle: KEGG database analysis of the proteins from the pool marked with *. Right: GO biological pathway analysis of the proteins enriched in the "metabolic pathway". Heat map: proteins involved in the "lipid metabolic process". Brain sections from b 8-month-old mice and d 6-month-old mice were stained with anti-CYP46A1 and anti-NeuN antibodies. The intensity of CYP46A1 in NeuN+ cells from mouse cortex was quantified from three separate fields per mouse in c (WT and 5XFAD het;ATAD3A +/+: n = 4; CMV;ATAD3A fl/+ and 5XFAD het;CMV;ATAD3A fl/+: n = 3) and e (n = 6 mice/group). Scale bar: 20 µm. The total cholesterol content was measured in the cortex of f 8-month-old mice and g 9-month-old mice (f: WT, n = 8; CMV;ATAD3A fl/+, n = 6; 5XFAD het;ATAD3A +/+, n = 8; 5XFAD het;CMV;ATAD3A fl/+, n = 8. g: WT/TAT, n = 10; all other groups, n = 9). h Left: brain sections from 9-month-old mice were stained with the filipin probe. The immunodensity of filipin+ cholesterol was quantified from three separate fields per mouse (n = 3 mice/group). Right: stable APP wt- and APP swe-expressing Neuro2a cells were treated with DA1 or TAT (1 µM/day for 4 days) and stained with the filipin probe. The intensity of filipin+ cholesterol was quantified from five separate fields per sample (n = 5 independent biological experiments). Representative images are shown in Supplementary Fig. 6c. i Plasma 24-OHC levels from 9-month-old mice were measured (WT/TAT: n = 13; WT/DA1: n = 9; 5XFAD homo/TAT: n = 12; 5XFAD homo/DA1: n = 11). j CYP46A1 mRNA was measured by qPCR (n = 6). k Left: HT-22 cells were treated with TAT or DA1 (1 µM) and Aβ 1–42 peptides (5 µM, 9 h). Right: stable APP wt-expressing Neuro2a cells were treated with TAT or DA1. CYP46A1 mRNA was measured by qPCR (n = 3). l Total cholesterol content was measured (n = 6). m The concentration of 24-OHC was measured (n = 6). All data are presented as mean ± SEM from at least three independent experiments and were compared by the unpaired two-tailed Student's t test (k, right) or one-way ANOVA with Tukey's multiple comparisons test (c–m). The CYP46A1 immunodensity was decreased in the cortex, hippocampal CA1, and subiculum of 5XFAD mouse brains, mainly in NeuN+ neuronal cells (Fig. 5b–e, Supplementary Fig. 7b), consistent with previous findings 44. In contrast, the intensity of CYP46A1 staining in NeuN+ neurons was significantly elevated in both 5XFAD het;CMV;ATAD3A fl/+ mice and 5XFAD homo mice treated with DA1 peptide (Fig. 5b–e, Supplementary Fig. 7b). The decreased protein level of CYP46A1 in 5XFAD het mouse brains was also restored by heterozygous knockout of ATAD3A (Supplementary Fig. 7b). These results validate our proteomic analysis and demonstrate that a reduction in ATAD3A oligomerization restores CYP46A1 levels in AD mice. CYP46A1 deficiency causes cholesterol accumulation in neurons due to impaired neuronal cholesterol turnover 45. ELISA analysis showed that the total cholesterol content was increased in the cortex of 5XFAD mice; this increase was significantly reduced in heterozygous ATAD3A knockout 5XFAD mice (Fig. 5f). Moreover, DA1 reduced the cholesterol content in the cortex of 5XFAD homo mice compared to that of AD mice treated with control peptide (Fig. 5g). Filipin is a fluorescent probe commonly used to monitor cholesterol deposition in cells and brain tissue 46. We observed a significant accumulation of filipin-bound cholesterol in 9-month-old 5XFAD homo mouse brains and in stable APP-expressing Neuro2a cells relative to their WT counterparts. Furthermore, inhibition of ATAD3A oligomerization by DA1 reduced this cholesterol deposition (Fig. 5h, Supplementary Fig. 7c). Conversion of brain cholesterol into 24(S)-hydroxycholesterol (24-OHC) is the primary cholesterol elimination mechanism in the brain, and the 24-OHC content has been used as a marker of brain cholesterol dysregulation in AD 47. Unlike cholesterol, 24-OHC can cross the blood–brain barrier at a high rate.
More than 90% of 24-OHC in plasma is derived from the brain, making the plasma 24-OHC concentration a useful marker for monitoring brain cholesterol turnover and neuronal CYP46A1 activity 48. We collected plasma from 9-month-old WT and 5XFAD homo mice treated with DA1 or control peptide TAT. The concentration of 24-OHC was decreased in the plasma of 5XFAD homo mice, reflecting the suppression of brain cholesterol elimination. Importantly, sustained treatment of 5XFAD homo mice with DA1 peptide restored the plasma concentration of 24-OHC to levels similar to those measured in WT mice treated with control peptide (Fig. 5i), further supporting a role for ATAD3A oligomerization in inhibiting CYP46A1 in AD mouse brains. In parallel, we profiled brain sterols in heterozygous ATAD3A knockout 5XFAD and DA1-treated 5XFAD mouse cortex using gas chromatography-mass spectrometry (GC-MS). The levels of the sterol biosynthetic intermediates (e.g., lanosterol, zymosterol, desmosterol, and lathosterol) were comparable across all experimental groups (Supplementary Fig. 7d). Thus, ATAD3A oligomerization affects brain cholesterol levels in AD models by altering cholesterol metabolism, not biosynthesis. ATAD3A oligomerization inhibits CYP46A1 at the transcriptional level Next, we set out to determine how aberrant ATAD3A oligomerization affected CYP46A1 expression. Overexpression of ATAD3A-WT-Flag or ATAD3A-ΔN50-Flag, which enhances ATAD3A oligomerization, strikingly decreased CYP46A1 mRNA and protein levels in Neuro2a cells (Fig. 5j; Supplementary Fig. 7e) but did not alter the protein levels of the mitochondrial proteins ATPB and VDAC or the MAM protein SigmaR1 (Supplementary Fig. 7e). ATAD3A-WT and ATAD3A-ΔN50 did not change the mRNA levels of CYP51A1 and HMGCS1, genes involved in cholesterol metabolism and biosynthesis, respectively 49, 50 (Supplementary Fig. 7f). In HT-22 cells exposed to oligomeric Aβ 1–42 peptide and in stable APP-expressing Neuro2a cells, inhibition of ATAD3A oligomerization by DA1 significantly enhanced CYP46A1 mRNA levels (Fig. 5k). In contrast, CYP51A1 mRNA levels were comparable between the experimental groups under the same conditions (Supplementary Fig. 7g). ATAD3A protein levels and oligomerization were not affected by CYP46A1 overexpression or by inhibition of CYP46A1 enzyme activity with voriconazole (Supplementary Fig. 7h, i). Thus, ATAD3A acts upstream of CYP46A1, and ATAD3A oligomerization selectively suppresses CYP46A1 at the transcriptional level. Overexpression of the ATAD3A-WT-Flag and ATAD3A-ΔN50-Flag variants in Neuro2a cells increased the total cholesterol content and decreased the 24-OHC content compared to cells expressing the control vector, with a more pronounced effect in ATAD3A-ΔN50-Flag-expressing cells (Fig. 5l, m, Supplementary Fig. 7j). In contrast, DA1 treatment abolished ATAD3A-WT- or -ΔN50-induced cholesterol accumulation, and overexpression of the ATAD3A-ΔCC-Flag variant, which blocks ATAD3A oligomer formation, had an effect on cholesterol content comparable to that seen in control vector-expressing cells (Supplementary Fig. 7j). These data support a direct effect of ATAD3A oligomerization on cholesterol accumulation. Furthermore, CYP46A1 overexpression reduced the cholesterol increase and corrected the 24-OHC level in both ATAD3A-WT-Flag- and -ΔN50-Flag-expressing cell lines (Fig. 5l, m). Similarly, treatment with efavirenz (EFV), an allosteric activator of CYP46A1, also abolished ATAD3A-WT- and -ΔN50-induced cholesterol accumulation (Supplementary Fig. 7k).
Thus, ATAD3A oligomerization-induced perturbation of cholesterol homeostasis depends on CYP46A1. Among the 16 genes implicated in cholesterol metabolic pathways that we examined, LDLR and ApoE mRNA levels were significantly elevated in 5XFAD mice but attenuated in both 5XFAD het;CMV;ATAD3A fl/+ mice and DA1-treated 5XFAD homo mice (Supplementary Fig. 7l). These results are in line with 24-OHC being an endogenous agonist of the nuclear liver X receptors (LXRs), which regulate LXR target gene expression (e.g., LDLR and ApoE) to balance brain cholesterol homeostasis 51. Thus, the reduction in LDLR and ApoE mRNA levels in AD mice upon ATAD3A heterozygous knockout or DA1 treatment is consistent with enhanced CYP46A1 levels. Altogether, these data support our hypothesis that loss of CYP46A1 is, at least in part, responsible for ATAD3A oligomerization-induced neuronal cholesterol accumulation in AD models. ATAD3A oligomerization promotes APP processing in a CYP46A1-dependent manner Compensation for CYP46A1 deficiency in vivo reduces amyloid deposits and improves spatial memory in AD mice 52. CYP46A1 overexpression is thought to provide neuroprotection by decreasing the cholesterol content of membrane lipid rafts and thereby reducing amyloidogenic APP processing 52. In the postmortem cortex of AD patients, we observed enlarged and swollen lipid rafts that were positive for cholera toxin B pentamer (CTxB) staining, a lipid raft marker. Moreover, the CTxB-positive lipid rafts colocalized with APP (Supplementary Fig. 8a), consistent with the notion that APP is enriched at lipid rafts for processing 53. In 5XFAD mouse brains, the intensity of the CTxB-positive lipid rafts increased in cortical layer V and colocalized with increased APP protein expression (Supplementary Fig. 8b). Heterozygous ATAD3A knockout in 5XFAD het mice or DA1-mediated suppression of ATAD3A oligomerization attenuated the CTxB immunodensity (Fig. 6a, b), suggesting normalization of the lipid rafts. Because ATAD3A intensity was increased in APP-immunopositive cells in the brains of both AD patients and 5XFAD AD mice (Fig. 1j), we examined whether APP processing was affected by aberrant ATAD3A oligomerization. Enhanced ATAD3A oligomerization mediated by overexpression of ATAD3A-WT-Flag or ATAD3A-ΔN50-Flag exacerbated production of the C99 fragment, a pathological proteolytic product of APP, in stable APP wt-expressing Neuro2a cells. In contrast, blockade of ATAD3A oligomerization by either expression of the ATAD3A-ΔCC-Flag variant or treatment with DA1 abolished C99 production (Fig. 6c), indicating a direct effect. DA1 treatment also abolished the C99 fragment in stable APP wt- and APP swe-expressing Neuro2a cells (Supplementary Fig. 8c). Furthermore, we observed a significant increase in C99 fragment levels in 5XFAD mouse brains, which was reduced by either heterozygous ATAD3A knockout or DA1 treatment (Fig. 6d). Thus, inhibition of ATAD3A oligomerization reduces APP processing in AD models, consistent with the reduced amyloid aggregation in both 5XFAD het;CMV;ATAD3A fl/+ mice and DA1-treated 5XFAD homo mice (Figs. 3, 4). Fig. 6: ATAD3A and CYP46A1 cooperate to promote APP processing. a Brain sections from 3-month-old mice of the indicated genotypes were stained with anti-CTxB (green) and anti-APP (red) antibodies.
The intensity of CTxB in the mouse cortex was quantified from three separate fields per mouse and is shown in the histogram (n = 3 mice/group). b Brain sections from 9-month-old mice of the indicated treatment groups were stained with anti-CTxB (green) and anti-APP (red) antibodies. The intensity of CTxB in the mouse cortex was quantified from three separate fields per mouse and is shown in the histogram (n = 4 mice/group). c Stable APP wt-expressing Neuro2a (N2a) cells were transfected with ATAD3A-WT-Flag, ATAD3A-ΔN50-Flag, ATAD3A-ΔCC-Flag, or control vector for 24 h, followed by treatment with DA1 or control peptide TAT for 48 h (1 µM each). WB was performed. Histogram: the densities of C99 and APP relative to actin (n = 4 independent experiments). d Total brain lysates were harvested from the cortex of 8-month-old mice of the 5XFAD het;CMV;ATAD3A fl/+ cohort or of 9-month-old mice treated with DA1. The APP proteolytic product C99 was assessed by WB (indicated by the red arrow). Histogram: the density of C99 relative to actin (n = 6 mice for the DA1-treated groups, n = 9 mice for the 5XFAD het;CMV;ATAD3A fl/+ groups). e CYP46A1 was overexpressed in stable APP wt- and APP swe-expressing Neuro2a cells, which were then treated with DA1 or control peptide TAT (1 µM for 3 days). WB was performed. Ctl and CYP in the images indicate control vector and CYP46A1, respectively. The relative densities of the ATAD3A oligomers and C99 were quantified (n = 5). Representative images and blots from at least three independent experiments are shown. All data are presented as the mean ± SEM. The data in a and b were compared by the unpaired Student's t test (two-tailed), and the data in c–e were compared by one-way ANOVA with Tukey's multiple comparisons test. As part of the intracellular lipid rafts, MAMs provide a crucial signaling platform for APP processing. Recent proteomic analysis demonstrated that proteins involved in cholesterol metabolism and Aβ clearance (e.g., CYP46A1 and ABCG1) resided on the MAMs and were altered in the early presymptomatic stage of AD 54. Like APP, CYP46A1 could be detected in the MAM fraction of mouse brains, although it was most enriched in the ER (Supplementary Fig. 8d). PLA-positive puncta between CYP46A1 and VDAC were also observed in WT mouse brain (Supplementary Fig. 8e). Moreover, ATAD3A and CYP46A1 interacted in the brains of WT mice, whereas this interaction was decreased in 5XFAD mouse brains, most likely due to the loss of CYP46A1 in the AD mouse brain (Supplementary Fig. 8f). These data indicate that CYP46A1 present on the MAMs forms a complex with ATAD3A. To determine whether the loss of CYP46A1 mediates ATAD3A oligomerization-induced APP processing, we overexpressed CYP46A1 or control vector in stable APP-expressing Neuro2a cells in the presence of DA1 peptide or control peptide TAT. Similar to the results observed with stable APP-expressing Neuro2a cells treated with DA1 peptide, overexpression of CYP46A1 alone abolished C99 production in stable APP wt- and APP swe-expressing Neuro2a cells (Fig. 6e), indicating inhibition of APP processing. DA1-mediated suppression of ATAD3A oligomerization followed by overexpression of CYP46A1 had no additive effect on C99 fragment levels (Fig. 6e). These results are consistent with our observation that EFV treatment abolished the ATAD3A-WT- or -ΔN50-induced increase in C99 production in stable APP wt Neuro2a cells (Supplementary Fig. 8g).
Collectively, our results suggest that in AD, ATAD3A cooperates with CYP46A1 to mediate APP processing, presumably at the MAMs. ATAD3A oligomerization causes synaptic loss in AD models Synaptic loss is observed in AD pathology, and disruption of brain cholesterol metabolism has been shown to lead to synaptic loss and subsequent cognitive deficits 55. We assessed whether aberrant ATAD3A oligomerization influenced synaptic morphology by quantifying synaptic density using co-staining for synaptophysin (a presynaptic protein) and PSD95 (a postsynaptic protein). In primary mouse cortical neurons, overexpression of either Flag-tagged ATAD3A-WT or ATAD3A-ΔN50 decreased the colocalization of synaptophysin and PSD95 along the dendrites, reflecting a reduction in synaptic density, and increased neuronal cell death (Fig. 7a, Supplementary Fig. 9a). Moreover, treatment of primary neurons with oligomeric Aβ 1–42 peptide reduced synaptic density and induced cell death (Fig. 7b, c). These effects were corrected by either ATAD3A knockdown or DA1 treatment (Fig. 7b, c, Supplementary Fig. 9b). Furthermore, CYP46A1 overexpression alone increased synaptic density in primary neurons exposed to oligomeric Aβ 1–42 peptide. Blocking ATAD3A oligomerization with DA1 followed by CYP46A1 overexpression had no additional effect on the number of synaptophysin-positive clusters along the MAP2+ dendrites compared to DA1 treatment or CYP46A1 overexpression alone (Fig. 7d). Similarly, enhancement of CYP46A1 activity by EFV treatment abolished ATAD3A-WT- or ATAD3A-ΔN50-induced synaptic loss in primary neurons (Supplementary Fig. 9c). These results further support the hypothesis that loss of CYP46A1 mediates ATAD3A oligomerization-induced neuronal damage in AD by impairing cholesterol turnover. Fig. 7: ATAD3A oligomerization causes synaptic loss in AD models. a Primary mouse cortical neurons (DIV 7) were transfected with the indicated plasmids for 48 h. b Primary mouse cortical neurons were infected with lentiviral ATAD3A or control (Con) shRNAs for 48 h and then treated with Aβ 1–42 peptides (1 µM, 12 h). c Primary mouse cortical neurons were treated with DA1 or TAT (1 µM) followed by treatment with Aβ 1–42 peptides (1 µM, 12 h). The neuronal cells were stained with anti-synaptophysin and anti-PSD95 antibodies. The synaptophysin+PSD95+ clusters along the dendrites were counted, and the number of synapses per micron of dendrite was quantified (n = 26–29 neurons in a, n = 50–54 neurons in b, and n = 54–57 neurons in c). Cell death was measured by LDH release from at least three independent experiments (n = 5 in a, n = 4 in b, and n = 3 in c). d Primary mouse cortical neurons were infected with CYP46A1 or control (Con) lentivirus for 72 h. The cells were then treated with DA1 or TAT (1 µM) followed by treatment with Aβ 1–42 (1 µM, 12 h). The neuronal cells were stained with anti-synaptophysin and anti-MAP2 antibodies. The synaptophysin+ clusters along the MAP2+ dendrites were counted, and the number of synapses per micron of dendrite was quantified (n = 23–28 neurons from at least three independent experiments). Representative images from at least three independent experiments are shown. Scale bar: 5 µm. e Total protein lysates were prepared from the cortex of 8-month-old mice (n = 11 mice/group for PSD95 analysis, n = 7 mice/group for synaptophysin analysis).
f Total protein lysates were harvested from the cortex of 9-month-old mice (n = 10 mice/group for PSD95 analysis, n = 11 mice/group for synaptophysin analysis). WB was performed. Histogram: the densities of PSD95 and synaptophysin relative to actin. g Golgi-Cox staining of mouse brains is shown. The relative spine number per 10 µm of dendrite was quantified (n = 3 mice/group, 37–54 neurons/group). Scale bar: 5 µm. All data are presented as the mean ± SEM and were compared by one-way ANOVA with Tukey's multiple comparisons test. Finally, we assessed the effects of ATAD3A oligomerization on synaptic proteins and synaptic morphology in AD mice. Western blot analysis showed significant reductions in synaptophysin and PSD95 protein levels in the cortex of 5XFAD mice; however, their levels were restored by either heterozygous ATAD3A knockout or DA1 treatment (Fig. 7e, f). The neuronal spine density assessed by Golgi-Cox staining was decreased in 9-month-old 5XFAD homo mice. Notably, treatment of age-matched AD mice with DA1 increased the number of dendritic spines compared to 5XFAD mice treated with control peptide (Fig. 7g). Thus, suppression of ATAD3A oligomerization reduced synaptic loss in 5XFAD mice, consistent with the improved cognitive activity of AD mice following genetic or pharmacological inhibition of ATAD3A oligomerization (Figs. 3, 4). Discussion In this study, we demonstrated that pathological oligomerization of ATAD3A upregulated ER-mitochondrial connections, impaired cholesterol homeostasis, and promoted amyloidogenic APP processing, leading to neurodegeneration in AD. Moreover, we identified an ATAD3A-CYP46A1-APP signaling axis that mediates the development of AD pathology and cognitive deficits (Supplementary Fig. 9d). Our findings therefore provide insights into the pathogenesis of AD and identify ATAD3A oligomerization as a potential therapeutic target for AD and other neurological disorders associated with MAM hyperconnectivity and cholesterol disturbance. Environmental and genetic factors involved in the disturbance of cholesterol metabolism have been suggested as risk factors for AD development 56. Abnormal retention of brain cholesterol increases Aβ production, secretion, and fibrillization and facilitates Aβ toxicity 57, 58. In turn, Aβ can modulate cholesterol homeostasis, establishing a vicious cycle between cholesterol accumulation and Aβ generation. CYP46A1 is a cholesterol-degrading enzyme that converts cholesterol to 24-OHC by hydroxylation, the key process mediating brain cholesterol elimination and turnover 45. CYP46A1 polymorphisms are a risk factor for AD 59. CYP46A1 deficiency causes brain cholesterol accumulation, amyloid accumulation and aggregation, and cognitive deficits 60, whereas compensation for CYP46A1 deficiency in AD mice markedly reduces amyloid deposits and improves spatial memory 52. However, the mechanism of CYP46A1 loss in AD has remained unknown. In the present study, we discovered that ATAD3A oligomerization acts upstream of CYP46A1 and triggers CYP46A1 deficiency. We established a molecular model in which, under basal conditions, both ATAD3A and CYP46A1 reside at the MAMs, where ATAD3A interacts with and stabilizes CYP46A1. Under AD-like pathological conditions, ATAD3A oligomerization inhibits CYP46A1 at the transcriptional level, leading to neuronal cholesterol accumulation, MAM hyperconnectivity, and, ultimately, synaptic loss.
This hypothesis was supported by the observations that suppression of ATAD3A oligomerization by heterozygous knockout or pharmacological inhibition restored neuronal CYP46A1 levels, promoted brain cholesterol turnover, and normalized MAM tethering, resulting in reduced amyloidogenesis and improved cognitive ability in AD mice. Moreover, multiple lines of evidence from our study indicate that loss of CYP46A1 is the underlying mechanism of ATAD3A oligomerization-induced neuropathology. Thus, our findings support the notion that ATAD3A and CYP46A1 synergistically regulate cholesterol metabolism and amyloidogenesis. Future studies of CYP46A1 overexpression in neurons of ATAD3A heterozygous knockout AD mice will further help to address the impact of neuronal CYP46A1 restoration on the MAM impairment, cholesterol metabolism dysregulation, and AD-associated neuropathology mediated by ATAD3A oligomerization. Currently, the mechanism by which ATAD3A oligomerization suppresses CYP46A1 gene expression remains unknown, and there are several possibilities. CYP46A1 mRNA expression can be regulated by oxidative stress 61; ATAD3A oligomerization-induced oxidative stress 27 may therefore indirectly suppress CYP46A1 transcription. It is also possible that ATAD3A oligomerization under AD conditions disrupts the ATAD3A/CYP46A1 complex, resulting in cholesterol disturbance and a negative feedback loop between cholesterol accumulation and CYP46A1 suppression. We propose that mitochondrial oxidative stress and negative feedback from cholesterol turnover might collectively suppress CYP46A1 gene expression. However, we cannot exclude the possibility that enhanced ATAD3A oligomerization inhibits CYP46A1 mRNA expression by directly interfering with its transcription. Further investigation is needed to define the mechanism of ATAD3A-mediated CYP46A1 deficiency. Cholesterol disturbance at the MAMs of intracellular lipid rafts promotes amyloidogenic APP processing 23, and accumulation of the APP proteolytic fragment C99 at the MAMs disrupts cholesterol trafficking and homeostasis 62. Our study showed that ATAD3A oligomerization is an inducer of both MAM hyperconnectivity and neuronal cholesterol accumulation. Moreover, ATAD3A immunodensity increased concomitantly with APP in the brains of AD patients and mice. These lines of evidence raised the possibility that ATAD3A is involved in APP pathology. Indeed, our results demonstrated that reducing ATAD3A oligomerization by genetic knockdown or DA1 treatment normalized MAM tethering and suppressed APP processing and Aβ accumulation, resulting in reduced AD pathology. We also showed that ATAD3A oligomerization-mediated APP processing is, at least in part, dependent on CYP46A1. In this study, soluble Aβ was a driving force for aberrant ATAD3A oligomerization, as either the presence of oligomeric Aβ peptide or overexpression of APP elicited ATAD3A oligomerization. Thus, soluble Aβ-induced ATAD3A oligomerization, acting via the ATAD3A-CYP46A1-APP signaling axis, may exacerbate Aβ deposition and contribute to the AD pathology observed later in both animals and patients. Again, our observations point to a signaling pathway that supports a vicious cycle among Aβ accumulation, impaired brain lipid metabolism, and MAM hyperconnectivity.
Our recent study revealed that ATAD3A forms higher-order oligomers and acts as a molecular linker coupling Drp1-mediated mitochondrial fragmentation and mtDNA-mediated bioenergetic defects, triggering mitochondrial dysfunction and neuropathology in HD models 27. ER-mitochondria contacts also provide a platform for mtDNA replication and mitochondrial division 63. Several mitochondrial nucleoid component proteins (e.g., Twinkle) are part of a cholesterol-rich membrane structure close to the ER, suggesting that disturbed cholesterol homeostasis could affect mtDNA stability at the MAMs. ATAD3A has been identified as a component of the mitochondrial nucleoid complex 64. We previously showed that ATAD3A oligomerization suppressed mtDNA replication and mitochondrial nucleoid complex stability by disrupting TFAM-mtDNA binding 27. Patient fibroblasts deficient in ATAD3A exhibit abnormal mtDNA distribution and replication, a phenotype that is replicated by treatment with a cholesterol trafficking inhibitor 26. Future investigation into the mechanism by which ATAD3A links cholesterol metabolism and mtDNA stability may provide a better understanding of the role of ATAD3A in neurodegenerative diseases. Drugs that modify cholesterol homeostasis are being considered as potential therapies for AD. Cholesterol synthesis inhibitors (e.g., statins) reduce the amyloid burden in AD transgenic mouse models 65, but this positive effect awaits validation in clinical trials 66. An inhibitor of acyl-coenzyme A:cholesterol acyltransferase also reduced amyloid pathology in an AD mouse model 67. However, these drugs modify cholesterol biosynthesis not only in the brain but also in peripheral tissues and plasma. Because cholesterol overload often occurs in adult neurons 68, identifying new targets involved in neuronal cholesterol turnover may offer a new avenue for the development of AD therapeutics. We previously reported that the peptide-based ATAD3A inhibitor DA1 has a minor effect on ATAD3A oligomerization at steady state and is mainly effective under stress or disease conditions 27. In the current study, we demonstrated that DA1 significantly reduced AD-associated neuropathology, neuroinflammation, and short- and long-term cognitive deficits in AD mice, whereas it had no observable effects on WT animals. Notably, treatment with DA1 reduced ATAD3A oligomerization-induced, CYP46A1-dependent neuronal cholesterol accumulation without affecting cholesterol biosynthesis, an advantage over cholesterol synthesis inhibitors. Therefore, DA1 or DA1-like reagents might represent a therapeutic strategy for preventing or treating AD. Methods All animal experiments in this study were conducted in accordance with protocols approved by the Institutional Animal Care and Use Committee of Case Western Reserve University and performed according to the National Institutes of Health Guide for the Care and Use of Laboratory Animals. The experiments using human postmortem tissue samples were performed with the approval of the Institutional Review Board of Case Western Reserve University. Reagents and antibodies Protein phosphatase inhibitor (P5726), protease inhibitor cocktail (P8340), voriconazole (PZ0005), and filipin (F4767) were purchased from Sigma-Aldrich. BMH (bismaleimidohexane, 22330) was purchased from Thermo Fisher Scientific. The antibody against ATAD3A (H00055210-D01, 1:1000) was from Abnova. The antibody against ATAD3A (GTX116301, 1:50) was from GeneTex.
The antibodies against ATPB (17247-1-AP, 1:1000), SigmaR1 (15168-1-AP, 1:1000), calnexin (17090-1-AP, 1:1000), CYP46A1 (12486-1-AP, 1:1000), and mtCO2 (55070-1-AP, 1:1000) were from ProteinTech. The antibodies against FACL4 (sc-365230, 1:1000), Tim23 (sc-514463, 1:1000), and Tom20 (sc-11415, 1:1000) were purchased from Santa Cruz Biotechnology. The antibodies against APP (ab32136, 1:5000), mtCO1 (ab14705, 1:1000), cytochrome c (ab110325, 1:10000), VDAC (ab14734, 1:2000), ClpP (ab124822, 1:2000), and synaptophysin (ab32127, 1:10000) were from Abcam. The antibodies against the C-terminus of APP (A8717, 1:5000), NeuN (A60, MAB377), FLAG (F3165, 1:2000), and β-actin (A1978, 1:10000) were obtained from Sigma-Aldrich. The Iba1 antibody (019-19741, 1:500) was from Wako Chemicals. The antibodies against GFAP (MAB360, 1:1000) and IP3R3 (AB9076, 1:1000) were purchased from Millipore. The CTxB antibody (bs-12862R, 1:200) was from Bioss, and the MAP2 antibody (NB300-213, 1:500) was purchased from Novus. The PSD-95 antibody (MA1-045, 1:500) was obtained from Invitrogen. The purified anti-β-amyloid 1-16 antibody (clone 6E10, #803001, 1:1000) was from BioLegend. The HRP-conjugated anti-mouse (31430, 1:5000) and anti-rabbit (31360, 1:5000) secondary antibodies were from Thermo Fisher Scientific. The VeriBlot secondary antibody (HRP) (ab131366, 1:2000), which does not recognize heavy or light chains, was from Abcam. The Alexa Fluor 488 (A11034, 1:1000), 568 (A11031, 1:1000), and 405 (A31553, 1:1000) fluorescent secondary antibodies were from Life Technologies, and the DyLight 405 antibody (703-475-155, 1:500) was from Jackson ImmunoResearch. Cholesterol biosynthetic intermediates (lanosterol, zymosterol, lathosterol, and desmosterol) were purchased from Avanti Polar Lipids as solids. The cholesterol-d7 standard (25,26,26,26,27,27,27-2H7-cholesterol) was purchased from Cambridge Isotope Laboratories. Efavirenz was obtained from Selleckchem (S4685). MitoTracker was purchased from Thermo Fisher Scientific (M7512). Computational virtual screening of ATAD3A in AD phenotypes and associated genes Chemical-gene network (ChemicalGN) ChemicalGN contains 473,602 chemical nodes, 18,701 gene nodes, and 15,473,939 chemical-gene edges. We obtained these 15,473,939 chemical-gene associations found in the human body, representing 473,602 chemicals and 18,701 human genes, from the STITCH (Search Tool for Interactions of Chemicals) database (data accessed in June 2019). Mutational phenotype-gene network (PhenGN) PhenGN consists of 9982 phenotype nodes, 11,021 gene nodes, and 517,381 phenotype-gene edges. We obtained a total of 517,381 phenotype-gene associations from systematic genetic knockouts (9982 phenotypes and 11,021 mapped human genes) from the Mouse Genome Database. In this study, we developed network-based prediction models leveraging these causal phenotype-gene associations to assess how mutations in specific genes (e.g., ATAD3A) affect Alzheimer-related phenotypes. Pathway-gene network (PathGN) PathGN contains 8868 gene nodes, 1329 pathway nodes, and 66,293 gene-pathway edges. We obtained a total of 66,293 canonical gene-pathway associations for 8868 genes and 1329 pathways from the Molecular Signatures Database (MSigDB), currently the most comprehensive resource for annotated pathways and gene sets. Protein-protein interaction network (PPIN) PPIN contains 22,982 gene nodes and 382,256 gene-gene edges.
We obtained these 382,256 gene-gene associations (22,982 human genes) from BioGRID. Prioritization algorithm For the input gene ATAD3A, we prioritized other biomedical entities (genes, pathways, and phenotypes) using the context-sensitive network-based (CSN) ranking algorithm that we previously developed. The output of the CSN-based algorithm is a list of phenotypes, genes, pathways, and chemicals ranked by their genetic, functional, and phenotypic relevance to the input gene ATAD3A. Evaluation of AD-associated phenotypes Thirteen AD-associated phenotypes were obtained from the Research Models database at Alzforum, including amyloid-beta deposits, amyloidosis, cerebral amyloid angiopathy, neurofibrillary tangles, tau protein deposits, neurodegeneration, neuron degeneration, gliosis, astrocytosis, microgliosis, abnormal synaptic transmission, abnormal long-term potentiation, and abnormal long-term depression. The network-based prioritization algorithm (input: ATAD3A; output: a list of prioritized mouse mutational phenotypes) was evaluated against these 13 AD-related phenotypes to examine how ATAD3A is phenotypically related to AD. Evaluation of AD-associated genes We used 22 genes that are strongly associated with or causally involved in AD according to the Online Mendelian Inheritance in Man (OMIM) database: A2M, ABCA7, ACE, ADAM10, APBB2, APOE, APP, BLMH, HFE, MPO, MT-ND1, NOS3, PAXIP1, PLAU, PLD3, PRNP, PSEN1, PSEN2, SORL1, TF, TNF, and VEGFA. The network-based prioritization algorithm (input: ATAD3A; output: a list of prioritized human genes) was evaluated against these 22 AD genes to examine how ATAD3A is genetically related to AD.
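The CSN ranking algorithm itself is described in the authors' earlier work; as a rough sketch of the general idea, the following toy example runs a random walk with restart (personalized PageRank) from a seed gene on a small heterogeneous network. The nodes, edges, and parameters are illustrative only, not the ChemicalGN/PhenGN/PathGN/PPIN data or the actual CSN implementation:

```python
import networkx as nx

# Toy heterogeneous network mixing gene, phenotype, and pathway nodes.
G = nx.Graph()
G.add_edges_from([
    ("ATAD3A", "amyloid-beta deposits"),
    ("ATAD3A", "CYP46A1"),
    ("CYP46A1", "cholesterol metabolic process"),
    ("APP", "amyloid-beta deposits"),
    ("APP", "cholesterol metabolic process"),
    ("PSEN1", "amyloid-beta deposits"),
])

# Random walk with restart from the seed gene: high-scoring nodes are
# "close" to ATAD3A in the integrated network and rank near the top.
seed = {n: (1.0 if n == "ATAD3A" else 0.0) for n in G}
scores = nx.pagerank(G, alpha=0.85, personalization=seed)
for node, s in sorted(scores.items(), key=lambda kv: -kv[1]):
    if node != "ATAD3A":
        print(f"{node}\t{s:.3f}")
```

An evaluation of the kind described above would then check how highly the 13 AD phenotypes (or 22 AD genes) rank in this output list.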
Preparation of oligomeric Aβ 1–42 The Aβ 1–42 peptides (GenicBio Limited) were dissolved in 1,1,1,3,3,3-hexafluoro-2-propanol (HFIP; 105228, Sigma-Aldrich) to a final concentration of 5 mM and placed in a chemical hood overnight. The next day, residual HFIP was evaporated using a SpeedVac concentrator for 1 h. Monomeric Aβ (5 mM) was prepared by dissolving the Aβ peptide in anhydrous dimethyl sulfoxide (Sigma-Aldrich). Oligomeric Aβ peptides were prepared by diluting the monomeric Aβ solution in Dulbecco's Modified Eagle Medium (DMEM)/F12 (1:1) (21041-025, Gibco) and then incubating at 4 °C for 24 h. Cell culture HEK293T cells (MilliporeSigma, 12022001), mouse hippocampal HT-22 cells (MilliporeSigma, SCC129), and Neuro2a cells (ATCC, CCL-131) were cultured in DMEM supplemented with 10% (v/v) heat-inactivated FBS and 1% (v/v) antibiotics (100 units/mL penicillin, 100 µg/mL streptomycin). Neuro2a cells stably overexpressing human wild-type APP (APP wt) or the Swedish mutant (APP swe; K670N and M671L APP, clone Swe.10) were obtained from Dr. Gopal Thinakaran (University of Chicago) and maintained as described above. Primary cortical neurons were isolated from E18 mouse cortex (C57ECX, BrainBits) following the manufacturer's protocol and grown according to the supplier's culturing protocol. The cells were gently resuspended in NbActiv1 culture medium (NB1, BrainBits) and plated on poly-D-lysine/laminin (P6407, Sigma-Aldrich)-coated culture plates, with or without coverslips, at an appropriate cell density. All cells were maintained at 37 °C and 5% CO2. Plasmids and transfection Human ATAD3A-WT-Flag and ATAD3A-ΔN50-Flag plasmids were previously described 27. Cells were transfected with plasmids using TransIT-2020 transfection reagent (MIR5406, Mirus Bio LLC, Madison, WI) according to the manufacturer's instructions. The Lenti-Syn-CYP46A1-mCherry-puro plasmid and Lenti-mCherry control plasmid were obtained from VectorBuilder Inc. Lentiviruses were generated by transfecting human embryonic kidney 293T (HEK293T) cells with plasmids encoding the envelope (pCMV-VSV-G; catalog no. 8454, Addgene), packaging (psPAX2; catalog no. 12260, Addgene), and targeted open reading frame. The medium was changed 12 h after transfection, and the lentiviruses were harvested after 36 h. The lentiviruses were diluted with the corresponding medium at a 1:1 ratio, and the cells of interest were infected in the presence of Polybrene (8 µg/mL, Sigma-Aldrich) for 48 h. For knockdown of ATAD3A in Neuro2a cells, HT-22 cells, and mouse primary cortical neurons, cells were infected with lentiviruses carrying control or ATAD3A shRNAs (Sigma, TRCN0000242003 and TRCN0000241479), as described previously 27. Generation of ATAD3A heterozygous knockout AD mice All mice were maintained under a 12 h/12 h light/dark cycle (lights on at 6 a.m. and off at 6 p.m.) with ad libitum access to food and water, at an ambient temperature of 23 °C and 40–60% humidity. All animal experimental protocols were approved by the Institutional Animal Care and Use Committee of Case Western Reserve University, and procedures were employed to minimize the pain and discomfort of the mice during the experiments. The mice were mated, bred, and genotyped in the animal facility of Case Western Reserve University. All mice used in this study were maintained on a C57BL/6J background (Strain #000664, The Jackson Laboratory). 5XFAD transgenic mouse [Tg(APPSwFlLon,PSEN1*M146L*L286V)6799Vas, strain #034840-JAX] breeders were purchased from The Jackson Laboratory. ATAD3A heterozygous knockout-first mice were obtained from the Wellcome Trust Sanger Institute (colony name: MGPY; genetic background: C57BL/6NTac; strain #EPD0159_4_A12). A pair of loxP sites was inserted flanking ATAD3A exon 2, and a LacZ-neomycin cassette flanked by FRT sites was inserted in intron 1, which terminated Atad3a transcription. The knockout-first mice were then bred with Flp recombinase transgenic mice (129S4/SvJaeSor-Gt(ROSA)26Sortm1(FLP1)Dym/J, strain #003946, The Jackson Laboratory) to remove the LacZ-neomycin cassette and obtain the Atad3a-conditional knockout mice ATAD3A flox/flox (ATAD3A fl/fl), which carry the ATAD3A allele with exon 2 flanked by loxP sites. ATAD3A fl/fl mice were bred with CMV-Cre mice (B6.C-Tg(CMV-cre)1Cgn/J, strain #006054, The Jackson Laboratory) to generate CMV;ATAD3A fl/+ heterozygous mice. 5XFAD heterozygous mice were crossed with CMV;ATAD3A fl/+ mice to generate 5XFAD het;CMV;ATAD3A fl/+ mice. Inbred, age-matched, and sex-balanced WT, CMV;ATAD3A fl/+, 5XFAD het;ATAD3A +/+, and 5XFAD het;CMV;ATAD3A fl/+ mice were used for further study. Systemic peptide treatment of AD mice Control peptide TAT and DA1 peptide (product number P103882, lot #0P082714SF-01) were synthesized by Ontores (Hangzhou, China). Their purities were assessed as >90% by mass spectrometry. Lyophilized peptides were dissolved in sterile water and stored at −80 °C until use. All randomization and peptide treatments in AD mice were prepared by an experimenter not associated with the behavioral and neuropathology analysis.
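Such coded, third-party allocation can be as simple as a seeded shuffle. A minimal sketch with hypothetical animal IDs (illustrative only, not the authors' actual randomization procedure):

```python
import random

mouse_ids = [f"5XFAD-{i:03d}" for i in range(1, 27)]  # hypothetical IDs
rng = random.Random(42)        # fixed seed makes the allocation reproducible
rng.shuffle(mouse_ids)

half = len(mouse_ids) // 2
allocation = {m: ("TAT" if k < half else "DA1") for k, m in enumerate(mouse_ids)}
# Only the unblinded experimenter would hold this coded allocation table.
print(sum(v == "TAT" for v in allocation.values()), "TAT;",
      sum(v == "DA1" for v in allocation.values()), "DA1")
```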
5XFAD transgenic mice and their age-matched, sex-balanced WT littermates were implanted with a 28-day osmotic pump (Alzet, Cupertino, CA; Model 2004) containing either TAT control peptide or DA1 peptide, which delivered the peptides at a rate of 1 mg/kg/day from the age of 1.5 to 9 months. The pump was replaced once every 4 weeks. Behavioral analysis All behavioral analyses were conducted by an experimenter who was blinded to the genotypes and treatment groups. All mice were subjected to a series of behavioral measurements to monitor locomotor activity (open field test), spontaneous spatial working memory (Y-maze test), and long-term spatial learning and memory (Barnes maze test). Body weights and survival rates were recorded throughout the study period. Y-maze test On the test day, mice (6 months old) were brought to the testing room one hour before the Y-maze test to allow habituation. The mice were placed in the middle of the Y-maze and allowed to explore the three arms for 6 min. During exploration, arm entries were recorded. The equipment was cleaned after every test to avoid odor disturbance. A spontaneous alternation was defined as successive entries into three different arms within overlapping triplet sets of consecutive arm entries. Barnes maze test On the test day, the mice were brought to the testing room 30 min before the Barnes maze test to allow habituation. Briefly, all tested mice received three consecutive days of trials, with three trials each day. After being placed in the center of the platform at the beginning of each trial, the mice were allowed to explore for 3 min to find the target escape box. Mice that failed to enter the target escape hole within the given time were led to it by the operator. Mice were allowed to remain in the target hole for 2 min before returning to the home cage. After completing the 3-day trials, the mice were tested once on each of days 5 and 12 to monitor long-term spatial learning and memory. The maze and the escape box were cleaned carefully after each trial to avoid odor disturbance. All trials and tests were recorded with a video system. The total time to enter the target escape box (latency to the target box) and the number of times wrong holes were explored (total errors) were recorded. Nest building performance test Mice (8.5 months old) were subjected to the nest building test. Briefly, ~1 h before the dark phase (lights on/off at 6:00 a.m./6:00 p.m.), a mouse was transferred to a new cage containing a fresh nestlet (around three grams) made from pressed cotton. The nest built by the mouse was evaluated the next morning using a rating scale of 1–5: 1, >90% intact, not noticeably touched; 2, 50–90% intact, partially torn; 3, 10–50% intact, mostly shredded nestlet without an identifiable nest site; 4, 0–10% intact, an identifiable but flat nest; 5, 0–10% intact, crater-shaped nest. The untorn nestlet pieces in each cage were weighed, and the percentage of nestlet used was calculated from the difference between the initial and remaining nestlet weights. Open field test The locomotor activity of all experimental mice was assessed in an open field at six months of age. Briefly, the mice were placed in the center of an activity chamber (Omnitech Electronics) and allowed to explore while being tracked using an automated infrared tracking system (Vertax, Omnitech Electronics). A 12-h locomotor activity analysis was performed.
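Under the Y-maze definition above, the alternation ratio is the fraction of overlapping triplets of consecutive arm entries that visit three distinct arms. A minimal sketch of that computation on a hypothetical entry sequence:

```python
def spontaneous_alternation(entries):
    """Percent of overlapping triplets of arm entries visiting 3 distinct arms."""
    triplets = [entries[i:i + 3] for i in range(len(entries) - 2)]
    if not triplets:
        return 0.0
    alternations = sum(len(set(t)) == 3 for t in triplets)
    return 100.0 * alternations / len(triplets)

# Hypothetical sequence of arm entries recorded over the 6-min session.
print(spontaneous_alternation(list("ABCACBABCA")))  # 75.0 (6 of 8 triplets)
```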
Human postmortem brain samples All postmortem brain samples were collected by the National Institutes of Health (NIH) NeuroBioBank (NBB) with the approval of the Institutional Review Boards (IRB) and the institutions' Research Ethics Boards. All brains were donated to the NBB by informed consent through the Brain and Tissue Repositories sites. Donation was voluntary and provided no financial benefit. All brain specimens donated to the NIH NBB were assessed and reviewed by board-certified neuropathologists. A standard assessment was performed to document possible neuropathologies and establish a disease-condition diagnosis. In addition, postmortem blood was sampled and submitted for serology and toxicology testing. The human postmortem brain samples used in the experiments were obtained from the NBB under a material transfer agreement between the NIH and Case Western Reserve University. The experiments were performed with the approval of the IRB of Case Western Reserve University. Detailed information on the human postmortem brain samples is listed in Supplementary Fig. 2b. Mouse brain mitochondrial sub-compartmental fractionation Isolation of mitochondrial sub-compartmental fractions, including the ER, mitochondria, and ER-mitochondrial contact sites, was performed as previously described 69. Briefly, 3-month-old WT mice (C57BL/6) were deeply anesthetized and transcardially perfused with PBS. The brain tissues were washed and homogenized in ice-cold mitochondrial isolation buffer (225 mM mannitol, 75 mM sucrose, 0.5% BSA, 0.5 mM EGTA, and 30 mM Tris-HCl, pH 7.4) on ice. Cell debris and nuclei were removed by centrifugation, and the supernatants were subjected to further differential centrifugation to obtain the crude mitochondrial fraction in the pellet. The supernatants were then centrifuged at 100,000 × g for 1 h to obtain the ER fraction. The crude mitochondrial fraction was resuspended in mitochondrial resuspension buffer (MRB; 250 mM mannitol, 0.5 mM EGTA, and 5 mM HEPES, pH 7.4) to a final volume of 2 mL, and the crude mitochondrial suspension was layered on top of Percoll medium (225 mM mannitol, 25 mM HEPES pH 7.4, 1 mM EGTA, and 30% Percoll (vol/vol)). The pure mitochondrial and MAM fractions were separated by centrifugation at 95,000 × g for 30 min, washed to remove the Percoll, and further purified by centrifugation to eliminate contaminants. All fractions were reconstituted in mitochondrial buffer and stored at −80 °C until analysis. In situ proximity ligation assay PLA was performed using the Duolink In Situ Red Starter Kit (mouse/rabbit; DUO92101, Sigma). Briefly, fixed cells or brain sections were permeabilized and blocked with PLA blocking buffer for 1 h at 37 °C and incubated with the indicated primary antibodies overnight at 4 °C. The samples were then incubated with the PLA probes (Anti-Rabbit PLUS and Anti-Mouse MINUS) for 1 h at 37 °C, followed by the ligation and amplification steps. The PLA signal was visible as a distinct fluorescent spot and was analyzed by confocal microscopy (Fluoview FV1000, Olympus). The number of fluorescent signals was quantitated using NIH ImageJ software.
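Comparable puncta counts can be obtained outside ImageJ by thresholding and connected-component labeling. A minimal scikit-image sketch (the synthetic image and minimum-area cutoff are illustrative, not the authors' macro):

```python
import numpy as np
from skimage import filters, measure

def count_puncta(red_channel, min_area=4):
    """Count PLA-positive puncta in one field: Otsu threshold + labeling."""
    mask = red_channel > filters.threshold_otsu(red_channel)
    labels = measure.label(mask)
    # Discard tiny specks below the minimum-area cutoff.
    regions = [r for r in measure.regionprops(labels) if r.area >= min_area]
    return len(regions)

field = np.random.default_rng(1).random((512, 512))  # stand-in for a PLA image
print(count_puncta(field))
```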
ATPase activity measurement Cells and mouse brain tissues were harvested and lysed in total lysis buffer. The supernatants were incubated with ATAD3A or Flag antibodies overnight at 4 °C, followed by incubation with protein A/G beads (Santa Cruz Biotechnology, sc-2003) for 2 h at 4 °C. The immunoprecipitates were washed with lysis buffer, and ATPase activity was then measured using a commercially available kit (Abcam, ab234055). Label-free proteomics Each frozen mouse cortex (n = 3 mice per group) was collected in a 1.5-mL tube containing 300 µL of 2% SDS and protease inhibitor cocktail. The samples were incubated on ice for 30 min and then sonicated with a probe sonicator at 50% amplitude, followed by vortexing; this cycle was repeated four times, with samples placed on ice between rounds. Following lysis, the samples were processed using a filter-aided sample preparation cleanup protocol with Amicon Ultra MWCO 3K filters (Millipore, Billerica, MA). The samples were reduced and alkylated on the filters with 10 mM dithiothreitol (Acros, Fair Lawn, NJ) and 25 mM iodoacetamide (Acros, Fair Lawn, NJ), respectively, and then concentrated to a final volume of 40 µL in 8 M urea. Protein concentration was measured using the Bradford method according to the manufacturer's instructions (Bio-Rad, Hercules, CA). Following reduction and alkylation, total protein (10 mg) was subjected to enzymatic digestion. The urea concentration was adjusted to 4 M with 50 mM Tris (pH 8), and the proteins were digested with mass spectrometry-grade lysyl endopeptidase (Wako Chemicals, Richmond, VA) for 2 h at 37 °C at an enzyme-to-substrate ratio of 1:40. The urea concentration was further adjusted to 2 M with 50 mM Tris (pH 8), and the lysyl peptides were digested with sequencing-grade trypsin (Promega, Madison, WI) at 37 °C overnight at an enzyme-to-substrate ratio of 1:40. Finally, the samples were diluted in 0.1% formic acid (Thermo Scientific, Rockford, IL) before LC-MS/MS analysis. The peptide digests (320 mg, 8 µL) were loaded onto a column, with blanks in between, for a total of four LC-MS/MS runs. The data were acquired using an Orbitrap Velos Elite mass spectrometer (Thermo Electron, San Jose, CA) equipped with a Waters nanoACQUITY LC system (Waters, Taunton, MA). The peptides were desalted on a trap column (180 μm × 20 mm, packed with C18 Symmetry, 5 μm, 100 Å; Waters, Taunton, MA) and resolved on a reversed-phase column (75 μm × 250 mm nano column, packed with C18 BEH130, 1.7 μm, 130 Å; Waters, Taunton, MA). Liquid chromatography was carried out at ambient temperature at a flow rate of 300 nL/min using a gradient mixture of 0.1% formic acid in water (solvent A) and 0.1% formic acid in acetonitrile (solvent B); the gradient ranged from 4 to 44% solvent B over 210 min. The peptides eluting from the capillary tip were introduced into the mass spectrometer by nanospray at a capillary voltage of 2.4 kV. A full scan was obtained for the eluted peptides in the range of 380–1800 atomic mass units, followed by 25 data-dependent MS/MS scans. The MS/MS spectra were generated by collision-induced dissociation of the peptide ions at a normalized collision energy of 35% to generate a series of b- and y-ions as major fragments. In addition, a one-hour wash was included between samples. The proteins were identified and quantified using PEAKS 8.5 (Bioinformatics Solutions Inc., Waterloo, ON, Canada). Total RNA isolation and real-time quantitative RT-PCR Total RNA was isolated using the RNeasy Mini Kit (74004, QIAGEN) or TRIzol Reagent (15596-026, Invitrogen), and cDNA was synthesized from 0.5–1 μg of total RNA using the QuantiTect Reverse Transcription Kit (205311, QIAGEN). qRT-PCR was performed with QuantiTect SYBR Green (204143, QIAGEN) and analyzed using the StepOnePlus Real-Time PCR System (Thermo Fisher Scientific). Three replicates were performed for each biological sample, and the expression values of each replicate were normalized against GAPDH cDNA using the 2^−ΔΔCT method. The primers used in this study are presented in Supplementary Table 1.
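The 2^−ΔΔCT calculation itself reduces to two subtractions and an exponentiation. A minimal sketch with hypothetical CT values:

```python
def fold_change_ddct(ct_gene, ct_gapdh, ct_gene_ctrl, ct_gapdh_ctrl):
    """Relative expression by the 2^-ddCT method, normalized to GAPDH."""
    dct_sample = ct_gene - ct_gapdh            # normalize to reference gene
    dct_control = ct_gene_ctrl - ct_gapdh_ctrl
    ddct = dct_sample - dct_control            # normalize to control group
    return 2 ** (-ddct)

# Hypothetical CTs for a target gene in a treated vs. control sample.
print(fold_change_ddct(24.0, 18.0, 25.5, 18.2))  # ~2.5-fold higher expression
```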
LDH assays Cell death was determined using the Cytotoxicity Detection Kit (LDH) according to the manufacturer's protocol (Roche, REF 11644793001). GC-MS measurements of sterol profiles Brain samples were resuspended in a 2:1 chloroform/methanol mixture and homogenized. Cholesterol-d7 standard (25,26,26,26,27,27,27-2H7-cholesterol, Cambridge Isotope Laboratories) was added before drying under a nitrogen stream and derivatization with bis(trimethylsilyl)trifluoroacetamide/trimethylchlorosilane to form trimethylsilyl derivatives. Following derivatization, samples were analyzed by gas chromatography/mass spectrometry using an Agilent 5973 Network Mass Selective Detector equipped with a 6890 gas chromatograph system and an HP-5MS capillary column (60 m × 0.25 mm × 0.25 µm). The samples were injected in splitless mode and analyzed using electron impact ionization. Ion fragment peaks were integrated to calculate sterol abundance, and quantitation was performed relative to cholesterol-d7. The following m/z ion fragments were used to quantitate each metabolite: cholesterol-d7 (465), zymosterol (456), desmosterol (456, 343), lanosterol (393), and lathosterol (458). Calibration curves were generated by injecting varying concentrations of sterol standards together with a fixed amount of cholesterol-d7. Total cholesterol and 24-OHC measurements by ELISA The total cholesterol content was measured using the Total Cholesterol Assay Kit (STA-384, Cell Biolabs) according to the manufacturer's instructions. Briefly, total lipids were extracted from cells or cortical brain tissue samples using a mixture of chloroform, isopropanol, and NP-40 (7:11:0.1). The homogenates were centrifuged at 15,000 × g for 10 min, and the supernatants were collected and dried to remove the organic solvent. The dried lipids were dissolved in 1× assay diluent for quantification by adding cholesterol reaction reagent (cholesterol oxidase 1:50, HRP 1:50, colorimetric probe 1:50, and cholesterol esterase 1:250 in 1× assay diluent). The calculated amount of total cholesterol in each sample was normalized to the total cell number or the weight of the cortical tissue. The 24-OHC levels in mouse serum or cell culture samples were determined using a mouse 24-OHC ELISA kit (MBS7256268, MyBioSource) following the manufacturer's protocol. The 24-OHC concentration was extrapolated from a standard curve and, for cell culture samples, normalized to the total cell number.
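Both the GC-MS calibration and the ELISA readout extrapolate unknowns from a fitted standard curve. A minimal linear-calibration sketch (standard concentrations and signals are hypothetical):

```python
import numpy as np

# Hypothetical 24-OHC standards: concentration (ng/mL) vs. assay signal.
conc = np.array([0.0, 5.0, 10.0, 20.0, 40.0])
signal = np.array([0.05, 0.22, 0.41, 0.78, 1.52])

slope, intercept = np.polyfit(conc, signal, 1)   # fit linear standard curve

unknown_signal = 0.60                            # reading for an unknown sample
estimated_conc = (unknown_signal - intercept) / slope
print(f"~{estimated_conc:.1f} ng/mL")            # value read off the curve
```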
For immunofluorescence staining of mouse brain sections, mice were deeply anesthetized and transcardially perfused with 4% paraformaldehyde in PBS. Brain sections were permeabilized with 0.2% Triton X-100 in TBS-T buffer, followed by blocking with 5% normal goat serum. The brain sections were incubated with the indicated primary antibodies overnight at 4 °C and then stained with secondary antibodies. Images of the staining were acquired using a Fluoview FV 1000 confocal microscope (Olympus). For detection of cholesterol, brain sections were stained with filipin (F4767, Sigma-Aldrich) at room temperature in the dark. Filipin staining was imaged using an all-in-one fluorescence microscope (Keyence BZ-X710). All quantification of immunostaining was performed using ImageJ software. The same image exposure times and threshold settings were used for all sections from all the experimental groups. Quantitation was performed blinded to the experimental groups. Immunohistochemistry Paraffin-embedded brain sections (10 μm, coronal) were stained for ATAD3A (ab112572, Abcam; or GTX116301, GeneTex) using the IHC Select HRP/DAB kit (Millipore). Quantification of the ATAD3A immunostaining was conducted using NIH ImageJ software. The same image exposure times and threshold settings were used for all sections from all treatment groups. Quantitation was performed blinded to the experimental groups. Fluoro-Jade C staining Fluoro-Jade® C staining was performed following the manufacturer's protocol (Biosensis Ready-to-Dilute (RTD)™ Fluoro-Jade® C Staining Kit, TR-100-FJT, Biosensis). Briefly, brain sections were incubated with NeuN antibody diluted in PBS containing 0.3% Triton X-100 and 3% goat serum overnight at 4 °C and then stained with Alexa Fluor 568 secondary antibody for 2 h at room temperature. After washing in PBS, the brain sections were then incubated with 0.06% potassium permanganate solution for 5 min to block the background fluorescence and optimize the signal contrast. Following washing with distilled water, the sections were stained with Fluoro-Jade C/DAPI solution for 10 min in the dark. The slides were then rinsed thoroughly with distilled water and dried at 50–60 °C for 5 min in the dark. The dried slides were cleared using xylene and then permanently mounted. FJC and NeuN double-labeled degenerating neurons were visualized using a Fluoview FV1000 confocal microscope (Olympus). Synaptic density measurement in primary cortical neurons The synaptic density of primary cortical neurons was measured by counting the PSD95+/synaptophysin+ clusters adhering to the dendrites. Briefly, the primary neurons were fixed with 4% paraformaldehyde for 30 min, followed by permeabilization and blocking at room temperature. The cells were then incubated with primary antibodies against PSD95 (MA1-045, Invitrogen) and synaptophysin (ab32127, Abcam) overnight at 4 °C, followed by Alexa 488- and Alexa 568-labeled secondary antibodies, respectively. The number of synapses per micron of dendrite was calculated as described previously 70 . Golgi staining and quantification of dendrite spine density Golgi-Cox staining was performed using the NovaUltra Golgi-Cox Stain Kit (IHCWorld, SKU IW-3023). Briefly, mice were deeply anesthetized and transcardially perfused with PBS. The mouse brains were immersed in the Golgi-Cox Solution in the dark at room temperature. 
After 2 days of immersion, fresh Golgi-Cox Solution was added to the samples, which were incubated at room temperature in the dark for an additional 14 days, according to the manufacturer's instructions. After washing with PBS for two days, serial coronal sections (200-μm thick) were cut with a Vibratome Series 1000 Sectioning System. The coronal sections were washed with water and stained with Post-Impregnation Solution for 10 min in the dark at room temperature. Following three washes with water, the brain sections were mounted on Superfrost Plus slides (Thermo Scientific). The dendritic spine images were acquired using a 100× oil objective. Cortical pyramidal neurons were selected for analysis. The dendritic spines were counted in 50–100 μm segments that were at least 50 μm away from the cell body. The total spine density was measured using the NIH ImageJ plug-in Simple Neurite Tracer. Co-immunoprecipitation Tissues were lysed in total cell lysate buffer (50 mM Tris-HCl [pH 7.5], 150 mM NaCl, 1% Triton X-100, and protease inhibitor cocktail). Total lysates were incubated with the indicated antibodies overnight at 4 °C, followed by the addition of protein A/G beads for 2 h at room temperature. The immunoprecipitates were washed four times with cell lysate buffer and analyzed by western blotting. Western blotting The protein concentration in each sample was determined using protein assay dye reagents (Bio-Rad). The proteins were resuspended in Laemmli buffer, separated using sodium dodecyl sulfate-polyacrylamide gels, and transferred to nitrocellulose membranes. The membranes were probed with the indicated antibodies, and the specific proteins were visualized by enhanced chemiluminescence. The chemiluminescence signals were captured on X-ray film or with cSeries Capture Software 1.9.8.0403 (Azure Biosystems C600). Quantification and statistical analysis Sample sizes were determined by power analysis based on pilot data collected in our laboratory or published studies. For the animal studies, we used n = 13–33 mice/group for the behavioral tests, n = 3–8 mice/group for the biochemical analyses, n = 3–11 mice/group for the pathology studies, and n = 4–13 mice/group for the cholesterol metabolic pathway analyses. Both male and female mice were used throughout the study, and mixed-sex analyses were performed for all experiments. For the cell culture studies, we performed each experiment with at least three independent replicates. For the animal studies, we ensured randomization and blinded evaluation. All imaging analyses were conducted by an observer blinded to the experimental groups. No samples or animals were excluded from the analysis. The data were analyzed using GraphPad Prism 9.0 software. The unpaired Student's t test (two-tailed) was used for comparisons between two groups. Comparisons between three or more independent groups were performed using one-way analysis of variance (ANOVA), followed by Tukey's multiple comparisons test. The effects of two independent variables on a response variable were compared using two-way ANOVA. The data are presented as the mean ± standard error of the mean. Statistical parameters are presented in each figure legend. Values of p < 0.05 were considered statistically significant. Reporting summary Further information on research design is available in the Nature Research Reporting Summary linked to this article. Data availability Data supporting the findings of this study are provided within the paper and its supplementary information. 
Source data are provided with this paper, and all statistical data are presented in the Source Data file. The mass spectrometry proteomics data have been deposited to the ProteomeXchange Consortium via the PRIDE partner repository with the dataset identifier PXD031523.
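As an illustration of the group-comparison workflow laid out in the 'Quantification and statistical analysis' section above, the sketch below runs an unpaired two-tailed t test, a one-way ANOVA, and Tukey's multiple comparisons test in Python; the group labels and values are hypothetical stand-ins for the study's group-wise measurements, and the authors used GraphPad Prism rather than code.

```python
import numpy as np
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# Hypothetical measurements for three independent groups (arbitrary units)
wt      = np.array([1.00, 0.92, 1.08, 0.97, 1.03])
ad      = np.array([1.60, 1.72, 1.55, 1.66, 1.70])
treated = np.array([1.20, 1.12, 1.31, 1.18, 1.25])

# Two groups: unpaired two-tailed Student's t test
t, p = stats.ttest_ind(wt, ad)
print(f"t test WT vs AD: t = {t:.2f}, p = {p:.4g}")

# Three or more groups: one-way ANOVA followed by Tukey's multiple comparisons test
f, p_anova = stats.f_oneway(wt, ad, treated)
print(f"one-way ANOVA: F = {f:.2f}, p = {p_anova:.4g}")

values = np.concatenate([wt, ad, treated])
groups = ["WT"] * len(wt) + ["AD"] * len(ad) + ["AD+peptide"] * len(treated)
print(pairwise_tukeyhsd(values, groups, alpha=0.05))
```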
About 11% of the U.S. population 65 and older has been diagnosed with Alzheimer's disease (AD), the most common form of dementia, which results in memory loss and cognitive impairment, according to the Alzheimer's Association. And the World Health Organization predicts the number of people living with Alzheimer's will grow by millions each year. Despite decades of research, scientists don't fully understand what causes the brain condition. And there is no known therapeutic treatment. But a new study published recently in Nature Communications by a team of researchers from the Case Western Reserve University School of Medicine suggests a key protein molecule plays a major role in the accumulation of brain cholesterol, triggering the development of Alzheimer's. The lab of Xin Qi, professor of physiology and biophysics at the School of Medicine, had earlier developed and patented a peptide inhibitor in hopes of treating AD and Huntington's disease. She said this study found that mice, when treated with the peptide inhibitor, demonstrated 50% restored memory function on tests such as maze navigation. The impact of Alzheimer's disease AD is an age-related neurodegenerative disorder that results in progressive cell death, leading to memory loss and cognitive dysfunction. The numbers around the disease are staggering: more than 5.7 million people have AD, a figure the Alzheimer's Association estimates will reach 14 million by 2050. Annual health care costs for Alzheimer's total more than $250 billion. Understanding the pathology Risk factors that contribute to AD include vascular diseases that impact the heart and blood vessels. While some risk factors, such as aging, are well known, others, such as brain cholesterol, are key to understanding how the disease develops. Brain cells communicate through cholesterol-rich cell membranes, a process that occurs naturally and is essential for healthy brain function. Research shows the brain contains 23–25% of the body's cholesterol. "Cholesterol accumulates in the brain and causes damage to the neuron—it's long been understood as playing a role in Alzheimer's disease pathology," Qi said. "However, what causes the cholesterol accumulation in the brain continues to be unknown and could hold answers." The study The paper is the result of more than five years of research into the role of brain cholesterol and its relationship with AD. The researchers set out to tackle two main questions: What role does brain cholesterol play in the disease? How can this new pathway be used for future treatment options? Qi, the paper's senior author, said the study centered on the protein-coding gene ATAD3A. Much is unknown about how the protein functions within neurodegenerative diseases. "In Huntington's disease, the molecule ATAD3A becomes hyperactive and is oligomerized (repeated), which is a cause of the disease," Qi said. "We worked with data scientists to see if ATAD3A also has a link to Alzheimer's disease and, to our surprise, we found that the molecule is a top candidate linked to Alzheimer's." From there, the researchers analyzed disease models and identified a pathway linking ATAD3A and brain cholesterol. The researchers found that once ATAD3A forms repeating similar or identical parts through a process called oligomerization, it suppresses another protein called CYP46A1. 
With CYP46A1 suppressed, cholesterol can no longer be metabolized in the brain and instead accumulates. Researchers have linked the accumulation of brain cholesterol to disease progression in neurodegenerative diseases. The findings The data show that ATAD3A, especially in its oligomerized form, could drive the development of AD. With a possible target identified, Qi believes the next step to treatment lies in peptide inhibitors, which bind to ATAD3A and block its action. "Models treated with the peptide showed improved performance on the memory tests," Qi said. "They showed increased memory retention, stronger cognitive activity and up to 50% restored damage to the memory." This means that targeting ATAD3A oligomerization can likely slow the progression of Alzheimer's disease, Qi said. Further testing is underway.
10.1038/s41467-022-28769-9
Medicine
Researchers develop implantable, bioengineered rat kidney (w/ video)
dx.doi.org/10.1038/nm.3154 Journal information: Nature Medicine
http://dx.doi.org/10.1038/nm.3154
https://medicalxpress.com/news/2013-04-implantable-bioengineered-rat-kidney-video.html
Abstract Approximately 100,000 individuals in the United States currently await kidney transplantation, and 400,000 individuals live with end-stage kidney disease requiring hemodialysis. The creation of a transplantable graft to permanently replace kidney function would address donor organ shortage and the morbidity associated with immunosuppression. Such a bioengineered graft must have the kidney's architecture and function and permit perfusion, filtration, secretion, absorption and drainage of urine. We decellularized rat, porcine and human kidneys by detergent perfusion, yielding acellular scaffolds with vascular, cortical and medullary architecture, a collecting system and ureters. To regenerate functional tissue, we seeded rat kidney scaffolds with epithelial and endothelial cells and perfused these cell-seeded constructs in a whole-organ bioreactor. The resulting grafts produced rudimentary urine in vitro when perfused through their intrinsic vascular bed. When transplanted in an orthotopic position in rats, the grafts were perfused by the recipient's circulation and produced urine through the ureteral conduit in vivo . Main Nearly 1 million patients in the United States live with end-stage renal disease (ESRD), with over 100,000 new diagnoses every year 1 . Although hemodialysis has increased the survival of patients with ESRD, transplantation remains the only available curative treatment. About 18,000 kidney transplants are performed per year in the United States 1 , yet approximately 100,000 Americans currently await a donor kidney 2 . Stagnant donor organ numbers have increased waiting times to over 3 years and waitlist mortality to 5–10%. Despite advances in renal transplant immunology 3 , 20% of recipients will experience an episode of acute rejection within 5 years of transplantation, and approximately 40% of recipients will die or lose graft function within 10 years after transplantation. Creation of a bioengineered kidney could theoretically bypass these problems by providing an autologous graft on demand. Hemofiltration and hemodialysis use an acellular semipermeable membrane to substitute for the native kidney's functions. Several attempts have been made to bioengineer viable tubular structures to further supplement hemofiltration with cell-dependent functions 4 , 5 . When hemofiltration devices have been combined with bioengineered renal tubules, the resulting bioartificial kidney replaced renal function in uremic dogs 6 and temporarily improved renal function in patients with acute renal failure 7 , 8 . In an alternative approach, kidney primordia have been shown to develop into functional organs in vivo and prolong life when transplanted into anephric rats 9 . Efforts to make renal-assist devices more portable 10 or even implantable 11 have reached the stage of preclinical evaluation and hold tremendous promise for improving the quality of life of patients in end-stage renal failure. Autologous urinary tract tissue generated from biocompatible matrix and patient-derived cells has been used clinically for bladder augmentation 12 . A key step toward a fully implantable, permanent graft is the development of a biocompatible scaffold that facilitates cell engraftment and function and allows full recipient integration through blood perfusion. On the basis of our previous experience with whole-organ heart 13 and lung 14 extracellular matrix (ECM) scaffolds, we hypothesized that the native kidney ECM could provide such a scaffold for subsequent cell seeding. 
In previous studies, detergents were used to remove cells from native kidney ECM while preserving the biomechanical properties and matrix protein characteristics in tissue slices 15 and whole organs 16 , 17 , 18 , 19 . We therefore decellularized kidneys using detergent perfusion to create whole-organ scaffolds with intact and perfusable vascular, glomerular and tubular compartments. We repopulated the decellularized kidney scaffolds with endothelial and epithelial cells. In vitro biomimetic culture using arterial perfusion led to the formation of functional renal grafts. To test in vivo host integration and function, we transplanted bioengineered kidneys in an orthotopic position and documented urine production. Results Perfusion decellularization of cadaveric kidneys We decellularized cadaveric rat kidneys using renal artery perfusion with 1% sodium dodecyl sulfate (SDS) at a constant pressure of 40 mm Hg ( Fig. 1a ). Histology of the acellular kidneys showed preservation of tissue architecture and the complete removal of nuclei and cellular components ( Fig. 1b ). Perfusion decellularization preserved the structure and composition of the renal ECM, which is integral in filtration (glomerular basement membrane), secretion and reabsorption (tubular basement membrane). As has been seen with other tissues 13 , 14 , the arterial elastic fiber network remained preserved in acellular cortical and medullary parenchyma. Immunohistochemical staining confirmed the presence of key ECM components such as laminin and collagen IV in a physiologic distribution, for example, in the acellular glomerular basement membrane ( Fig. 1c,d ). The microarchitecture of the lobulated glomerular basement membrane with the capillary and mesangial matrix extending from the centrilobular stalk remained intact. Acellular glomeruli were further encompassed by a multilayered corrugated and continuous Bowman's capsule basement membrane ( Fig. 1e,f ). Tubular basement membranes remained preserved, with dentate evaginations extending into the proximal tubular lumen. SDS, deionized water and Triton X-100 reduced the total DNA content per kidney to less than 10% of the original content ( Fig. 1g ). After washing with PBS, SDS was undetectable in the acellular kidney scaffolds. Concentrations of total collagen and glycosaminoglycans in the ECM were preserved at levels not significantly different from those in cadaveric kidney tissue ( Fig. 1h ). To confirm the scalability of the perfusion decellularization protocol to kidneys of large animals and humans, we successfully decellularized porcine and human kidneys using a similar perfusion protocol ( Fig. 2a–d ). We confirmed preservation of perfusable channels along a hierarchical vascular bed by dye perfusion in a manner similar to that in our previous experiments with perfusion of decellularized hearts and lungs 13 , 14 ( Supplementary Fig. 1 ). Functional testing of acellular kidney scaffolds by perfusion of the vasculature with modified Krebs-Henseleit solution under physiologic perfusion pressure resulted in production of a filtrate that was high in protein, glucose and electrolytes, suggesting hydrostatic filtration across glomerular and tubular basement membranes with loss of macromolecular sieving and active reabsorption. Figure 1: Perfusion decellularization of whole rat kidneys. ( a ) Time-lapse photographs of a cadaveric rat kidney undergoing antegrade renal arterial perfusion decellularization. 
Shown are a freshly isolated kidney (left) and the same kidney after 6 h (middle) and 12 h (right) of SDS perfusion. Ra, renal artery; Rv, renal vein; U, ureter. ( b ) Representative corresponding Movat's pentachrome–stained sections of rat kidney during perfusion decellularization (black arrowheads indicate the Bowman's capsule). Scale bars, 250 μm (main images); 50 μm (insets). ( c ) Representative immunohistochemical stains of cadaveric rat kidney sections showing the distribution of elastin (black arrowheads indicate elastic fibers in the tunica media of cortical vessels), collagen IV and laminin (black arrowheads indicate the glomerular basement membranes). Scale bars, 250 μm (main images); 50 μm (insets). ( d ) Corresponding sections of decellularized rat kidney tissue after immunohistochemical staining for elastin, collagen IV and laminin confirming the preservation of extracellular matrix proteins in the absence of cells. Black arrowheads indicate the preserved vascular and glomerular basement membranes. Scale bars, 250 μm (main images); 50 μm (insets). ( e ) Transmission electron micrograph (TEM) of a cadaveric rat glomerulus showing capillaries (C), the mesangial matrix (M) and podocytes (P) surrounded by Bowman's capsule (BC). Scale bar, 10 μm. ( f ) TEM of decellularized rat glomerulus showing acellularity in decellularized kidneys with preserved capillaries, mesangial matrix and Bowman's space encapsulated by Bowman's capsule. Scale bar, 10 μm. ( g , h ) Biochemical quantification of DNA and total collagen in cadaveric and decellularized rat kidney tissue showing a reduction of DNA content and a preservation of collagen after perfusion decellularization. Data are shown as the mean ± s.d. NS, not significant. Statistical significance was determined by Student's t test. Figure 2: Perfusion decellularization of porcine and human kidneys. ( a ) Photograph of cadaveric (left) and decellularized (right) human kidneys suggesting that perfusion decellularization of rat kidneys can be upscaled to generate acellular kidney ECMs of clinically relevant size. Ra, renal artery; Rv, renal vein; U, ureter. ( b ) Corresponding pentachrome staining for decellularized human kidneys (black arrowheads indicate acellular glomeruli). Scale bar, 250 μm; insets are ×40 magnification. ( c ) Photograph of cadaveric (left) and decellularized (right) porcine kidneys. ( d ) Corresponding pentachrome staining for decellularized porcine kidneys (black arrowheads indicate acellular glomeruli). Scale bar, 250 μm; insets are ×40 magnification. To assess the microarchitecture of the acellular kidney scaffolds, we applied an established histology-based morphometry protocol to quantify the average number of glomeruli, glomerular diameter, glomerular capillary lumen and partial Bowman's space 20 . The total number of glomeruli remained unchanged between cadaveric and decellularized kidney cross sections through the hilum (12,380.25 ± 967.37 (mean ± s.d.) compared to 14,790.35 ± 2,504.93, respectively; P = 0.931). Glomerular diameter, Bowman's space and glomerular capillary surface area did not differ between cadaveric and decellularized kidneys ( Supplementary Table 1 ). Recellularization of acellular kidney scaffolds To regenerate functional kidney tissue, we repopulated acellular rat kidneys with endothelial and epithelial cells. 
We instilled suspended human umbilical venous endothelial cells (HUVECs) through the renal artery and suspended rat neonatal kidney cells (NKCs) through the ureter. Cell delivery and retention improved when kidney scaffolds were mounted in a seeding chamber under a vacuum to generate a pressure gradient across the scaffold ( Fig. 3a ). Seeding NKCs by applying positive pressure to the collecting system did not deliver cells to the glomeruli, whereas cell seeding using a transrenal gradient allowed for cell dispersion throughout the entire kidney parenchyma. Vacuum pressure exceeding 70 cm H 2 O led to tissue damage in the calyxes and parenchyma, but a vacuum pressure of 40 cm H 2 O did not cause macroscopic or microscopic tissue damage or leakage of cells, which is consistent with data on isolated tubular basement membrane mechanical properties 21 . After seeding, we transferred kidney constructs to a perfusion bioreactor designed to provide whole-organ culture conditions ( Fig. 3b,c ). After 3–5 d of perfused organ culture, HUVECs lined the vascular channels throughout the entire scaffold cross section from segmental, interlobar and arcuate arteries to glomerular and peritubular capillaries ( Fig. 3d ). Figure 3: Cell seeding and whole-organ culture of decellularized rat kidneys. ( a ) Schematic of a cell-seeding apparatus enabling endothelial cell seeding through port A attached to the renal artery (Ra) and epithelial cell seeding through port B attached to the ureter (U) while negative pressure in the organ chamber is applied to port C, thereby generating a transrenal pressure gradient. ( b ) Schematic of a whole-organ culture in a bioreactor enabling tissue perfusion through port A attached to the renal artery and drainage to a reservoir through port B. K, kidney. ( c ) Cell-seeded decellularized rat kidney in whole-organ culture. ( d ) Fluorescence micrographs of re-endothelialized kidney constructs. CD31-positive (red) and DAPI-positive HUVECs line the vascular tree across the entire graft cross section (image reconstruction, left) and form a monolayer extending to the glomerular capillaries (right; white arrowheads indicate endothelial cells). Scale bar, 500 μm (left); 50 μm (right). ( e ) Fluorescence micrographs of re-endothelialized and re-epithelialized kidney constructs showing engraftment of podocin-expressing cells (green) and endothelial cells (CD31 positive; red) in a glomerulus (left; white arrowheads indicate Bowman's capsule and the asterisk indicates the vascular pole); engraftment of Na/K-ATPase–expressing cells (green) in a basolateral distribution in tubuli resembling proximal tubular structures with the appropriate nuclear polarity (left middle); engraftment of E-cadherin–expressing cells in tubuli resembling distal tubular structures (right middle); and a three-dimensional reconstruction of a re-endothelialized vessel leading into a glomerulus (white arrowheads indicate Bowman's capsule, and the asterisk indicates the vascular pole). T, tubule; Ptc, peritubular capillary. Scale bar, 25 μm (left); 10 μm (middle and right). ( f ) Image reconstruction of an entire graft cross section confirming engraftment of podocin-expressing epithelial cells (left) and representative immunohistochemical staining of a reseeded glomerulus showing podocin expression (right). Scale bar, 500 μm (left); 50 μm (right). ( g ) Nephrin expression in regenerated glomeruli. Scale bar, 50 μm. 
( h ) Aquaporin-1 expression in regenerated proximal tubular structures (left); Na/K-ATPase expression in regenerated proximal tubular epithelium (middle left); E-cadherin expression in regenerated distal tubular epithelium (middle right); and β-1 integrin expression in a regenerated glomerulus (right). Scale bars, 50 μm. ( i ) Representative TEM of a regenerated glomerulus showing a capillary with red blood cells (R) and foot processes along the glomerular basement membrane (black arrowheads; left) and TEM of a podocyte (P) adherent to the glomerular basement membrane (black arrowheads; right). BC, Bowman's capsule. Scale bars, 2 μm. ( j ) Scanning electron micrograph of a glomerulus (white arrowheads) in a regenerated kidney graft cross section. The asterisk indicates a vascular pedicle. Scale bar, 10 μm. Because a variety of epithelial cell phenotypes in different niches along the nephron contribute to urine production, we elected to reseed a combination of rat NKCs through the ureter in addition to HUVECs through the renal artery. Enzymatic digests of freshly isolated rat neonatal kidneys produced single-cell suspensions of NKCs consisting of a heterogeneous mixture of all kidney cell types, most of which were of epithelial lineage but some of which were of endothelial and interstitial lineages. When cultured on cell-culture plastic after isolation, 8% of adherent cells expressed podocin, indicating a glomerular epithelial phenotype, 69% expressed Na/K-ATPase, indicating a proximal tubular phenotype, and 25% expressed E-cadherin, indicating a distal tubular phenotype (data not shown). After cell seeding, we mounted the kidney constructs in a perfusion bioreactor and cultured them in whole-organ biomimetic culture. An initial period of static culture enabled cell attachment, after which we initiated perfusion to provide oxygenation, nutrient supply and a filtration stimulus. Neonatal rats are unable to excrete concentrated urine due to immaturity of the tubular apparatus 22 . To accelerate in vitro nephrogenesis and maturation of NKCs in acellular kidney matrices, we supplemented the culture medium with in vivo maturation signals such as glucocorticoids and catecholamines. We cultured the reseeded kidneys under physiologic conditions for up to 12 d. Histologic evaluation after as early as 4 d in culture showed repopulation of the renal scaffold with epithelial and endothelial cells and preservation of glomerular, tubular and vascular architecture. NKCs and HUVECs engrafted in their appropriate epithelial and vascular compartments ( Fig. 3e ). The spatial relationship of the regenerated epithelium and endothelium resembled the microanatomy and polarity of the native nephron, providing the anatomic basis for water and solute filtration, secretion and reabsorption. Immunostaining revealed densely seeded glomeruli with endothelial cells and podocytes. Across the entire kidney, podocytes seemed to preferentially engraft in glomerular regions, although we did find occasional non-site-specific engraftment ( Fig. 3f,g ). Epithelial cells on the glomerular basement membranes stained positive for β-1 integrin, suggesting site-specific cell adhesion to ECM domains ( Fig. 3h ). We found that engrafted epithelial cells reestablished polarity and organized in tubular structures expressing Na/K-ATPase and aquaporin, similar to native proximal tubular epithelium. 
Epithelial cells expressing E-cadherin formed structures resembling native distal tubular epithelium and collecting ducts ( Fig. 3e,h ). E-cadherin–positive epithelial cells lined the renal pelvis, similar to native transitional epithelium. Transmission and scanning electron microscopy of regenerated kidneys showed perfused glomerular capillaries with engrafted podocytes and formation of foot processes ( Fig. 3i,j ). Morphometric analysis of regenerated kidneys showed recellularization of more than half of the glomerular matrices, resulting in an average number of cellular glomeruli per regenerated kidney that was approximately 70% that of cadaveric kidneys. The average glomerular diameter, Bowman's space and glomerular capillary lumen were smaller in regenerated kidneys compared to cadaveric kidneys ( Supplementary Table 2 ). In vitro function of acellular and regenerated kidneys After cell seeding and whole-organ culture, we tested the in vitro capacity of regenerated kidneys to filter a standardized perfusate, clear metabolites, reabsorb electrolytes and glucose and generate concentrated urine ( Fig. 4a ). Decellularized kidneys produced nearly twice as much filtrate as cadaveric controls, and regenerated kidneys produced the least amount of urine. All three groups maintained a steady urine output over the testing period ( Fig. 4b ). On the basis of the results of urinalysis, we calculated creatinine clearance as an estimate for glomerular filtration rate and fractional solute excretion as a measure of tubular absorptive and secretory function ( Fig. 4c ). Because of increased dilute urine production, the calculated creatinine clearance was increased in decellularized kidneys when compared to cadaveric kidneys, indicating increased glomerular (and probably additional tubular and ductal) filtration across acellular basement membranes. After repopulation with endothelial and epithelial cells, creatinine clearance of regenerated constructs reached approximately 10% of that in cadaveric kidneys, which indicates a decrease of glomerular filtration across a partially reconstituted and probably immature glomerular membrane ( Fig. 4c ). Figure 4: In vitro function of bioengineered kidney grafts and orthotopic transplantation. ( a ) Photograph of a bioengineered rat kidney construct undergoing in vitro testing. The kidney is perfused through the cannulated renal artery (Ra) and renal vein (Rv), while urine is drained through the ureter (U). The white arrowhead indicates the urine-air interface in the drainage tubing. ( b ) Bar graph summarizing average urine flow rate (ml min −1 ) for decellularized, cadaveric and regenerated kidneys perfused at 80 mm Hg and regenerated kidneys perfused at 120 mm Hg (regenerated*). Decellularized kidneys showed a polyuric state, while regenerated constructs were relatively oliguric compared to cadaveric kidneys. ( c ) Bar graph showing the average creatinine clearance in cadaveric, decellularized and regenerated kidneys perfused at 80 mm Hg and regenerated kidneys perfused at 120 mm Hg (regenerated*). With increased perfusion pressure, creatinine clearance in regenerated kidneys improved. ( d ) Bar graph of vascular resistance in cadaveric, decellularized and regenerated kidneys, showing an increase in vascular resistance with decellularization and partial recovery in regenerated kidneys. Error bars, s.d. ( b – d ). 
( e ) Photograph of rat peritoneum after laparotomy, left nephrectomy and orthotopic transplantation of a regenerated left kidney construct. The recipient left renal artery and left renal vein are connected to the regenerated kidney's renal artery and vein. The regenerated kidney's ureter remained cannulated for collection of urine after implantation (left). Right, photograph of the transplanted regenerated kidney construct after unclamping of the left renal artery and renal vein, showing homogeneous perfusion of the graft without signs of bleeding. ( f ) Composite histologic image of a transplanted regenerated kidney confirming perfusion across the entire kidney cross section and the absence of parenchymal bleeding. Scale bar, 500 μm. We found that vascular resistance increased with decellularization and decreased after re-endothelialization, but it remained higher in regenerated constructs compared to cadaveric kidneys ( Fig. 4d ). This finding is in line with previous observations in cardiac and pulmonary re-endothelialization 13 , 14 and may be related to the relative immaturity of the vascular bed and microemboli from cell-culture medium. When we increased the in vitro renal arterial perfusion pressure to 120 mm Hg, urine production and creatinine clearance in regenerated kidneys reached up to 23% of that in cadaveric kidneys ( Fig. 4b,c ). Albumin retention was decreased from 89.9% in cadaveric kidneys to 23.3% in decellularized kidneys, consistent with the estimated contribution of the denuded glomerular basement membrane to macromolecular sieving. With recellularization, albumin retention was partially restored to 46.9%, reducing albuminuria in regenerated kidneys. Glucose reabsorption decreased from 91.7% in cadaveric kidneys to 2.8% after decellularization, consistent with free filtration and the loss of tubular epithelium. Regenerated kidneys showed partially restored glucose reabsorption of 47.38%, suggesting engraftment of proximal tubular epithelial cells with functional membrane transporters, resulting in decreased glucosuria. Higher perfusion pressure did not lead to increased albumin or glucose loss in regenerated kidneys. Selective electrolyte reabsorption was lost in decellularized kidneys. Slightly more creatinine than electrolytes was filtered, leading to an effective fractional electrolyte retention ranging from 5% to 10%. This difference may be attributed to the electrical charge of the retained ions and the basement membrane 23 , whereas the range among ions may be related to subtle differences in diffusion dynamics across acellular vascular, glomerular and tubular basement membranes. In regenerated kidneys, electrolyte reabsorption was restored to approximately 50% of physiologic levels, which further indicates engraftment and function of proximal and distal tubular epithelial cells. Fractional urea excretion was increased in decellularized kidneys and returned to a more physiologic range in regenerated kidneys, which suggests partial reconstitution of functional collecting duct epithelium with urea transporters. For further details about the urinalysis and fractional excretion, see Supplementary Table 3 . Orthotopic transplantation of regenerated kidneys Because we observed urine production in vitro , we hypothesized that regenerated kidneys could function in vivo after orthotopic transplantation. We performed experimental left nephrectomies and transplanted regenerated left kidneys in an orthotopic position. 
We anastomosed regenerated left kidneys to the recipient's renal artery and vein ( Fig. 4e ). Throughout the entire test period, regenerated kidney grafts appeared well perfused without any evidence of bleeding from the vasculature, collecting system or parenchyma ( Fig. 4e ). The ureter remained cannulated to document in vivo production of clear urine without evidence of gross hematuria and to collect urine samples. Regenerated kidneys produced urine from shortly after the unclamping of the recipient vasculature until the planned termination of the experiment. Histological evaluation of explanted regenerated kidneys showed blood-perfused vasculature without evidence of parenchymal bleeding or microvascular thrombus formation ( Fig. 4f ). Corresponding to the in vitro studies, decellularized kidneys produced a filtrate that was high in glucose (249 ± 62.9 mg dl −1 (mean ± s.d.) compared to 29 ± 8.5 mg dl −1 in native controls) and albumin (26.85 ± 4.03 g dl −1 compared to 0.6 ± 0.4 g dl −1 in controls) but low in urea (18 ± 42.2 mg dl −1 compared to 617.3 ± 34.8 mg dl −1 in controls) and creatinine (0.5 ± 0.3 mg dl −1 compared to 24.6 ± 5.8 mg dl −1 in controls). Regenerated kidneys produced less urine than native kidneys (1.2 ± 0.1 μl min −1 (mean ± s.d.) compared to 3.2 ± 0.9 μl min −1 in native controls and 4.9 ± 1.4 μl min −1 in decellularized kidneys) with lower creatinine (1.3 ± 0.2 mg dl −1 ) and urea (28.3 ± 8.5 mg dl −1 ) than native controls but showed improved glucosuria (160 ± 20 mg dl −1 ) and albuminuria (4.67 ± 2.51 g l −1 ) when compared to decellularized kidneys. Also consistent with the in vitro results, creatinine clearance in regenerated kidneys was lower than that in native kidneys (0.01 ± 0.002 ml min −1 compared to 0.36 ± 0.09 ml min −1 in controls), as was urea excretion (0.003 ± 0.001 mg min −1 compared to 0.19 ± 0.01 mg min −1 in controls). Orthotopic transplantation of regenerated kidneys showed immediate graft function during blood perfusion through the recipient's vasculature in vivo without signs of clot formation or bleeding. Results of urinalysis corresponded to the in vitro observation of the relative immaturity of the constructs. Discussion A bioengineered kidney derived from a patient's own cells and regenerated 'on demand' for transplantation could provide an alternative treatment for patients suffering from renal failure. Although many hurdles remain, we describe a new approach for the creation of such a graft and report three milestones: the generation of three-dimensional acellular renal scaffolds using perfusion decellularization of cadaveric rat, porcine and human kidneys; the repopulation of endothelial and epithelial compartments of such renal scaffolds, leading to the formation of viable tissue; and excretory function of the resulting graft during perfusion through its vasculature in vitro and after orthotopic transplantation in vivo . In line with previous studies, we confirmed that detergent decellularization of whole rat, porcine and human kidneys removes cells and cellular debris without disruption of vascular, glomerular and tubular ultrastructure 14 , 16 , 17 , 18 . Decellularization led to a loss of cell-mediated functions such as macromolecular sieving and solute transport. Glomerular and tubular basement membranes are permeable to macromolecules, small solutes and water because filtration and reabsorption occur across these barriers in the native kidney 21 , 24 . 
Because macromolecule retention, secretion and reabsorption of metabolites and electrolytes and regulation of an acid-base equilibrium depend on viable endothelium and epithelium, we repopulated acellular scaffolds with endothelial and immature epithelial cells 25 . Whereas cell seeding of tissues such as muscle, vasculature or trachea can be accomplished by surface attachment or intraparenchymal injection, repopulation of a more complex organ such as the kidney poses numerous challenges 26 , 27 . As in our previous experiments with lungs, we took advantage of pre-existing vascular and urinary compartments and seeded endothelial cells through the vascular tree and epithelial cells through the collecting system 13 . The length and diameter of the acellular nephrons posed a major challenge to cell seeding from the urinary side. Tubule elasticity and permeability increase with decellularization, whereas tubular diameters increase with pressure 21 . Here, a transrenal pressure gradient during the initial cell seeding substantially increased cell delivery and retention. Glomerular ECM structures were predominantly repopulated with podocytes, whereas tubular structures were repopulated with tubular epithelial cells with reestablished polarity. Such site-selective engraftment of podocytes on glomerular basement membranes highlights the value of native ECM in tissue regeneration. Laminins and collagen IV are the major ECM proteins of the glomerular basement membrane and are necessary for podocyte adhesion, slit diaphragm formation and glomerular barrier function 28 , 29 . Immunohistochemical staining of matrix cross sections in our study showed preservation of these proteins within the glomerular basement membrane after perfusion decellularization. In vitro , α3β1 integrin mediates rat podocyte adhesion and regulates anchorage-dependent growth and cell-cycle control 30 , 31 . Preserved glomerular basement membrane proteins in decellularized kidney scaffolds and β-1 integrin expression in engrafted podocytes suggest site-specific cell adhesion to physiologic ECM domains. After several days in organ culture, regenerated kidney constructs produced urine in vitro . The intact whole-organ architecture of regenerated kidneys provided a unique opportunity for global functional testing and assessment of in vivo function after transplantation. In regenerated kidneys, macromolecular sieving, glucose and electrolyte reabsorption were partially restored, indicating engraftment and function of endothelial cells, podocytes and tubular epithelial cells. The glomerular filtration rate in regenerated kidneys was lower than in cadaveric kidneys, which was in part a result of increased vascular resistance leading to decreased graft perfusion. Creatinine clearance improved with increased renal arterial perfusion pressures. Fractional reabsorption of electrolytes did not reach the levels of cadaveric kidneys, which could be related to incomplete seeding and the immature stage of seeded neonatal epithelial cells 22 . Further maturation of the cell-seeded constructs will probably improve vascular patency and control of electrolyte reabsorption 28 . Despite this functional immaturity, regenerated kidney constructs provided urine production and clearance of metabolites after transplantation in vivo . We did not observe bleeding or graft thrombosis during perfusion through the recipient's vascular system. 
Successful orthotopic transplantation in rats demonstrates the advantage of physiologic graft size and anatomic features such as renal vascular conduits and ureter. Once developed further, bioengineered kidneys could become a fully implantable treatment option for renal support in end-stage kidney disease. In summary, cadaveric kidneys can be decellularized, repopulated with endothelial and epithelial cells, matured to functional kidney constructs in vitro and transplanted in an orthotopic position to provide excretory function in vivo . Translation of this technology beyond proof of principle will require the optimization of cell-seeding protocols to human-sized scaffolds and an upscaling of biomimetic organ culture, as well as the isolation, differentiation and expansion of the required cell types from clinically feasible sources. Methods Perfusion decellularization of kidneys. We isolated 68 rat kidneys for perfusion decellularization. All animal experiments were performed in accordance with the Animal Welfare Act and approved by the institutional animal care and use committee at Massachusetts General Hospital. We anesthetized male, 12-week-old Sprague-Dawley rats (Charles River Labs) using inhaled 5% isoflurane (Baxter). After systemic heparinization (American Pharmaceutical Partners) through the infrahepatic inferior vena cava, a median laparotomy exposed the retroperitoneum. After removal of Gerota's fascia, perirenal fat and kidney capsule, we transected the renal artery, vein and ureter and retrieved the kidney from the abdomen. We cannulated the ureter with a 25-gauge cannula (Harvard Apparatus). Then we cannulated the renal artery with a prefilled 25-gauge cannula (Harvard Apparatus) to allow antegrade arterial perfusion of heparinized PBS (Invitrogen) at 30 mm Hg arterial pressure for 15 min to rid the kidney of residual blood. We then administered decellularization solutions at 30 mm Hg of constant pressure in the following order: 12 h of 1% SDS (Fisher) in deionized water, 15 min of deionized water and 30 min of 1% Triton X-100 (Sigma) in deionized water. After decellularization, we washed the kidney scaffolds with PBS containing 10,000 U/ml penicillin G, 10 mg/ml streptomycin and 25 μg/ml amphotericin B (Sigma) at 1.5 ml per min constant arterial perfusion for 96 h. Rat neonatal kidney cell isolation. We euthanized day 2.5–3.0 Sprague-Dawley neonates for organ harvests. We then excised both kidneys by median laparotomy and stored them on ice (4 °C) in renal epithelial growth medium (REGM, Lonza). We transferred kidneys to a 100-mm culture dish (Corning) for residual connective tissue removal and subsequent mincing into <1 mm 3 pieces. The renal tissue slurry was resuspended in 1 mg/ml collagenase I (Invitrogen) and 1 mg/ml dispase (StemCell Technologies) in DMEM (Invitrogen) and incubated in a 37 °C shaker for 30 min. The resulting digest slurry was strained (100 μm; Fisher) and washed with 4 °C REGM. We then resuspended nonstrained tissue digested in collagenase and dispase as described above and repeated the incubation, straining and washing. The resulting cell solution was centrifuged (200 g, 5 min), and cell pellets were resuspended in 2.5 ml REGM, counted and seeded into acellular kidney scaffolds as described below. HUVEC subculture and preparation. We expanded mCherry-labeled HUVECs at passages 8–10 on gelatin A–coated (BD Biosciences) cell-culture plastic and grew the cells with endothelial growth medium-2 (EGM2, Lonza). 
At the time of seeding, cells were trypsinized, centrifuged, resuspended in 2.0 ml of EGM2, counted and subsequently seeded into decellularized kidneys as described below. Cell seeding. We trypsinized and diluted 50.67 × 10 6 ± 12.84 × 10 6 (mean ± s.d.) HUVECs in 2.0 ml EGM2 and seeded these onto the acellular kidney scaffold through the arterial cannula at a constant flow of 1.0 ml per min. Cells were allowed to attach overnight, after which perfusion culture resumed. Following the procedure above, 60.71 × 10 6 ± 11.67 × 10 6 rat neonatal kidney cells were isolated, counted and resuspended in 2.5 ml of REGM. The cell suspension was seeded through the ureter cannula after subjecting the organ chamber to a −40 cm H 2 O pressure. Cells were allowed to attach overnight, after which perfusion culture resumed. Bioreactor design and whole-organ culture. We designed and custom built the kidney bioreactor as a closed system that could be gas sterilized after cleaning and assembly, needing only to be opened once at the time of organ placement. Perfusion media and cell suspensions were infused through sterile access ports (Cole-Parmer) to minimize the risk of contamination. Media was allowed to equilibrate with 5% CO 2 and 95% room air by flowing through a silicone tube oxygenator (Cole-Parmer) before reaching the cannulated renal artery at 1.5 ml per min. The ureter and vein were allowed to drain passively into the reservoir during the biomimetic culture. Isolated kidney experiments. To assess in vitro kidney function, we perfused single native, regenerated and decellularized kidneys with 0.22 μm–filtered (Fisher) Krebs-Henseleit solution containing NaHCO 3 (25.0 mM), NaCl (118 mM), KCl (4.7 mM), MgSO 4 (1.2 mM), NaH 2 PO 4 (1.2 mM), CaCl 2 (1.2 mM), BSA (5.0 g/dl), D -glucose (100 mg/dl), urea (12 mg/dl) and creatinine (20 mg/dl) (Sigma-Aldrich). We added the amino acids glycine (750 mg/l), L -alanine (890 mg/l), L -asparagine (1,320 mg/l), L -aspartic acid (1,330 mg/l), L -glutamic acid (1,470 mg/l), L -proline (1,150 mg/l) and L -serine (1,050 mg/l) before testing (Invitrogen). Krebs-Henseleit solution was oxygenated (5% CO 2 and 95% O 2 ), warmed (37 °C) and perfused through the arterial cannula at constant pressures of 80–120 mm Hg without recirculation. Urine and venous effluent passively drained into separate collection tubes. We took samples at 10, 20, 30, 40 and 50 min after initiating perfusion and froze them immediately at –80 °C until analysis. Urine, venous effluent and perfusing Krebs-Henseleit solutions were analyzed using a Catalyst Dx Chemistry Analyzer (IDEXX). We calculated renal vascular resistance (RVR) as arterial pressure (mm Hg) divided by renal blood flow (ml/g/min). After completion of in vitro experiments, we flushed kidneys with sterile PBS, decannulated them and transferred them to a sterile container in cold (4 °C) PBS until further processing. Histology, immunofluorescence and immunohistochemistry. We processed native, decellularized and regenerated kidneys following the identical fixation protocol for paraffin embedding (5% formalin in buffered PBS; Fisher) for 24 h at room temperature, and the sections to be frozen were fixed overnight in 4% paraformaldehyde (Fisher) at 4 °C. We embedded sections in paraffin or Tissue Tek Optimal Cutting Temperature (OCT) compound (VWR) for sectioning following standard protocols. Tissue sections were cut into 5-μm sections, and H&E staining was performed (Sigma-Aldrich) using standard protocols. 
Sections were also stained with Movat's pentachrome (American Mastertech) following the manufacturer's protocol. Paraffin-embedded sections underwent deparaffinization with two changes of xylene (5 min), two changes of 100% ethanol (3 min) and two changes of 95% ethanol (3 min) and were placed in deionized water (solutions all from Fisher). For immunostaining, deparaffinized slides first underwent antigen retrieval in heated (95 °C) sodium citrate buffer solution, pH 6.0 (Dako), for 20 min and were then allowed to cool to room temperature for 20 min. For immunostaining of collagen IV, elastin and laminin epitopes, slides were blocked for 5 min in PBS and then incubated with 20 mg/ml Proteinase K (Sigma) in TE (Tris and EDTA) buffer, pH 8.0, at 37 °C for 10 min. After a 5-min block in PBS, slides received Dual Endogenous Enzyme-Blocking Reagent (Dako) for 5 min and then blocking buffer (1% BSA and 0.1% Triton X-100 in PBS; Sigma) for 30 min. Primary antibodies were allowed to attach overnight at 4 °C. Primary antibody dilutions were made with blocking buffer and were as follows: 1:50 antibody to elastin (Santa Cruz Biotechnology, sc-17581), 1:50 antibody to laminin γ-1 (B-4) (Santa Cruz Biotechnology, sc-13144), 1:50 antibody to collagen IV (Lifespan Bioscience, LS-C79592), 1:200 antibody to podocin (Abcam, ab50339), 1:200 antibody to Na/K-ATPase (Abcam, ab7671) and 1:200 antibody to E-cadherin (BD Biosciences, 610181). After primary antibody incubation, slides were washed in PBS for 5 min, and a secondary antibody conjugated to horseradish peroxidase (HRP) was added at 1:100 for 30 min (Dako). The resulting slides were washed with PBS and developed with 3,3′-diaminobenzidine (Dako) until good staining intensity was observed. Nuclei were counterstained with hematoxylin (Sigma). A coverslip was mounted using Permount (Fisher) after dehydration with a sequential alcohol gradient and xylene (Fisher). For immunofluorescence, paraffin-embedded sections underwent deparaffinization and antigen retrieval and received primary antibody dilutions prepared in blocking buffer as described above. After primary antibody addition, slides were blocked as described above. Fluorescent secondary antibodies, all 1:250 diluted in blocking buffer (anti-species antibodies conjugated to Alexa fluorophores; Invitrogen), were allowed to attach for 45 min. Nuclei were counterstained with DAPI (Invitrogen) and coverslip (Fisher) mounted using ProLong Gold antifade reagent (Invitrogen, P36930). Omission of primary antibody and species immunoglobulin G1 antibody (Vector Labs) served as negative controls for both immunohistochemistry and immunofluorescence. Immunohistochemistry, H&E- and pentachrome-stained images were recorded using a Nikon Eclipse TE200 microscope (Nikon), and immunofluorescent images were recorded using a Nikon A1R-A1 confocal microscope (Nikon). Transmission electron microscopy. Tissues were fixed in 2.0% glutaraldehyde in 0.1 M sodium cacodylate buffer, pH 7.4, overnight at 4 °C, rinsed, postfixed in 1.0% osmium tetroxide in cacodylate buffer for 1 h at room temperature and rinsed (Electron Microscopy Sciences). Then sections were dehydrated through a graded series of ethanol and infiltrated with Epon resin (Ted Pella) in a 1:1 solution of Epon and ethanol overnight. Sections were then placed in fresh Epon for several hours and embedded in Epon overnight at 60 °C. 
Thin sections were cut on a UC6 ultramicrotome (Leica), collected on formvar-coated grids, stained with uranyl acetate and lead citrate and examined in a JEM 1011 transmission electron microscope at 80 kV (Jeol). Images were collected using an AMT digital imaging system (Advanced Microscopy Techniques). SDS, DNA, collagen and sulfated glycosaminoglycan (sGAG) quantification. SDS was quantified using Stains-All Dye (Sigma) as previously described 30 . Briefly, lyophilized tissues were digested in collagenase buffer (Sigma) for 48 h at 37 °C with gentle rotation. Digest supernatants (1 μl) containing any residual SDS were then added to 4 ml of a working Stains-All Dye solution, and absorbance was measured at 488 nm. DNA was quantified using the Quant-iT PicoGreen dsDNA kit (Invitrogen). Briefly, DNA was extracted from lyophilized tissue samples in Tris-HCl buffer with Proteinase K (200 μg/ml; Sigma) for 3 h at 37 °C with gentle rotation. Digest supernatants (10 μl) were diluted in TE buffer and then mixed with prepared PicoGreen reagent. Samples were excited at 480 nm, and fluorescence was measured at 520 nm. Soluble collagen was quantified using the Sircol Assay (Biocolor), as per the manufacturer's instructions. Lyophilized tissue samples were first subjected to acid-pepsin collagen extraction overnight at 4 °C and then to overnight isolation and concentration. Assay was then performed as instructed. sGAG were quantified using the Blyscan Assay (Biocolor). Before measurement, sGAG were extracted using a papain extraction reagent (Sigma) and heated for 3 h at 65 °C. Assay was then performed as instructed. All concentrations were determined on the basis of a standard curve generated in parallel, and values were normalized to original tissue dry weight. Chemical analysis of blood and urine samples. Blood and urine chemistries were analyzed using a Catalyst Dx Chemistry Analyzer (IDEXX Laboratories) integrated with an IDEXX VetLab Station for comprehensive sample and data management. As per the manufacturer's protocol, 700 μl was analyzed for each blood sample, and 300 μl was analyzed for each urine sample. When necessary, urine samples were diluted with diluent to a sample volume of 300 μl on the basis of the urine volume collected, and the results account for these dilutions. Blood samples were first passed through a lithium heparin whole-blood separator before being analyzed, and no dilutions were needed for these samples. All samples were passed through proprietary IDEXX diagnostic CLIPs Chem 10 (ALB, ALB/GLOB, ALKP, ALT, BUN, BUN/CREA, CREA, GLOB, GLU and TP) and Lyte 4 (Cl, K, Na and Na/K), as well as single diagnostic slides for magnesium, calcium and phosphate. Morphometric quantification of glomeruli. Ten low-power fields (4×) were randomly selected from the subcapsular and juxtamedullary regions of H&E-stained sections (5 μm) of native, decellularized and regenerated kidneys ( n = 3 in each group). Glomeruli were counted in each of the ten fields to determine the average number of glomeruli per section, and the numbers of glomeruli per section in experiments from the same group were used to determine the mean number of glomeruli in each type of kidney (mean ± s.e.m.). Reseeded glomeruli in regenerated kidneys were counted as a subset in each of the ten low-power fields and then averaged per experiment. 
The percentage of reseeded glomeruli for each experiment was calculated using the average number of reseeded glomeruli compared to the average number of glomeruli per section and used to calculate the mean percentage of reseeded glomeruli in regenerated kidneys (mean percentage ± s.e.m.). Ten high-power fields (20×) of individual glomeruli from the same H&E-stained sections of native, decellularized and regenerated kidneys were used for morphometric analysis ( n = 3 in each group). All morphometric measurements were determined using ImageJ (NIH). For each of the individual glomeruli, both the long- and short-axes diameters of the renal corpuscle were measured. Bowman's space was determined by subtracting the area measured around the outer surface of the glomerular capillary bed from the area measured around the inner surface of the Bowman's capsule. All measurements were averaged per experiment, and experiments from the same group were used to determine mean values ± s.e.m. Organ preparation and orthotopic transplantation. Native, decellularized or regenerated kidneys were treated identically with the exception that native kidneys were harvested from anesthetized (5% inhaled isoflurane), 12-week-old male Sprague-Dawley rats after systemic heparinization. Native kidneys were exposed and harvested as described above for perfusion decellularization, with the exception that the left renal artery was flushed with 4 °C Belzer UW Cold Storage Solution (Bridge to Life) at 1 ml/min for 5 min before surgical manipulation of the kidney and rinsed with 20 ml sterile 4 °C PBS before implantation. Kidney grafts were prepared for orthotopic transplantation by dissecting the hilar structures (artery, vein and ureter) circumferentially on ice. The graft renal artery and vein were cuffed using a modified cuff technique described previously 17 with custom-made 24-G and 20-G FEP polymer cuffs, respectively (Smiths Medical). For in vivo experiments, 10-week-old (220–225 g) NIHRNU-M recipient rats (Taconic Farms) underwent 5% inhaled isoflurane induction and were maintained with ventilated 1–3% inhaled isoflurane through a 16-G endotracheal tube (BD Biosciences). Rats were placed supine on a heating pad (Sunbeam). After a median laparotomy and systemic heparinization through the right renal vein, the left recipient renal artery, vein and ureter were identified, dissected circumferentially and incised close to the left hilum, sparing the left suprarenal artery. The left renal artery and vein were then clamped using a micro-serrefine clamp (Fine Science Tools). The left kidney was then carefully separated from Gerota's fascia and removed. The regenerated kidney graft artery and venous cuffs were inserted into the recipient's vessels and secured with a 6-0 silk ligature (Fine Science Tools). The recipient artery and vein were then unclamped, and patent anastomoses were confirmed. Urine was allowed to drain passively from the ureter through a 25-G angiocath (Harvard Apparatus). Cadaveric orthotopic kidney transplants and decellularized kidney transplants served as controls. Change history 18 October 2013 In the version of this article initially published, the human kidneys in Figure 2a,b were incorrectly described as porcine and the porcine kidneys in Figure 2c,d were incorrectly described as human in the figure legend. The errors have been corrected in the HTML and PDF versions of the article.
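To make the functional readouts described in the Methods concrete (creatinine clearance as an estimate of glomerular filtration rate, fractional solute excretion, and renal vascular resistance calculated as arterial pressure divided by renal blood flow), here is a minimal sketch of the calculations; the function names and input values are hypothetical, chosen only to loosely echo the ranges reported in the paper.

```python
def creatinine_clearance(urine_cr, plasma_cr, urine_flow_ml_min):
    """Creatinine clearance (ml/min), used as an estimate of GFR:
    C_cr = (U_cr * V) / P_cr, with U_cr and P_cr in the same units."""
    return urine_cr * urine_flow_ml_min / plasma_cr

def fractional_excretion(urine_x, plasma_x, urine_cr, plasma_cr):
    """Fractional excretion of solute x, referenced to creatinine:
    FE_x = (U_x * P_cr) / (P_x * U_cr); effective retention = 1 - FE_x."""
    return (urine_x * plasma_cr) / (plasma_x * urine_cr)

def renal_vascular_resistance(arterial_pressure_mm_hg, flow_ml_g_min):
    """RVR as defined in Methods: arterial pressure / renal blood flow."""
    return arterial_pressure_mm_hg / flow_ml_g_min

# Hypothetical values loosely echoing a decellularized-kidney perfusion run
c_cr = creatinine_clearance(urine_cr=18.0, plasma_cr=20.0, urine_flow_ml_min=0.0049)
fe_na = fractional_excretion(urine_x=100.0, plasma_x=118.0, urine_cr=18.0, plasma_cr=20.0)
rvr = renal_vascular_resistance(arterial_pressure_mm_hg=80.0, flow_ml_g_min=1.5)

print(f"creatinine clearance: {c_cr:.4f} ml/min")
print(f"fractional Na excretion: {fe_na:.2f} (effective retention ~ {1 - fe_na:.0%})")
print(f"RVR: {rvr:.1f} mm Hg per ml/g/min")
```

With these illustrative inputs, the effective electrolyte retention comes out near 6%, in the 5–10% range the paper reports for decellularized scaffolds.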
Bioengineered rat kidneys developed by Massachusetts General Hospital (MGH) investigators successfully produced urine both in a laboratory apparatus and after being transplanted into living animals. In their report, which received advance online publication in Nature Medicine, the research team describes building functional replacement kidneys on the structure of donor organs from which living cells had been stripped, an approach previously used to create bioartificial hearts, lungs and livers. "What is unique about this approach is that the native organ's architecture is preserved, so that the resulting graft can be transplanted just like a donor kidney and connected to the recipient's vascular and urinary systems," says Harald Ott, MD, PhD, of the MGH Center for Regenerative Medicine, senior author of the Nature Medicine article. "If this technology can be scaled to human-sized grafts, patients suffering from renal failure who are currently waiting for donor kidneys or who are not transplant candidates could theoretically receive new organs derived from their own cells." Around 18,000 kidney transplants are performed in the U.S. each year, but 100,000 Americans with end-stage kidney disease are still waiting for a donor organ. Even those fortunate enough to receive a transplant face a lifetime of immunosuppressive drugs, which pose many health risks and cannot totally eliminate the risk of eventual organ rejection. The approach used in this study to engineer donor organs, based on a technology that Ott developed as a research fellow at the University of Minnesota, involves stripping the living cells from a donor organ with a detergent solution and then repopulating the collagen scaffold that remains with the appropriate cell types – in this instance, human endothelial cells to replace the lining of the vascular system, and kidney cells from newborn rats. The research team first decellularized rat kidneys to confirm that the organ's complex structures would be preserved. They also showed the technique worked on a larger scale by stripping cells from pig and human kidneys. Making sure the appropriate cells were seeded into the correct portions of the collagen scaffold required delivering vascular cells through the renal artery and kidney cells through the ureter. Precisely adjusting the pressures of the solutions enabled the cells to be dispersed throughout the whole organs, which were then cultured in a bioreactor for up to 12 days. The researchers first tested the repopulated organs in a device that passed blood through the vascular system and drained off any urine, which revealed evidence of limited filtering of blood, molecular activity and urine production. This is a previously decellularized rat kidney after reseeding with endothelial cells, to repopulate the organ's vascular system, and neonatal kidney cells. Credit: Massachusetts General Hospital Center for Regenerative Medicine Bioengineered kidneys transplanted into living rats from which one kidney had been removed began producing urine as soon as the blood supply was restored, with no evidence of bleeding or clot formation. The overall function of the regenerated organs was significantly reduced compared with that of normal, healthy kidneys, something the researchers believe may be attributable to the immaturity of the neonatal cells used to repopulate the scaffolding. "Further refinement of the cell types used for seeding and additional maturation in culture may allow us to achieve a more functional organ," says Ott.
"Based on this inital proof of principle, we hope that bioengineered kidneys will someday be able to fully replace kidney function just as donor kidneys do. In an ideal world, such grafts could be produced 'on demand" from a patient's own cells, helping us overcome both the organ shortage and the need for chronic immunosuppression. We're now investigating methods of deriving the necessary cell types from patient-derived cells and refining the cell-seeding and organ culture methods to handle human-sized organs." Ott's team focuses on the regeneration of hearts, lungs, kidneys and grafts made of composite tissues, while other teams – including one from the MGH Center for Engineering in Medicine – are using the decellularization technique to develop replacement livers. Lead author of the Nature Medicine paper is Jeremy Song, MGH Center for Regenerative Medicine; additional co-authors are Jacques Guyette, PhD, Sarah Gilpin, PhD, Gabriel Gonzalez, PhD, and Joseph Vacanti, MD, all of the MGH Center for Regenerative Medicine. The study was supported by National Institute of Health Director's New Innovator Award DP2 OD008749-01.
dx.doi.org/10.1038/nm.3154
Nano
Filter membrane renders viruses harmless
Archana Palika et al, An antiviral trap made of protein nanofibrils and iron oxyhydroxide nanoparticles, Nature Nanotechnology (2021). DOI: 10.1038/s41565-021-00920-5 Journal information: Nature Nanotechnology
http://dx.doi.org/10.1038/s41565-021-00920-5
https://phys.org/news/2021-06-filter-membrane-viruses-harmless.html
Abstract Minimizing the spread of viruses in the environment is the first line of defence when fighting outbreaks and pandemics, but the current COVID-19 pandemic demonstrates how difficult this is on a global scale, particularly in a sustainable and environmentally friendly way. Here we introduce and develop a sustainable and biodegradable antiviral filtration membrane composed of amyloid nanofibrils made from food-grade milk proteins and iron oxyhydroxide nanoparticles synthesized in situ from iron salts by simple pH tuning. Thus, all the membrane components are made of environmentally friendly, non-toxic and widely available materials. The membrane has outstanding efficacy against a broad range of viruses, which include enveloped, non-enveloped, airborne and waterborne viruses, such as SARS-CoV-2, H1N1 (the influenza A virus strain responsible for the swine flu pandemic in 2009) and enterovirus 71 (a non-enveloped virus resistant to harsh conditions, such as highly acidic pH), which highlights a possible role in fighting the current and future viral outbreaks and pandemics. Main The ongoing COVID-19 pandemic illustrates the importance of effective therapeutic tools 1 , but also the notable lack of technologies capable of fighting the spread of viruses in the environment. Many viruses diffuse and transmit in the environment in water, some in their bulk form (waterborne viruses) 2 , 3 , 4 and others in small droplets suspended in the air (airborne viruses) 5 , 6 , 7 , 8 , 9 . A key defence strategy against infectious diseases is always the prevention of pathogen transmission from an infected person to an uninfected one. This is achieved by using masks, gloves, physical barriers and disinfection, which introduces other challenges on a global scale, such as achieving the goal in a fully sustainable and environmentally friendly way 10 , 11 , 12 . In particular, the COVID-19 pandemic revealed the paradox that, although both technological and scientific knowledge are available to develop a vaccine within the record time of less than a year 13 , 14 , there is still a lack of preparedness to fight the rapid spread of new viruses until such vaccines are developed and a substantial portion of the population is vaccinated 15 . Without the appropriate readiness, viruses spread widely and rapidly and, eventually, new virus strains emerge via mutations 16 , 17 , which could potentially confer resistance to vaccines that target the original strain or increase the virulence of the virus, and possibly lead to an endless vicious cycle. Viruses can spread through many different routes, but mostly through fomites 18 , small water droplets 5 , 6 , 7 , 8 , 9 and bulk water bodies (which include wastewater) 2 , 3 , 4 . Proper hand hygiene serves as a very effective practice against the spread of infections through fomites 19 . In all the other cases, however, virus inactivation must be tackled in the hosting fluid, and the overarching strategy is then to target the virus in its surrounding aqueous environment, whether in the form of microscopic suspended water droplets or bulk water. For airborne viruses, suitable face masks, although effective, pose the risk of further dissemination of the viruses when improperly handled 20 and/or disposed of 21 , 22 . Additionally, the generated plastic waste eventually emerges as a parallel environmental problem, especially in times of pandemics 10 , 11 , 12 .
For waterborne viruses, and despite decades-long technological developments, contaminated drinking water is still responsible for 500,000 annual deaths, of which more than half occur in children under five years of age 23 . Non-enveloped enteric viruses, such as enteroviruses, adenoviruses and rotaviruses, can cause gastrointestinal infections with diseases such as diarrhoea and dysentery. It is estimated that ~40% of often-fatal childhood diarrhoea in developing countries is connected to viral agents 24 . Non-enveloped viruses can persist in water bodies for long periods of time 25 and can resist even some of the harshest treatments 26 , 27 . This challenge is not exclusive to disadvantaged communities, but extends also to countries with state-of-the-art water and wastewater treatment facilities 2 . Furthermore, even enveloped viruses, such as influenza viruses and coronaviruses, which were often regarded as unstable in water environments, have now been shown to remain highly infective for long periods in bulk water bodies 28 , 29 : SARS-CoV-2, for example, can retain its infectivity for longer than seven days in tap water and wastewater at room temperature 30 . Therefore, the development of efficient barriers against the spread of viruses via diffusing environmental fluids becomes crucial if global contamination is to be prevented. In spite of decades of scientific and technological development, no existing technology can universally eliminate viruses from water, unless it is extremely energy intensive (for example, reverse osmosis) 31 or starts to pose the risk of toxicity towards humans and the environment (for example, silver-based technologies) 32 , 33 . All these limitations render the existing technologies inadequate in the face of global challenges such as pandemics. To address such a challenge, we developed an antiviral membrane trap composed of amyloid nanofibrils obtained from a food-grade milk protein, β-lactoglobulin (BLG), modified in situ with iron oxyhydroxide nanoparticles (NPs) (Fig. 1a ). The conversion of BLG monomers into a network of amyloid fibrils (AFs) is a straightforward process achieved by simply lowering the pH to 2.0 with simultaneous heating up to 90 °C; afterwards, the iron oxyhydroxide NPs are precipitated directly on the formed network of AFs by raising the pH in the presence of FeCl 3 ·6H 2 O. Figure 1b shows a schematic of the process; the detailed chemical analysis and composition of the iron NPs as studied by X-ray photoelectron spectroscopy (XPS) reveals the presence of both iron(II) and iron(III) oxide and iron chloride species that coexist with the majority of iron oxyhydroxide NPs (Supplementary Fig. 1 ). By depositing the produced material on a porous solid support, we obtained a filtration membrane composed of an intricate network of BLG AFs decorated by Fe oxyhydroxide NPs a few nanometres in size (in what follows, the hybrid membrane is simply referred to as BLG AF–Fe and the decorating iron nanoparticles as Fe NPs; BLG AF–Fe hybrids refer instead to the solution precursor). The resulting membrane is thus composed of food-grade components and shows no toxicity towards experimentally treated cell lines, as demonstrated by cytotoxicity tests (Supplementary Fig. 2 ). The general structure of the membrane can be clearly observed by cryogenic scanning electron microscopy (cryo-SEM) (Fig.
1c ) on a fracture plane through the inner structure of the fully hydrated filter: controlled etching lowers the level of water and enhances the visibility of the BLG AF–Fe hybrids. Alternatively, Fe NPs that decorate the AF can be visualized by cryogenic transmission electron microscopy (cryo-TEM) (Fig. 1d ) on samples prepared using a fivefold lower Fe concentration to allow the resolution of single NPs. Finally, scanning electron microscopy (SEM) micrographs (Fig. 1e ) of fully dried membranes reveal, in high-contrast images, the dense packing of the Fe NPs on the surface of AFs. It is important to mention that the observed difference in the Fe NP density between the cryo-SEM (Fig. 1c ) and the SEM (Fig. 1e ) micrographs is due to the shrinkage of the sample during preparation and drying. Fig. 1: Schematic, fabrication and characterization of BLG AF–Fe membranes. a , A schematic showing the filtration set-up. b , A schematic showing the fabrication processes of the BLG AF–Fe membranes. c , Cryo-SEM micrographs of freeze-fractured and freeze-etched hydrated BLG AF–Fe hybrids at two magnifications. d , Cryo-TEM image of the BLG AF–Fe hybrids; the BLG AF–Fe hybrids were prepared using a fivefold lower Fe concentration (10 mg Fe ml −1 ) than the concentration normally used to prepare the membranes to enable the visualization of single Fe NPs. e , SEM micrographs of a chemically fixed and critical point-dried sample of the filter bulk material at two magnifications. Representations of the virions in a are reproduced from pictures on ViralZone 51 . The cartoon structure of the BLG monomers in b is based on the crystallographic structures 5IO5 52 obtained from the Protein Data Bank 53 (accessed October 2020). Performance of membranes on enveloped viruses We first tested this filtration membrane trap by filtering water that contained three different types of enveloped viruses: (1) Φ6, an enveloped bacteriophage that infects Pseudomonas syringae bacteria and is often used as a surrogate of human enveloped viruses 34 , (2) H1N1, an influenza A virus strain responsible for the swine flu pandemic in 2009 35 and (3) SARS-CoV-2, the coronavirus strain responsible for the ongoing COVID-19 pandemic. The membrane reduced infectivity by more than six orders of magnitude for all three viruses (Fig. 2 ). The infectious virus concentrations went from ~10 6 PFU ml −1 before filtration to below the detection limit after filtration for all three viruses. No remarkable effect on the infectivity of the viruses was observed when filtering the viruses through the cellulose support or the BLG AF alone, which suggests a unique synergistic effect of the BLG AF–Fe membranes. The filter has a capacity of ~7 × 10 3 PFU mg −1 of BLG AF–Fe (Supplementary Fig. 4 ), as determined both by repeated cycles of filtration and by varying the total volume of the BLG AF–Fe hybrid solution used to prepare the final membrane. Fig. 2: Elimination of infectious enveloped viruses for water filtered through BLG AF–Fe membranes. a – c , Complete elimination of infectious viruses and the corresponding reduction in the genome count for Φ6 ( a ), H1N1 ( b ) and SARS-CoV-2 ( c ) when filtered through BLG AF–Fe membranes (blue, before filtration; grey, after filtration). A limited or no elimination was observed when filtering the same viruses through the cellulose support or the BLG AFs alone.
The genome count of Φ6 is lower than those of the other two viruses, probably because of both a higher ratio of infectious viruses to genome count for Φ6 and the low efficiency of genome extraction from these phages (Supplementary Table 2 ). Φ6 infectivity represents the plaque count from one plate of a series of dilutions that consist of at least three plates. A replicate of the Φ6 filtration experiment for which the infectivity was calculated using three technical replicas is shown in Supplementary Fig. 3 . The genome count for Φ6 represents the average of four technical replicas and the error bars represent the standard deviation (s.d.). The infectivity and genome count of H1N1 as well as the genome count of SARS-CoV-2 represent the average of two technical replicas and the error bars represent the range. The infectivity of SARS-CoV-2 represents the average of four technical replicas and the error bars represent the s.d. LOD, limit of detection; *below the LOD. Representations of the virions are reproduced from pictures on ViralZone 51 . By quantifying the viral genomes, which indicate the total number of viruses (both infectious and non-infectious), before and after filtration (Fig. 2 ), we observed that most of the viruses were retained on the membrane material. Still, for H1N1 and SARS-CoV-2, a detectable number of genomes passed through the filter. It is, however, important to reiterate that the infective viruses in the filtrate were below the detection limit for both viruses. Even under the conservative assumption that the concentration of the infective virus in the filtrate is equal to the detection limit, we still observed a remarkable decrease in the ratio of infective to total viruses after filtration (Supplementary Fig. 5 ). These results suggest that the membrane not only retains the virus, but also strongly inactivates it. Further assessment of the virucidal effect of the membrane was conducted by attempting to recover the Φ6 viruses retained on the membrane filter. This was done by incubating the membrane material used for Φ6 filtration in a beef buffer of pH 9.3—beef buffer has often been used to recover viruses adsorbed to iron oxides 36 . Keeping in mind the challenges of an efficient recovery of adsorbed viruses and potential damage to the virus in that process, we still observed a pronounced decrease in the ratio of infective to total viruses, from 0.05 for the filtered solution to as low as 8.8 × 10 −4 for the recovered viruses (Supplementary Fig. 6 ). This observation provides additional evidence that the viruses not only adsorb to the membrane but are also mostly inactivated. Inactivation mechanisms To further investigate the mechanism by which the BLG AF–Fe membrane eliminates viruses, we ran experiments in which the viruses were incubated with a suspension of BLG AF–Fe hybrids in PBS buffer for one hour, after which the infectivity of the viruses was assessed. No infective Φ6 viruses were detected in the solution at the end of the incubation time (Fig. 3a ). These results show that virus elimination during filtration cannot be attributed to simple retention on the filter material due to small pore sizes, that is, by size exclusion: indeed, no change in the Φ6 infectivity was observed in control experiments when the viruses were incubated with BLG AF or Fe NPs alone.
The reduction in infective viruses in solution started to become noticeable when the viruses were incubated with BLG monomer–Fe; still, it was neither as effective nor as reproducible as incubation with BLG AF–Fe hybrids. These results were further supported by transmission electron microscopy (TEM) images of the Φ6 viruses alone (Fig. 3b ), Φ6 incubated with BLG AF–Fe hybrids pre-adsorbed on carbon films (Fig. 3c ) and Φ6 incubated with BLG AF–Fe hybrids imaged using cryo-TEM (Fig. 3d ). Figure 3b shows a high concentration of spherical and intact Φ6 virions. The top image of Fig. 3c shows that BLG AF–Fe is densely covered with Φ6 phages. It is important to note that the heavy-metal staining used to produce contrast in the TEM image stabilizes single molecules, fibrils and virus particles adsorbed on the grid. Bulky pieces of BLG AF–Fe, however, are too large, and because of their hydrogel nature the stabilizing effect of the stain is not sufficient to stop them from shrinking during the air-drying procedure after staining. The viruses attached to the surface of BLG AF–Fe hybrids (Fig. 3c ) appear to be elongated in a direction perpendicular to the surface, which suggests that such elongation was driven by the shrinkage of the membrane on drying and indicates a very strong interaction between the viruses and the BLG AF–Fe hybrids. A weak interaction would probably have resulted in the release of the viruses from the surface of BLG AF–Fe hybrids on washing and drying the samples. Such strong interactions could be sufficient to inactivate the viruses, as previously observed for Φ6 viruses that interact with montmorillonite clay minerals 37 . Additional experiments conducted under anoxic conditions showed that no oxygen-mediated inactivation took place (Supplementary Fig. 8 ), which again supports attachment of the viruses to the BLG AF–Fe as the main mechanism behind virus elimination. Incubation experiments with H1N1 (Fig. 3e ) and SARS-CoV-2 (Fig. 3f ) showed comparable results to those of Φ6, which suggests a similar elimination mechanism for all three viruses. Fig. 3: Mechanism of elimination of infective viruses by BLG AF–Fe hybrids. a , Infectivity of Φ6 after 1 h of incubation with BLG AF, 30 nm Fe NPs, BLG monomer–Fe and BLG AF–Fe. A control experiment was conducted by incubating the virus in PBS buffer without any additives. Complete elimination of infectious Φ6 is only achieved when incubated with BLG AF–Fe hybrids for 60 min; no or very limited elimination was observed when the viruses were incubated with BLG AF or 30 nm Fe NPs alone; elimination became substantial, yet partial, when the virus was incubated in BLG monomer–Fe. The plotted data are the average of three technical replicas and the error bars represent the s.d., except for the BLG monomer–Fe data, which are the average of two technical replicas and the error bars represent the range. b , TEM micrographs of negatively stained Φ6 showing intact phages. c , TEM micrographs of negatively stained Φ6 incubated with BLG AF–Fe hybrids. d , Cryo-TEM micrograph of Φ6 incubated with BLG AF–Fe hybrids. Of the eight viruses in the field of view, two are not associated with BLG AF–Fe (left side) and six are associated (on the top right and lower end of the image) with BLG AF–Fe. No clear signs of structural damage to the viruses can be seen. e , f , Infectivity of H1N1 ( e ) and SARS-CoV-2 ( f ) after 1 h of incubation with BLG AF, BLG monomer–Fe and BLG AF–Fe.
A control experiment was conducted by incubating the virus in PBS buffer without any additives. Complete-to-partial elimination of infectious H1N1 and SARS-CoV-2 occurred when incubated with BLG AF–Fe hybrids. No elimination was observed when the viruses were incubated with BLG AF alone. A substantial elimination was observed when the viruses were incubated with BLG monomer–Fe. H1N1 data represent the average of two technical replicas and the error bars represent the range. SARS-CoV-2 data are the average of three technical replicas with error bars that represent the s.d. *Below the LOD. These results together show that the Fe NPs combined with the AF features are the key element of the membrane: BLG AF alone did not show any virucidal effect, whereas BLG monomer–Fe had a noticeable, yet moderate, inactivating effect on the viruses compared with that of BLG AF–Fe hybrids. Previous studies suggested that strong attractive forces between positively charged iron hydroxides 38 and negatively charged viruses 39 can partially inactivate the virus via interactions with its capsid 40 , 41 , 42 . Our results, indeed, confirm that the Fe NPs, the BLG monomer–Fe and BLG AF–Fe are all positively charged at physiological pH values (Supplementary Fig. 9 ), whereas literature values show that H1N1 43 and the spike protein of SARS-CoV-2 44 are both negatively charged at physiological pH values. What is critically important, however, is the surface-to-volume ratio at which these surfaces become available: the 30 nm Fe NP control had no virucidal effect on Φ6, the BLG monomer–Fe had a noticeable, partial effect and only BLG AF–Fe had an outstanding effect. The BLG AF provide an intricate network template that supports the formation of an Fe coating a few nanometres thick on their elongated surface, that is, at a remarkably higher surface-to-volume ratio than that offered by the spherical geometry of Fe NPs and BLG monomer–Fe NPs. We anticipate that further insights into virus inactivation mechanisms within the BLG AF–Fe membranes could be gained with techniques such as radioactive labelling of the different components of the viruses, genome-wide PCR analysis and/or a variety of mass spectrometry techniques, and so contribute to addressing the long-standing question of the molecular mechanisms behind virus inactivation at liquid/solid interfaces. Performance of membranes on non-enveloped viruses When tested on non-enveloped viruses, the BLG AF–Fe membranes again showed outstanding performance. An elimination efficiency of more than six orders of magnitude was found for MS2 (Fig. 4a ), a non-enveloped bacteriophage of ~28 nm diameter, which is very often used as a surrogate for human non-enveloped viruses 45 . The infectious virus concentrations went from ~10 6 PFU ml −1 before filtration to below the detection limit after filtration. Neither the cellulose support nor BLG AF alone showed any detectable effect on the infectivity of filtered MS2, which again demonstrates the unique synergistic effect of the BLG AF–Fe membranes. By quantifying the genome count before and after filtration (Supplementary Fig. 10 ), we observed that most of the viruses were retained on the membrane material, although a detectable number of viral genomes still passed through the filter. After filtration, the ratio of infective to total viruses (considered to be equivalent to the genome count) decreased by several orders of magnitude (Supplementary Fig. 10 ).
The observed results again indicate that, in the case of non-enveloped viruses, BLG AF–Fe hybrid membranes not only retain the virus but also probably inactivate it. Further assessment of the virucidal effect of the membrane was conducted by recovering the MS2 viruses retained on the membrane filter, using an approach similar to that used for Φ6. The total genomes detected in the filtrate and recovered from the membrane accounted for ~65% of the total filtered viruses. Although we can exclude neither a preferential recovery of non-infective over infective viruses nor a partial inactivation of the viruses during the recovery process, we still observed a clear decrease in the ratio of infective to total viruses, from 0.03 for the filtered solution to 0.01 for the recovered viruses (Fig. 4b ). Finally, we tested the membrane against enterovirus 71 (EV71), a highly robust virus that is known to retain its infectivity in the digestive tract and is also highly resistant to acidic conditions. Figure 4c shows that after filtration through BLG AF–Fe membranes the infectivity of EV71 dropped to approximately one-third of its value before filtration. No reduction in infectivity was observed for filtration through the cellulose support or the BLG AF alone, which demonstrates again the efficacy of the membranes developed. Fig. 4: Complete and partial elimination of infectious non-enveloped viruses for water filtered through BLG AF–Fe membranes. a , Complete elimination of infectious MS2 viruses (the corresponding reduction in the genome count is shown in Supplementary Fig. 10 ) when filtered through BLG AF–Fe membranes (blue, before filtration; grey, after filtration). No detectable elimination was observed when filtering the same viruses through the cellulose support or the BLG AFs alone. The plotted infectivity represents the plaque count from one plate of a series of dilutions that consist of at least three plates. A replicate of MS2 with an ~10 9 PFU ml −1 filtration through the BLG AF–Fe membrane showed a reduction of more than three orders of magnitude (data not shown). b , Genome count and infectivity of the total amount of MS2 viruses filtered and recovered: total genome count filtered = volume of filtered solution (ml) × genome count before filtration (RNA copies ml −1 ); total infectious viruses filtered = volume of filtered solution (ml) × infectivity before filtration (PFU ml −1 ); total genome count recovered = volume of filtered solution (ml) × genome count after filtration (RNA copies ml −1 ) + volume of beef buffer (ml) × genome count in beef buffer after 1 h of incubation (RNA copies ml −1 ); total infectious viruses recovered = volume of filtered solution (ml) × infectivity after filtration (PFU ml −1 ) + volume of beef buffer (ml) × infectivity in beef buffer after 1 h of incubation (PFU ml −1 ). Beef buffer (pH 9.3) was used to desorb the viruses from the BLG AF–Fe. The genome count for MS2 represents the average of four technical replicas. c , Substantial elimination of infectious enterovirus (EV71) viruses when filtered through BLG AF–Fe membranes. No detectable elimination was observed when the same viruses were filtered through the cellulose support or the BLG AFs alone. The results for EV71 represent the average of two technical replicas and the error bars represent the range. *Below the LOD. Representations of virions are reproduced from pictures on ViralZone 51 .
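The bookkeeping in the Fig. 4b legend reduces to a handful of multiplications. The sketch below is a minimal Python rendering of those definitions using invented volumes and titres (placeholders, not the measured data); it also computes the log10 reduction value that corresponds to the orders-of-magnitude statements quoted in the text. The variable names are ours.

```python
import math

# Placeholder inputs (not the measured values from the paper)
v_filtered_ml = 10.0     # volume of virus solution passed through the membrane
v_beef_ml = 5.0          # volume of pH 9.3 beef buffer used to desorb retained viruses
genome_before = 1e8      # genome copies per ml before filtration
genome_after = 2e5       # genome copies per ml in the filtrate
genome_recovered = 1.3e8 # genome copies per ml in beef buffer after 1 h incubation
pfu_before = 1e6         # infectious titre (PFU per ml) before filtration
pfu_after = 1.0          # filtrate titre; use the detection limit if nothing is detected
pfu_recovered = 1e5      # infectious titre recovered in beef buffer

# Definitions taken directly from the Fig. 4b legend:
total_genomes_filtered = v_filtered_ml * genome_before
total_infectious_filtered = v_filtered_ml * pfu_before
total_genomes_recovered = v_filtered_ml * genome_after + v_beef_ml * genome_recovered
total_infectious_recovered = v_filtered_ml * pfu_after + v_beef_ml * pfu_recovered

recovery_fraction = total_genomes_recovered / total_genomes_filtered
ratio_filtered = total_infectious_filtered / total_genomes_filtered
ratio_recovered = total_infectious_recovered / total_genomes_recovered

# Log reduction value (LRV) of infectivity across the membrane
lrv = math.log10(pfu_before / pfu_after)

print(f"genomes recovered: {recovery_fraction:.0%} of genomes filtered")
print(f"infective/total ratio: {ratio_filtered:.3f} filtered vs {ratio_recovered:.3f} recovered")
print(f"infectivity reduction: {lrv:.1f} orders of magnitude")
```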
Sustainability footprint of the membranes To assess the performance, cost and environmental impact of the BLG AF–Fe membranes in a broader context, we evaluated their efficiency and sustainability footprint and benchmarked them against one of the most widely used membrane technologies for virus removal: nanofiltration (NF). NF is a relatively simple process that retains a wide range of viruses via size exclusion, with removal efficiencies of 4–6 orders of magnitude 46 . The evaluation is based on the three pillars of sustainability, that is, techno-economic, environmental and social, reflected by eight discriminants: operating costs, investment cost, energy consumed, water recovery, removal efficiency, pressure, public acceptability and environmental friendliness. The performance of the technology in each indicator is ranked with a low ( i = 1), medium ( i = 2) or high ( i = 3) score 47 . By doing so, semi-qualitative factors, such as public acceptability and environmental friendliness, can also be included in the assessment. As shown in Fig. 5 , NF scores medium and low in most discriminants, due to its high price, pressure, energy consumption 48 and the release of toxic organic solvents during fabrication 49 , which is a typical problem of polymeric membranes 50 . The overall sustainability footprint is estimated by averaging the individual components as \(100\% \times \frac{1}{8}\sum_{j=1}^{8}\left(\frac{i}{3}\right)_j\), where each of the j = 8 factors carries a weight between 1/3 and 1 depending on the score i (a short numeric sketch of this scoring scheme follows the Methods summary below). The sustainability footprints for BLG AF–Fe membranes and NF were estimated at 96% and 58%, respectively, which clearly highlights the superiority of BLG AF–Fe membranes over conventional membrane processes for virus removal in terms of efficiency, cost and sustainability. Although the sustainability footprint of traditional membrane technologies can be slightly increased by considering less environmentally aggressive methods than NF, such as ultrafiltration, for which the sustainability footprint is ~63% (Supplementary Fig. 11 ), this comes at the expense of virus removal efficiency, which drops to values insufficient for safety requirements (for example, 2–4 orders of magnitude in ultrafiltration; Supplementary Fig. 11 ). Finally, we also showed that the mechanical stability of BLG AF–Fe can be further enhanced by introducing cellulose and carbon, which extends the time of service without a loss of performance (Supplementary Fig. 12 ). Fig. 5: BLG AF–Fe membranes versus NF sustainability footprint. The factors considered are operating costs (OPEX), investment cost (CAPEX), energy consumption, water recovery, removal efficiency, pressure, public acceptability and environmental friendliness. The scores of technologies in each discriminant are shown by red (low performance), yellow (medium) and green (high performance). The overall sustainability footprint is obtained by a weighted averaging of the score in each discriminant (see the text for details). In this analysis, an industrial grade of BLG, that is, whey, is considered as the source for AF preparation. Concluding remarks In summary, we have shown the general and broad efficacy of AF–Fe membranes against both enveloped and non-enveloped viruses, which include key viruses such as SARS-CoV-2, H1N1 and EV71.
The membrane introduced in this work is made by combining two widely available, food-grade components: AFs obtained by fibrillization of the milk protein BLG, on which iron oxyhydroxide NPs are synthesized in situ from iron salts by simple pH changes, in a straightforward fabrication procedure. This is an antiviral filtration membrane made entirely of biosourced and biodegradable components. When combined with the outstanding virucidal properties of the membrane and the inactivation of the virus within it, these characteristics may allow disposal of used membranes that is safe for both humans and the environment. Taken together, these results make this technology of immediate importance for mitigating current and future viral pandemics, as well as for addressing worldwide clean water challenges associated with pathogens. Methods A full detailed description of the materials and methods used in this work is provided in the Supplementary Information . A brief summary is given below. Materials The protein BLG was purified from whey protein isolate received as a kind gift from Fonterra. For the viruses, Φ6 bacteriophage (21518) and MS2 bacteriophage (13767) were from the DSMZ culture collection; SARS-CoV-2/Switzerland/GE9586/2020 was isolated from a clinical specimen in the University Hospital in Geneva and replicated twice in Vero-E6 before the experiments. SARS-CoV-2/human/Switzerland/IMV5/2020 was isolated from a clinical specimen at the Institute of Medical Virology, University of Zurich and has been described previously 54 , 55 . Human H1N1 virus A/Netherlands/602/2009 was a gift from M. Schmolke (Department of Microbiology and Molecular Medicine). EV71 was isolated from a clinical specimen at the University Hospital of Geneva in rhabdomyosarcoma cells. Cells were infected and the supernatant was collected 2 days postinfection, clarified, aliquoted and frozen at −80 °C before titration by plaque assay in rhabdomyosarcoma cells. Purification of BLG monomers and preparation of AFs The purification of BLG from the whey protein and the preparation of AFs from the purified protein are discussed in a previous report 56 . Preparation of BLG AF–Fe hybrids AFs coated with Fe NPs were obtained by mixing AF solution at pH 2 with an aqueous solution of FeCl 3 ·6H 2 O and adjusting the pH to 7 with NaOH. Preparation of BLG monomer–Fe NPs BLG monomer–Fe NPs were prepared using a protocol similar to that used to prepare the BLG AF–Fe hybrids, but using BLG monomers instead of BLG AF. Preparation of BLG AF–Fe membranes A syringe-aided filtration set-up was used to prepare BLG AF–Fe membranes. A cellulose support with a 0.45 µm pore size was inserted into the filtration set-up and placed on a glass bottle. BLG AF–Fe hybrids (8 ml) were drawn into a syringe and injected into the filtration system, and extra water was drained to form a membrane. Characterization of BLG AF–Fe hybrids by XPS XPS was used to determine the AF–Fe hybrid composition. Iron(II) and iron(III) hydroxides were found to contribute 26.4% and 63.2% of the total peak area, respectively, along with 10.4% from iron chloride (iron(II) + iron(III) chloride).
Additional methods used Experimental details on inactivation of the Φ6, MS2, EV71, SARS-CoV-2 and H1N1 viruses via the incubation of different concentrations of BLG AF–Fe hybrids, filtration experiments, viral genome extraction and real-time reverse transcription polymerase chain reaction (RT-qPCR) are given in full in the Supplementary Information along with cytotoxicity tests on MDCK cells. Details on cryo-SEM, SEM, TEM and cryo-TEM are also given in full in the Supplementary Information . Data availability All the data generated and analysed during this study are included in the article and its Supplementary Information . Source data are provided with this paper.
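As a companion to the sustainability scoring described above, the following minimal Python sketch implements the weighted-average footprint formula. The per-discriminant scores are illustrative guesses chosen so that the two footprints land near the reported 96% and 58%; they are not the authors' actual score assignments, which are given in the paper's source data.

```python
# Sustainability footprint = 100% * (1/8) * sum_j (i/3)_j, with scores i in {1, 2, 3}
DISCRIMINANTS = ["OPEX", "CAPEX", "energy consumption", "water recovery",
                 "removal efficiency", "pressure",
                 "public acceptability", "environmental friendliness"]

def footprint(scores):
    """Weighted-average sustainability footprint (percent) for eight 1-3 scores."""
    assert len(scores) == len(DISCRIMINANTS) and all(s in (1, 2, 3) for s in scores)
    return 100.0 * sum(s / 3 for s in scores) / len(scores)

# Illustrative score assignments (assumed for this sketch):
blg_af_fe = [3, 3, 3, 3, 3, 3, 3, 2]       # high on almost every discriminant -> ~96%
nanofiltration = [1, 1, 1, 3, 3, 1, 2, 2]  # penalized on cost, energy, pressure -> ~58%

for name, scores in [("BLG AF-Fe", blg_af_fe), ("NF", nanofiltration)]:
    print(f"{name}: {footprint(scores):.0f}%")
```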
Viruses can spread not only via droplets or aerosols like the new coronavirus, but in water, too. In fact, some potentially dangerous pathogens of gastrointestinal diseases are water-borne viruses. To date, such viruses have been removed from water using nanofiltration or reverse osmosis, but at high cost and with a severe impact on the environment. For example, nanofilters for viruses are made of petroleum-based raw materials, while reverse osmosis requires a relatively large amount of energy. Environmentally friendly membrane developed Now an international team of researchers led by Raffaele Mezzenga, Professor of Food & Soft Materials at ETH Zurich, has developed a new water filter membrane that is both highly effective and environmentally friendly. To manufacture it, the researchers used natural raw materials. The filter membrane works on the same principle that Mezzenga and his colleagues developed for removing heavy or precious metals from water. They create the membrane using denatured whey proteins that assemble into minute filaments called amyloid fibrils. In this instance, the researchers have combined this fibril scaffold with nanoparticles of iron oxyhydroxide (FeOOH). Manufacturing the membrane is relatively simple. To produce the fibrils, whey proteins derived from milk processing are added to acid and heated to 90 degrees Celsius. This causes the proteins to extend and attach to each other, forming fibrils. The nanoparticles can be produced in the same reaction vessel as the fibrils: the researchers raise the pH and add iron salt, causing iron oxyhydroxide nanoparticles to form, which attach to the amyloid fibrils. For this application, Mezzenga and his colleagues used cellulose to support the membrane. This combination of amyloid fibrils and iron oxyhydroxide nanoparticles makes the membrane a highly effective and efficient trap for various viruses present in water. The positively charged iron oxyhydroxide electrostatically attracts the negatively charged viruses and inactivates them. Amyloid fibrils alone wouldn't be able to do this because, like the viral particles, they are also negatively charged at neutral pH. However, the fibrils are the ideal matrix for the iron oxyhydroxide nanoparticles. Various viruses eliminated highly efficiently The membrane eliminates a wide range of water-borne viruses, including nonenveloped adenoviruses, rotaviruses and enteroviruses. This third group can cause dangerous gastrointestinal infections, which kill around half a million people—often young children in developing and emerging countries—every year. Enteroviruses are extremely tough and acid-resistant and remain in the water for a very long time, so the filter membrane should be particularly attractive to poorer countries as a way to help prevent such infections. Moreover, the membrane also eliminates H1N1 flu viruses and even the new SARS-CoV-2 virus from the water with great efficiency. In filtered samples, the concentration of the two viruses was below the detection limit, which is equivalent to almost complete elimination of these pathogens. "We are aware that the new coronavirus is predominantly transmitted via droplets and aerosols, but in fact, even on this scale, the virus requires being surrounded by water. The fact that we can remove it very efficiently from water impressively underlines the broad applicability of our membrane," says Mezzenga.
While the membrane is primarily designed for use in wastewater treatment plants or for drinking water treatment, it could also be used in air filtration systems or even in masks. Since it consists exclusively of ecologically sound materials, it could simply be composted after use—and its production requires minimum energy. These traits give it an excellent environmental footprint, as the researchers also point out in their study. Because the filtration is passive, it requires no additional energy, which makes its operation carbon neutral and of possible use in any social context, from urban to rural communities.
10.1038/s41565-021-00920-5
Other
A growth mindset intervention can change students' grades if school culture is supportive
A national experiment reveals where a growth mindset improves achievement, Nature (2019). DOI: 10.1038/s41586-019-1466-y , nature.com/articles/s41586-019-1466-y Journal information: Nature
http://dx.doi.org/10.1038/s41586-019-1466-y
https://phys.org/news/2019-08-growth-mindset-intervention-students-grades.html
Abstract A global priority for the behavioural sciences is to develop cost-effective, scalable interventions that could improve the academic outcomes of adolescents at a population level, but no such interventions have so far been evaluated in a population-generalizable sample. Here we show that a short (less than one hour), online growth mindset intervention—which teaches that intellectual abilities can be developed—improved grades among lower-achieving students and increased overall enrolment to advanced mathematics courses in a nationally representative sample of students in secondary education in the United States. Notably, the study identified school contexts that sustained the effects of the growth mindset intervention: the intervention changed grades when peer norms aligned with the messages of the intervention. Confidence in the conclusions of this study comes from independent data collection and processing, pre-registration of analyses, and corroboration of results by a blinded Bayesian analysis. Main About 20% of students in the United States will not finish high school on time 1 . These students are at a high risk of poverty, poor health and early mortality in the current global economy 2 , 3 , 4 . Indeed, a Lancet commission concluded that improving secondary education outcomes for adolescents “presents the single best investment for health and wellbeing” 5 . The transition to secondary school represents an important period of flexibility in the educational trajectories of adolescents 6 . In the United States, the grades of students tend to decrease during the transition to the ninth grade (age 14–15 years, UK year 10), and often do not recover 7 . When such students underperform in or opt out of rigorous coursework, they are far less likely to leave secondary school prepared for college or university or for advanced courses in college or university 8 , 9 . In this way, early problems in the transition to secondary school can compound over time into large differences in human capital in adulthood. One way to improve academic success across the transition to secondary school is through social–psychological interventions, which change how adolescents think or feel about themselves and their schoolwork and thereby encourage students to take advantage of learning opportunities in school 10 , 11 . The specific intervention evaluated here—a growth mindset of intelligence intervention—addresses the beliefs of adolescents about the nature of intelligence, leading students to see intellectual abilities not as fixed but as capable of growth in response to dedicated effort, trying new strategies and seeking help when appropriate 12 , 13 , 14 , 15 , 16 . This can be especially important in a society that conveys a fixed mindset (a view that intelligence is fixed), which can imply that feeling challenged and having to put in effort means that one is not naturally talented and is unlikely to succeed 12 . The growth mindset intervention communicates a memorable metaphor: that the brain is like a muscle that grows stronger and smarter when it undergoes rigorous learning experiences 14 . Adolescents hear the metaphor in the context of the neuroscience of learning, they reflect on ways to strengthen their brains through schoolwork, and they internalize the message by teaching it to a future first-year ninth grade student who is struggling at the start of the year. 
The intervention can lead to sustained academic improvement through self-reinforcing cycles of motivation and learning-oriented behaviour. For example, a growth mindset can motivate students to take on more rigorous learning experiences and to persist when encountering difficulties. Their behaviour may then be reinforced by the school context, such as more positive and learning-oriented responses from peers or instructors 10 , 17 . Initial intervention studies with adolescents taught a growth mindset in multi-session (for example, eight classroom sessions 15 ), interactive workshops delivered by highly trained adults; however, these were not readily scalable. Subsequent growth mindset interventions were briefer and self-administered online, although lower effect sizes were, of course, expected. Nonetheless, previous randomized evaluations, including a pre-registered replication, found that online growth mindset interventions improved grades for the targeted group of students in secondary education who previously showed lower achievement 13 , 16 , 18 . These findings are important because previously low-achieving students are the group that shows the steepest decline in grades during the transition to secondary school 19 , and these findings are consistent with theory because a growth mindset should be most beneficial for students confronting challenges 20 . Here we report the results of the National Study of Learning Mindsets, which examined the effects of a short, online growth mindset intervention in a nationally representative sample of high schools in the United States (Fig. 1 ). With this unique dataset we tested the hypotheses that the intervention would improve grades among lower-achieving students and overall uptake of advanced courses in this national sample. Fig. 1: Design of the National Study of Learning Mindsets. Between August and November 2015, 82% of schools delivered the intervention; the remaining 18% delivered the intervention in January or February of 2016. Asterisk indicates that the median number of days between sessions 1 and 2 among schools implementing the intervention in the autumn was 21 days; for spring-implementing schools it was 27 days. The coin-tossing symbol indicates that random assignment was made during session 1. The tick symbol indicates that a comprehensive analysis plan was pre-registered at . The blind-eye symbol indicates that, first, teachers and researchers were kept blinded to students’ random assignment to condition, and, second, the Bayesian, machine-learning robustness tests were conducted by analysts who at the time were blinded to study hypotheses and to the identities of the variables. A focus on heterogeneity The study was also designed with the purpose of understanding for whom and under what conditions the growth mindset intervention improves grades. That is, it examined potential sources of cross-site treatment effect heterogeneity. One reason why understanding heterogeneity of effects is important is because most interventions that are effective in initial efficacy trials go on to show weaker or no effects when they are scaled up in effectiveness trials that deliver treatments under everyday conditions to more heterogeneous samples 21 , 22 , 23 .
Without clear evidence about why average effect sizes differ in later-conducted studies—evidence that could be acquired from a systematic investigation of effect heterogeneity—researchers may prematurely discard interventions that yield low average effects but could provide meaningful and replicable benefits at scale for targeted groups 21 , 23 . Further, analyses of treatment effect heterogeneity can reveal critical evidence about contextual mechanisms that sustain intervention effects. If school contexts differ in the availability of the resources or experiences needed to sustain the offered belief change and enhanced motivation following an intervention, then the effects of the intervention should differ across these school contexts as well 10 , 11 . Sociological theory highlights two broad dimensions of school contexts that might sustain or impede belief change and enhanced motivation among students treated by a growth mindset intervention 6 . First, schools with the least ‘formal’ resources, such as high-quality curricula and instruction, may not offer the learning opportunities for students to be able to capitalize on the intervention, while those with the most resources may not need the intervention. Second, some schools may not have the ‘informal’ resources needed to sustain the intervention effect, such as peer norms that support students when they take on challenges and persist in the face of intellectual difficulty. We hypothesized that both of these dimensions would significantly moderate growth mindset intervention effects. Historically, the scientific methods used to answer questions about the heterogeneity of intervention effects have been underdeveloped and underused 21 , 24 , 25 . Common problems in the literature are: (1) imprecise site-level impact estimates (because of cluster-level random assignment); (2) inconsistent fidelity to intervention protocols across sites (which can obscure the workings of the cross-site moderators of interest); (3) non-representative sampling of sites (which causes site selection bias 22 , 26 ); and (4) multiple post hoc tests for the sources of treatment effect size heterogeneity (which increases the probability of false discoveries 24 ). We overcame all of these problems in a single study. We randomized students to condition within schools and consistently had high fidelity of implementation across sites (see Supplementary Information section 5 ). We addressed site selection bias by contracting a professional research company, which recruited a sample of schools that generalized to the entire population of ninth-grade students attending regular US public schools 27 (that is, schools that run on government funds; see Supplementary Information section 3 ). Next, the study used analysis methods that avoided false conclusions about subgroup effects, by generating a limited number of moderation hypotheses (two), pre-registering a limited number of statistical tests and conducting a blinded Bayesian analysis that can provide rigorous confirmation of the results (Fig. 1 ). Expected effect sizes In this kind of study, it is important to ask what size of effect would be meaningful. As a leading educational economist concluded, “in real-world settings, a fifth of a standard deviation [0.20 s.d.] is a large effect” 28 . 
This statement is justified by the ‘best evidence synthesis’ movement 29 , which recommends the use of empirical benchmarks, not from laboratory studies, but from the highest-quality field research on factors affecting objective educational outcomes 30 , 31 . A standardized mean difference effect size of 0.20 s.d. is considered ‘large’ because it is: (1) roughly how much improvement results from a year of classroom learning for ninth-grade students, as shown by standardized tests 30 ; (2) at the high end of estimates for the effect of having a very high-quality teacher (versus an average teacher) for one year 32 ; and (3) at the uppermost end of empirical distributions of real-world effect sizes from diverse randomized trials that target adolescents 31 . Notably, the highly-cited ‘nudges’ studied by behavioural economists and others, when aimed at influencing real-world outcomes that unfold over time (such as college enrolment or energy conservation 33 ) rather than one-time choices, rarely, if ever, exceed 0.20 s.d. and typically have much smaller effect sizes. Returning to educational benchmarks, 0.20 s.d. and 0.23 s.d. were the two largest effects observed in a recent cohort analysis of the results of all of the pre-registered, randomized trials that evaluated promising interventions for secondary schools funded as part of the US federal government’s i3 initiative 34 (the median effect for these promising interventions was 0.03 s.d.; see Supplementary Information section 11 ). The interventions in the i3 initiative typically targeted lower-achieving students or schools, involved training teachers or changing curricula, consumed considerable classroom time, and cost several thousand US dollars per student. Moreover, they were all conducted in non-representative samples of convenience that can overestimate effects. Therefore, it would be noteworthy if a short, low-cost, scalable growth mindset intervention, conducted in a nationally representative sample, could achieve a meaningful proportion of the largest effects seen for past traditional interventions, within the targeted, pre-registered group of lower-achieving students. Defining the primary outcome and student subgroup The primary outcome was the post-intervention grade point average (GPA) in core ninth-grade classes (mathematics, science, English or language arts, and social studies), obtained from administrative data sources of the schools (as described in the pre-analysis plan found in the Supplementary Information section 13 and at 35 ). Following the pre-registered analysis plan, we report results for the targeted group of n = 6,320 students who were lower-achieving relative to peers in the same school. This group is typically targeted by comprehensive programmes evaluated in randomized trials in education, as there is an urgent need to improve their educational trajectories. The justification for predicting effects in the lower-achieving group is that (1) this group benefitted in previous growth mindset trials; (2) lower-achieving students may be undergoing more academic difficulties and therefore may benefit more from a growth mindset that alters the interpretation of these difficulties; and (3) students who already have a high GPA may have less room to improve their GPAs. 
We defined students as relatively lower-achieving if they were earning GPAs at or below the school-specific median in the term before random assignment or, if they were missing prior GPA data, if they were below the school-specific median on academic variables used to impute prior GPA (as described in the analysis plan). Supplementary analyses for the sample overall can be found in Extended Data Table 1 , and robustness analyses for the definition of lower-achieving students are included in Extended Data Fig. 1 ( Supplementary Information section 7 ). Average effects on mindset Among lower-achieving adolescents, the growth mindset intervention reduced the prevalence of fixed mindset beliefs relative to the control condition, reported at the end of the second treatment session, unstandardized B = −0.38 (95% confidence interval = −0.46, −0.31), standard error of the regression coefficient (s.e.) = 0.04, n = 5,650 students, k = 65 schools, t = −10.14, P < 0.001, standardized mean difference effect size of 0.33. Average effects on core course GPAs In line with our first major prediction, lower-achieving adolescents earned higher GPAs in core classes at the end of the ninth grade when assigned to the growth mindset intervention, B = 0.10 grade points (95% confidence interval = 0.04, 0.16), s.e. = 0.03, n = 6,320, k = 65, t = 3.51, P = 0.001, standardized mean difference effect size of 0.11, relative to comparable students in the control condition. This conclusion is robust to alternative model specifications that deviate from the pre-registered model (Extended Data Fig. 1 ). To map the growth mindset intervention effect onto a policy-relevant indicator of high school success, we analysed poor performance rates, defined as the percentage of adolescents who earned a GPA below 2.0 on a four-point scale (that is, a ‘D’ or an ‘F’; as described in the pre-analysis plan). Poor performance rates are relevant because recent changes in US federal laws (the Every Student Succeeds Act 36 ) have led many states to adopt reductions in the poor performance rates in the ninth grade as a key metric for school accountability. More than three million ninth-grade students attend regular US public schools each year, and half are lower-achieving according to our definition. The model estimates that 5.3% (95% confidence interval = 1.7, 9.0; s.e. = 1.8, t = 2.95, P = 0.005) of the 1.5 million such students in the United States per year would be prevented from being ‘off track’ for graduation by the brief and low-cost growth mindset intervention, representing a reduction from 46% to 41%, which is a relative risk reduction of 11% (that is, 0.05/0.46; this arithmetic is sketched in code at the end of this subsection). Average effects on mathematics and science GPAs A secondary analysis focused on the outcome of GPAs in only mathematics and science (as described in the analysis plan). Mathematics and science are relevant because a popular belief in the United States links mathematics and science learning to ‘raw’ or ‘innate’ abilities 37 —a view that the growth mindset intervention seeks to correct. In addition, success in mathematics and science strongly predicts long-term economic welfare and well-being 38 . Analyses of outcomes for mathematics and science supported the same conclusions ( B = 0.10 for mathematics and science GPAs compared to B = 0.10 for core GPAs; Extended Data Tables 1 – 3 ).
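As a sanity check on the poor-performance figures quoted above, here is a minimal Python rendering of the risk arithmetic. The only inputs are the rates given in the text; the function name and the rounding are ours, and the student count is approximate.

```python
def risk_reduction(control_rate, treated_rate, population):
    """Absolute and relative risk reduction, plus the number of students affected."""
    arr = control_rate - treated_rate  # absolute risk reduction
    rrr = arr / control_rate           # relative risk reduction
    return arr, rrr, arr * population

arr, rrr, n_students = risk_reduction(control_rate=0.46, treated_rate=0.41,
                                      population=1_500_000)
print(f"absolute reduction: {arr:.0%} of lower-achieving students")
print(f"relative risk reduction: {rrr:.0%}")                 # 0.05 / 0.46 ~ 11%
print(f"students kept 'on track' per year: {n_students:,.0f}")  # roughly 75,000
```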
Quantifying heterogeneity The intervention was expected to homogeneously change the mindsets of students across schools, as this would indicate high fidelity of implementation; however, it was expected to heterogeneously change lower-achieving students' GPAs, as this would indicate potential school differences in the contextual mechanisms that sustain an initial treatment effect. As predicted, a mixed-effects model found no significant variability in the treatment effect on self-reported mindsets across schools (unstandardized \(\hat{\tau }=0.08\), Q 64 = 57.2, P = 0.714), whereas significant variability was found in the effect on GPAs among lower-achieving students across schools (unstandardized \(\hat{\tau }=0.09\), Q 64 = 85.5, P = 0.038) 39 (Extended Data Fig. 2). Moderation by school achievement level First, we tested competing hypotheses about whether the formal resources of the school explained the heterogeneity of effects. Before analysing the data, we expected that in schools that are unable to provide high-quality learning opportunities (the lowest-achieving schools), treated students might not sustain a desire to learn. But we also expected that other schools (the highest-achieving schools) might have such ample resources to prevent failure that a growth mindset intervention would not add much. The heterogeneity analyses found support for the latter expectation, but not the former. Treatment effects on ninth-grade GPAs among lower-achieving students were smaller in schools with higher achievement levels, intervention × school achievement level (continuous) interaction, unstandardized B = −0.07 (95% confidence interval = −0.13, −0.02), s.e. = 0.03, z = −2.76, n = 6,320, k = 65, P = 0.006, standardized β = −0.25. In follow-up analyses with categorical indicators for school achievement, medium-achieving schools (middle 50%) showed larger effects than higher-achieving schools (top 25%). Low-achieving schools (bottom 25%) did not significantly differ from medium-achieving schools (Extended Data Table 2); however, this non-significant difference should be interpreted cautiously, owing to wide confidence intervals for the subgroup of lowest-achieving schools. Moderation by peer norms Second, we examined whether students might be discouraged from acting on their enhanced growth mindset when they attend schools in which peer norms were unsupportive of challenge-seeking, whereas peer norms that support challenge-seeking might function to sustain the effects of the intervention over time. We measured peer norms by administering a behavioural challenge-seeking task (the 'make-a-math-worksheet' task) at the end of the second intervention session (Fig. 1) and aggregating the values of the control group to the school level. The pre-registered mixed-effects model yielded a positive and significant intervention × behavioural challenge-seeking norms interaction for GPA among the targeted group of lower-achieving adolescents, such that the intervention produced a greater difference in end-of-year GPAs relative to the control group when the behavioural norm that surrounded students was supportive of the growth mindset belief system, B = 0.11 (95% confidence interval = 0.01, 0.21), s.e. = 0.05, z = 2.18, n = 6,320, k = 65, P = 0.029, β = 0.23. The same conclusion was supported in a secondary analysis of only mathematics and science GPAs (Extended Data Table 2).
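A minimal sketch of this kind of mixed-effects moderation model using statsmodels is shown below; the formula, column names and covariates are illustrative assumptions rather than the authors' exact pre-registered specification, which also incorporated survey weights and school-centred covariates:

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical analytic file: one row per lower-achieving student, with
# post-intervention core GPA, a 0/1 treatment indicator, prior GPA and a
# school-level behavioural challenge-seeking norm score.
df = pd.read_csv("analytic_file.csv")

# Level 1: students; level 2: schools. The treatment slope varies across
# schools (random slope), and a cross-level interaction tests whether
# school norms moderate the treatment effect.
model = smf.mixedlm(
    "core_gpa ~ treatment * challenge_norms + prior_gpa",
    data=df,
    groups=df["school_id"],
    re_formula="~treatment",
)
print(model.fit().summary())
```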
Subgroup effect sizes Putting together the two pre-registered moderators (school achievement level and school norms), the conditional average treatment effects (CATEs) on core GPAs within low- and medium-achieving schools (combined) were 0.14 grade points when the school was in the third quartile of behavioural norms and 0.18 grade points when the school was in the fourth and highest quartile of behavioural norms, as shown in Fig. 2. For mathematics and science grades, the CATEs ranged from 0.16 to 0.25 grade points in the same subgroups of low- and medium-achieving schools with more supportive behavioural norms (for results separating low- and medium-achieving schools, see Fig. 2c, d and Extended Data Table 3). We also found that even the high-achieving schools showed meaningful treatment effects among their lower achievers on mathematics and science GPAs when they had norms that supported challenge seeking: 0.08 and 0.11 grade points for the third and fourth quartiles of school norms, respectively, in the high-achieving schools (P = 0.002; Extended Data Table 3). Fig. 2: The growth mindset intervention effects on grade point averages were larger in schools with peer norms that were supportive of the treatment message. a, c, Treatment effects on core course grade point averages (GPAs). b, d, Treatment effects on GPAs of only mathematics and science. a, b, The CATEs represent the estimated subgroup treatment effects from the pre-registered linear mixed-effects model, with survey weights, when fixing the racial/ethnic composition of the schools to the population median to remove any potential confounding effect of that variable on moderation hypothesis tests. Achievement levels: low, 25th percentile or lower; middle, 25th–75th percentile; high, 75th percentile or higher, which follows the categories set in the sampling plan and in the pre-registration. Norms indicate the behavioural challenge-seeking norms, as measured by the responses of the control group to the make-a-math-worksheet task after session 2. c, d, Box plots represent unconditional treatment effects (one for each school) estimated in the pre-registered linear mixed-effects regression model with no school-level moderators, as specified for research question 3 in the pre-analysis plan and described in the Supplementary Information section 7.4. The distribution of the school-level treatment effects was re-scaled to the cross-site standard deviation, in accordance with standard practice. Dark lines correspond to the median school in a subgroup and the boxes correspond to the middle 50% of the distribution (the interquartile range). Supportive schools are defined as above the population median (third and fourth quartiles); unsupportive schools are defined as those below the population median (first and second quartiles). n = 6,320 students in k = 65 schools. Source data Full size image Bayesian robustness analysis A team of statisticians, at the time blind to study hypotheses, re-analysed the dataset using a conservative Bayesian machine-learning algorithm called Bayesian causal forest (BCF). BCF has been shown, by both its creators and other leading statisticians in open head-to-head competitions, to be the most effective of the state-of-the-art methods for identifying systematic sources of treatment effect heterogeneity while avoiding false positives 40, 41.
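Before turning to those results, note that the headline quantity reported by such an analysis, the posterior probability that the average effect is positive, is simply the share of posterior draws above zero; a sketch with synthetic draws standing in for real BCF output:

```python
import numpy as np

# Synthetic posterior draws of the population-average treatment effect
# (PATE); a real analysis would take these from the fitted BCF sampler.
rng = np.random.default_rng(0)
pate_draws = rng.normal(loc=0.10, scale=0.03, size=4000)

# Posterior probability that the PATE is greater than zero.
print(f"P(PATE > 0) = {(pate_draws > 0).mean():.3f}")
```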
The BCF analysis assigned a near-certain posterior probability that the population-average treatment effect (PATE) among lower-achieving students was greater than zero, P(PATE > 0) ≥ 0.999, providing strong evidence of a positive average treatment effect. BCF also found stronger CATEs in schools with positive challenge-seeking norms, and weaker effects in the highest-achieving schools (Extended Data Fig. 3 and Supplementary Information section 8), providing strong correspondence with the primary analyses. Advanced mathematics course enrolment in tenth grade The intervention showed weaker benefits on ninth-grade GPAs in high-achieving schools. However, students in these schools may benefit in other ways. An analysis of enrolment in rigorous mathematics courses in the year after the intervention examined this possibility. The enrolment data were gathered with these analyses in mind, but since the analyses were not pre-registered, they are exploratory. Course enrolment decisions are potentially relevant to all students, both lower- and higher-achieving, so we explored them in the full cohort. We found that the growth mindset intervention increased the likelihood of students taking advanced mathematics (algebra II or higher) in tenth grade by 3 percentage points (95% confidence interval = 0.01, 0.04), s.e. = 0.01, n = 6,690, k = 41, t = 3.18, P = 0.001, from a rate of 33% in the control condition to a rate of 36% in the intervention condition, corresponding to a 9% relative increase. Notably, we discovered a positive intervention × school achievement level (continuous) interaction, B = 0.04 (95% confidence interval = 0.00, 0.08), s.e. = 0.02, z = 2.26, P = 0.024, the opposite of what we found for core course GPAs. Within the highest-achieving 25% of schools, the intervention increased the rate at which students took advanced mathematics in tenth grade by 4 percentage points (t = 2.37, P = 0.018). In the lower 75% of schools, where we found stronger effects on GPA, the increase in the rate at which students took advanced mathematics courses was smaller: 2 percentage points (t = 2.00, P = 0.045). Thus, an exclusive focus on GPA would have obscured intervention benefits among students attending higher-achieving schools. Discussion The National Study of Learning Mindsets showed that a low-cost treatment, delivered in less than an hour, attained a substantial proportion of the effects on grades of the most effective rigorously evaluated adolescent interventions of any cost or duration in the literature, within the pre-registered group of lower-achieving students. Moreover, the intervention produced gains in the consequential outcome of advanced mathematics course-taking for students overall, which is meaningful because the rigor of mathematics courses taken in high school strongly predicts later educational attainment 8, 9, and educational attainment is one of the leading predictors of longevity and health 38, 42. The finding that the growth mindset intervention could redirect critical academic outcomes to such an extent is a major advance, given that it was achieved with no training of teachers; in an effectiveness trial conducted in a population-generalizable sample; with data collected by an independent research company using repeatable procedures; with data processed by a second independent research company; and while adhering to a comprehensive pre-registered analysis plan.
Furthermore, the evidence about the kinds of schools where the growth mindset treatment effect on grades was sustained, and where it was not, has important implications for future interventions. We might have expected that the intervention would compensate for unsupportive school norms, and that students who already had supportive peer norms would not need the intervention as much. Instead, it was when the peer norm supported the adoption of intellectual challenges that the intervention promoted sustained benefits in the form of higher grades. Perhaps students in unsupportive peer climates risked paying a social price for taking on intellectual challenges in front of peers who thought it undesirable to do so. Sustained change may therefore require both a high-quality seed (an adaptive belief system conveyed by a compelling intervention) and conducive soil in which that seed can grow (a context congruent with the proffered belief system). A limitation of our moderation results, of course, is that we cannot draw causal conclusions about the effects of the school norm, as the norms were measured, not manipulated. It is encouraging that a Bayesian analysis, reported in the Supplementary Information section 8, yielded evidence consistent with a causal interpretation of the school norms variable. The present research therefore sets the stage for a new era of experimental research that seeks to enhance both students' mindsets and the school environments that support student learning. We emphasize that not all forms of growth mindset interventions can be expected to increase grades or advanced course-taking, even in the targeted subgroups 11, 12. New growth mindset interventions that go beyond the module and population tested here will need to be subjected to rigorous development and validation processes, as the current programme was 13. Finally, this study offers lessons for the science of adolescent behaviour change. Beliefs, particularly beliefs that affect how students make sense of ongoing challenges, are important during high-stakes developmental turning points such as pubertal maturation 43, 44 or the transition to secondary school 6. Indeed, new interventions should address the interpretation of other challenges that adolescents experience, including social and interpersonal difficulties, to affect outcomes (such as depression) that have thus far proven difficult to address 43. And the combined importance of belief change and school environments in our study underscores the need for interdisciplinary research to understand the numerous influences on adolescents' developmental trajectories. Methods Ethics approval Approval for this study was obtained from the Institutional Review Board at Stanford University (30387), ICF (FWA00000845), and the University of Texas at Austin (#2016-03-0042). In most schools this experiment was conducted as a programme evaluation carried out at the request of the participating school district 45. When required by school districts, parents were informed of the programme evaluation in advance and given the opportunity to withdraw their children from the study. Informed student assent was obtained from all participants. Participants Data came from the National Study of Learning Mindsets 45, which is a stratified random sample of 65 regular public schools in the United States that included 12,490 ninth-grade adolescents who were individually randomized to condition.
The number of schools invited to participate was determined by a power analysis to detect reasonable estimates of cross-site heterogeneity; as many of the invited schools as possible were recruited into the study. Grades were obtained from the students' schools, and analyses focused on the lower-achieving subgroup of students (those below the within-school median). The sample reflected the diversity of young people in the United States: 11% self-reported being black/African-American, 4% Asian-American, 24% Latino/Latina, 43% white and 18% another race or ethnicity; 29% reported that their mother had a bachelor's degree or higher. To prevent deductive disclosure for potentially small subgroups of students, and consistent with best practices for other public-use datasets, the policies for the National Study of Learning Mindsets require analysts to round all sample sizes to the nearest 10, so this was done here. Data collection To ensure that the study procedures were repeatable by third parties and therefore scalable, and to increase the independence of the results, two different professional research companies, who were not involved in developing the materials or study hypotheses, were contracted. One company (ICF) drew the sample, recruited schools, arranged for treatment delivery, supervised and implemented the data collection protocol, obtained administrative data, and cleaned and merged data. They did this work blind to the treatment conditions of the students. This company worked in concert with a technology vendor (PERTS), which delivered the intervention, executed random assignment, tracked student response rates, scheduled make-up sessions and kept all parties blind to condition assignment. A second professional research company (MDRC) processed the data merged by ICF and produced an analytic grades file, blind to the consequences of their decisions for the estimated treatment effects, as described in Supplementary Information section 12. Those data were shared with the authors of this paper, who analysed the data following a pre-registered analysis plan (see Supplementary Information section 13; MDRC will later produce its own independent report using its processed data, and retained the right to deviate from our pre-analysis plan). Selection of schools was stratified by school achievement and minority composition. A simple random sample would not have yielded sufficient numbers of rare types of schools, such as high-minority schools with medium or high levels of achievement, because school achievement level (one of the two candidate moderators) was strongly associated with school racial/ethnic composition 46 (percentage of Black/African-American or Hispanic/Latino/Latina students, r = −0.66). A total of 139 schools were selected without replacement from a sampling frame of roughly 12,000 regular US public high schools, which serve the vast majority of students in the United States. Regular US public schools exclude charter or private schools, schools serving speciality populations such as students with physical disabilities, alternative schools, schools that have fewer than 25 ninth-grade students enrolled and schools in which ninth grade is not the lowest grade in the school. Of the 139 schools, 65 schools agreed, participated and provided student records. Another 11 schools agreed and participated but did not provide student grades or course-taking records; therefore, the data of their students are not analysed here.
School nonresponse did not appear to compromise representativeness. We calculated the Tipton generalizability index 47, a measure of similarity between an analytic sample and the overall sampling frame, along eight student demographic and school achievement benchmarks obtained from official government sources 27. The index ranges from 0 to 1, with a value of 0.90 or above corresponding to essentially a random sample. The National Study of Learning Mindsets showed a Tipton generalizability index of 0.98, which is very high (see Supplementary Information section 3). Within schools, the average student response rate for eligible students was 92% and the median school had a response rate of 98% (see definitions in Supplementary Information section 5). This response rate was obtained through extensive efforts to recruit students into make-up sessions if they were absent, aided by a software system, developed by the technology vendor (PERTS), that kept track of student participation. A high within-school response rate was important because lower-achieving students, our target group, are typically more likely to be absent. Growth mindset intervention content In preparing the intervention to be scalable, we revised past growth mindset interventions to focus on the perspectives, concerns and reading levels of ninth-grade students in the United States, through an intensive research and development process that involved interviews, focus groups and randomized pilot experiments with thousands of adolescents 13. The control condition, focusing on brain functions, was similar to the growth mindset intervention but did not address beliefs about intelligence. Screenshots from both interventions can be found in Supplementary Information section 4, and a detailed description of the general intervention content has previously been published 13. The intervention consisted of two self-administered online sessions that lasted approximately 25 min each and occurred roughly 20 days apart during regular school hours (Fig. 1). The growth mindset intervention aimed to reduce students' negative effort beliefs (the belief that having to try hard or ask for help means you lack ability), fixed-trait attributions (the attribution that failure stems from low ability) and performance avoidance goals (the goal of never looking stupid), which are the documented mediators of the negative effect of a fixed mindset on grades 12, 15, 48. The intervention not only contradicted these beliefs but also used a series of interesting and guided exercises to reduce their credibility. The first session of the intervention covered the basic idea of a growth mindset: that an individual's intellectual abilities can be developed in response to effort, taking on challenging work, improving one's learning strategies, and asking for appropriate help. The second session invited students to deepen their understanding of this idea and its application in their lives. Notably, students were not told outright that they should work hard or employ particular study or learning strategies. Rather, effort and strategy revision were described as general behaviours through which students could develop their abilities and thereby achieve their goals. The materials presented here sought to make the ideas compelling and to help adolescents put them into practice.
The intervention therefore featured stories from both older students and admired adults about a growth mindset, and interactive sections in which students reflected on their own learning in school and on how a growth mindset could help a struggling ninth-grade student next year. The intervention style is described in greater detail in a paper reporting the pilot study for the present research 13 and in a recent review article 12. Among these features, our intervention mentioned effort as one means of developing intellectual ability. Although we cannot isolate the effect of the growth mindset message from a message about effort alone, it is unlikely that the mere mention of effort to high school students would be sufficient to increase grades and challenge seeking, in part because adolescents often already receive a great deal of pressure from adults to try hard in school. Intervention delivery and fidelity The intervention and control sessions were delivered as early in the school year as possible, to increase the opportunity to set in motion a positive self-reinforcing cycle. In total, 82% of students received the intervention in the autumn semester before the Thanksgiving holiday in the United States (that is, before late November) and the rest received the intervention in January or February; see Supplementary Information section 5 for more detail. The computer software of the technology vendor randomly assigned adolescents to intervention or control materials. Students also answered various survey questions. All parties were blind to condition assignment, and students and teachers were not told the purpose of the study, to prevent expectancy effects. The data collection procedures yielded high implementation fidelity across the participating schools, according to metrics listed in the pre-registered analysis plan. In the median school, treated students viewed 97% of screens and wrote a response for 96% of open-ended questions. In addition, in the median school 91% of students reported that most or all of their peers worked carefully and quietly on the materials. Fidelity statistics are reported in full in Supplementary Information section 5.6; Extended Data Table 2 shows that the treatment effect heterogeneity conclusions were unchanged when controlling for the interaction of treatment and school-level fidelity, as intended. Measures Self-reported fixed mindset Students indicated how much they agreed with three statements such as "You have a certain amount of intelligence, and you really can't do much to change it" (1, strongly disagree; 6, strongly agree). Higher values corresponded to a more fixed mindset; the pre-analysis plan predicted that the intervention would reduce these self-reports. GPAs Schools provided the grades of each student in each course for the eighth and ninth grades. Decisions about which courses counted for which content area were made independently by a research company (MDRC; see Supplementary Information section 12). GPA is a theoretically relevant outcome because grades are commonly understood to reflect sustained motivation, rather than only prior knowledge. It is also a practically relevant outcome because, as noted, GPA is a strong predictor of adult educational attainment, health and well-being, even when controlling for high school test scores 38.
School achievement level The school achievement level moderator was a latent variable derived from publicly available indicators of the performance of the school on state and national tests and related factors 45, 46, standardized to have mean = 0 and s.d. = 1 in the population of the more than 12,000 US public schools. Behavioural challenge-seeking norms of the schools The challenge-seeking norm of each school was assessed through a behavioural measure called the make-a-math-worksheet task 13. Students completed the task towards the end of the second session, after having completed the intervention or control content. They chose from mathematical problems that were described either as challenging and offering the chance to learn a lot or as easy and not leading to much learning. Students were told that they could complete the problems at the end of the session if there was time. The school norm was estimated by taking the average number of challenging mathematical problems that adolescents in the control condition attending a given school chose to work on. Evidence for the validity of the challenge-seeking norm is presented in Supplementary Information section 10. Norms of self-reported mindset of the schools A parallel analysis focused on norms for self-reported mindsets in each school, defined as the average fixed mindset self-reports (described above) of students before random assignment. The private beliefs of peers were thought to be less visible and therefore less likely to induce conformity and moderate treatment effects, relative to peer behaviours 49; hence self-reported beliefs were not expected to be significant moderators. Indeed, self-reported mindset norms did not yield significant moderation (see Extended Data Table 2). Course enrolment in advanced mathematics We analysed data from the 41 schools that provided data allowing us to calculate the rates at which students took an advanced mathematics course (that is, algebra II or higher) in tenth grade, the school year after the intervention. Six additional schools provided tenth-grade course-taking data but did not differentiate among mathematics courses. We expected average effects of the treatment on challenging course-taking in tenth grade to be small, because not all students were eligible for advanced mathematics and not all schools allow students to change course pathways. However, some students might have made their way into more advanced mathematics classes or remained in an advanced pathway rather than dropping to an easier pathway. These challenge-seeking decisions are potentially relevant to both lower- and higher-achieving students, so we explored them in the full sample of students in the 41 included schools. Analysis methods Overview We used intention-to-treat analyses; this means that data were analysed for all students who were randomized to an experimental condition and whose outcome data could be linked. A complier average causal effects analysis yielded the same conclusions but had slightly larger effect sizes (see Supplementary Information section 9); here we report only the more conservative intention-to-treat effect sizes. Standardized effect sizes reported here were standardized mean difference effect sizes, calculated by dividing the treatment effect coefficient by the raw standard deviation of the control group on the outcome, which is the typical effect size estimate in education evaluation experiments.
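That convention is straightforward to express in code. In the sketch below the inputs are illustrative stand-ins; a control-group GPA standard deviation of roughly 0.9 would turn the reported B = 0.10 into the reported 0.11 s.d. effect size:

```python
import numpy as np

def standardized_effect_size(b: float, control_outcomes: np.ndarray) -> float:
    """Treatment coefficient divided by the raw SD of the control group."""
    return b / control_outcomes.std(ddof=1)

# Illustrative values only.
rng = np.random.default_rng(1)
control_gpas = rng.normal(loc=2.5, scale=0.9, size=3000)
print(round(standardized_effect_size(0.10, control_gpas), 2))  # approx. 0.11
```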
Frequentist P values reported throughout are always from two-tailed hypothesis tests. Model for average treatment effects Analyses to estimate average treatment effects for an individual person used a cluster-robust fixed-effects linear regression model with school as a fixed effect, incorporating weights provided by statisticians from ICF, with the cluster defined as the primary sampling unit. Coefficients were therefore generalizable to the population of inference: students attending regular public schools in the United States. For the t distribution, the degrees of freedom are 46, equal to the number of clusters (the primary sampling units, of which there were 51) minus the number of sampling strata (5) 45. Model for the heterogeneity of effects To examine cross-school heterogeneity in the treatment effect among lower-achieving students, we estimated multilevel mixed-effects models (level 1, students; level 2, schools) with fixed intercepts for schools and a random slope that varied across schools, following current recommended practices 39. The model included school-centred student-level covariates (prior performance and demographics; see Supplementary Information section 7) to make site-level estimates as precise as possible. This analysis controlled for school-level average student racial/ethnic composition and its interaction with the treatment status variable, to account for confounding of student body racial/ethnic composition with school achievement levels. Student body racial/ethnic composition interactions were never significant at P < 0.05 and so we do not discuss them further (but they were always included in the models, as pre-registered). Bayesian robustness analysis A final pre-registered robustness analysis was conducted to reduce the influence of two possible sources of bias: awareness of study hypotheses when conducting analyses, and misspecification of the regression model (see Supplementary Information section 13, p. 12). Statisticians who were not involved in the study design and who were unaware of the moderation hypotheses re-analysed a blinded dataset that masked the identities of the variables. They did so using an algorithm that has emerged as a leading approach for understanding moderators of treatments: BCF 40. The BCF algorithm uses machine-learning tools to discover (or rule out) higher-order interactions and nonlinear relations among covariates and moderators. It is conservative because it uses regularization and strong prior distributions to prevent false discoveries. Evidence for the robustness of the moderation analysis in our pre-registered model comes from correspondence with the estimated moderator effects of BCF in the part of the distribution where there are the most schools (that is, in the middle of the distribution), because this is where the BCF algorithm is designed to have confidence in its estimates (Extended Data Fig. 3). Reporting summary Further information on research design is available in the Nature Research Reporting Summary linked to this paper. Data availability Technical documentation for the National Study of Learning Mindsets is available from ICPSR at the University of Michigan ( ). Aggregate data are available at .
Student-level data are protected by data sharing agreements with the participating districts; de-identified data can be accessed by researchers who agree to terms of data use, including required training and approvals from the University of Texas Institutional Review Board and analysis on a secure server. To request access to data, researchers should contact mindset@prc.utexas.edu. The pre-registered analysis plan can be found at . The intervention module will not be commercialized and will be available at no cost to all secondary schools in the United States or Canada that wish to use it via . Selections from the intervention materials are included in the Supplementary Information. Researchers wishing to access full intervention materials should contact mindset@prc.utexas.edu and must agree to terms of use, including non-commercialization of the intervention. Code availability Syntax can be found at or by contacting mindset@prc.utexas.edu.
Boosting academic success does not have to come from new teachers or new curricula; it can also come from changing students' attitudes about their abilities through a short online intervention, according to the latest findings from the National Study of Learning Mindsets published in Nature on Aug. 7. The experimental study involved more than 12,000 ninth graders in a nationally representative sample of 76 public high schools across the United States. It showed that an intervention emphasizing a growth mindset, the belief that intellectual abilities are not fixed but can be developed, can improve key predictors of high school graduation and college success, especially when a school's culture supports the treatment message. "The research cemented a striking finding from multiple earlier studies: A short intervention can change the unlikely outcome of adolescents' grades many months later," said David Yeager, the study's lead author and an associate professor of psychology at The University of Texas at Austin. "It also showed us something new: Higher-achieving students don't get higher grades after the program, but they are more likely to take harder classes that set them up for long-term success." According to U.S. federal government statistics, nearly 20% of students in the U.S. do not finish high school on time. These students are at an increased risk of poverty, poor health and early mortality, and the move into ninth grade is an important turning point on the path toward high school completion. Building on prior research, the researchers found that two 25-minute online sessions, administered at the beginning of high school, can help students develop a growth mindset by reshaping their attitudes about their abilities. Both lower- and higher-achieving students benefited academically from the program, even into their sophomore year. On average, lower-achieving students who took the program earned 0.10 higher grade points in core academic subjects such as math, English, science and social studies. Additionally, the intervention reduced the proportion of these students with a D or F average in these courses by more than 5 percentage points. The intervention also increased the likelihood students took Algebra II or higher in 10th grade by 3 percentage points among both higher- and lower-achieving students. "These effects are substantial when compared to the most successful large-scale, lengthy and rigorously evaluated interventions with adolescents in the educational research literature," Yeager said. "They are particularly notable given the low cost and high fidelity of the online program. But the growth mindset program isn't a magic bullet. Its effectiveness depends a lot on the school context." In medium- to low-performing schools with norms that encouraged students to take on more challenging coursework, lower-achieving students who received the intervention improved 0.15 grade points in core courses and 0.17 grade points in STEM courses. "Motivation and learning don't just happen in a student's head; they depend on the resources and learning opportunities present in the school's environment, including the extent to which challenging coursework is available to students," Yeager said. "A mindset intervention is like planting a seed; it grows to fruition in fertile soil. Now that we have shown this in a national study, it will propel us into a new era of mindset research. 
That era will focus on both the mindset of the student and the culture and climate of the classroom. We have our eyes set on preparing teachers to support students' beliefs that they can grow and learn."
10.1038/s41586-019-1466-y
Medicine
Eye tests predict Parkinson's-linked cognitive decline 18 months ahead
Angeliki Zarkali et al, Organisational and neuromodulatory underpinnings of structural-functional connectivity decoupling in patients with Parkinson's disease, Communications Biology (2021). DOI: 10.1038/s42003-020-01622-9 Journal information: Communications Biology
http://dx.doi.org/10.1038/s42003-020-01622-9
https://medicalxpress.com/news/2021-01-eye-parkinson-linked-cognitive-decline-months.html
Abstract Parkinson’s dementia is characterised by changes in perception and thought, and is preceded by visual dysfunction, making this a useful surrogate for dementia risk. Structural and functional connectivity changes are seen in humans with Parkinson’s disease, but the organisational principles are not known. We used resting-state fMRI and diffusion-weighted imaging to examine changes in structural–functional connectivity coupling in patients with Parkinson’s disease, and those at risk of dementia. We identified two organisational gradients of structural–functional connectivity decoupling: anterior-to-posterior and unimodal-to-transmodal, with stronger structural–functional connectivity coupling in anterior, unimodal areas that weakened towards posterior, transmodal regions. Next, we related spatial patterns of decoupling to expression of neurotransmitter receptors. We found that dopaminergic and serotonergic transmission relates to decoupling in Parkinson’s overall, whereas serotonergic, cholinergic and noradrenergic transmission relates to decoupling in patients with visual dysfunction. Our findings provide a framework to explain the specific disorders of consciousness in Parkinson’s dementia, and the neurotransmitter systems that underlie these. Introduction Dementia associated with Parkinson’s disease (PD) is characterised by changes in cognition and perception, including visual hallucinations, delusions and fluctuations in attention 1, 2. It is often preceded and accompanied by visual dysfunction 3, 4, 5 and linked to hypometabolism in posterior brain regions 6. High-order visual dysfunction, in particular, is associated with worse cognition at 1-year follow-up 7. Although PD is characterised by Lewy body inclusions, the neural correlates of cognitive impairment in PD, and specifically the structural and functional changes involved, remain unclear 8. Perception and action, whether in health or disease, depend on connections between brain regions. In general, it is assumed that there is a relationship between the strength of a structural connection between two brain areas and the strength of the corresponding functional connection 9. However, it has recently emerged that this structural–functional relationship is not uniform across the healthy human brain but is organised according to clear hierarchical and cyto-architectural principles 9. Specifically, there is close structural–functional coupling (SC–FC coupling) in primary sensory (unimodal) cortices, with divergence at the apices of processing hierarchies (transmodal association cortices), in networks such as the default mode network (DMN) 10, 11, 12. One theory for this is that relative decoupling in higher-order areas allows abstract reasoning, protected from the more granular signalling in earlier stages of sensory processing 13. Changes in SC–FC coupling occur during brain maturation 10 but also in psychiatric 14, 15 and neurological disease 16, 17, 18, 19, and may be particularly relevant to cognition: individual differences in coupling reflect differences in cognition 20, 21 and higher SC–FC coupling in prefrontal cortex is associated with improved executive function 10. Therefore, loss of SC–FC coupling might be expected in PD, especially in subtypes linked with a higher risk of dementia. Neuroimaging studies have provided important insights separately into structural and functional connectivity alterations in PD 22, 23, 24.
Diffusion-weighted imaging has revealed structural alterations in tracts including the corpus callosum and thalamo-cortical connections in PD with cognitive impairment 25, 26, 27, 28, 29 and in those with visual dysfunction (a group at higher dementia risk) 30. Resting-state functional MRI (rsfMRI) studies have identified changes in functional connectivity between frontal and visuospatial regions 31, 32 and between frontal regions and the posterior cingulate 7, 32 in PD with cognitive impairment. These studies provide useful insights into the network-level dysfunction contributing to cognitive impairment in PD; however, the question of how structural changes impact on brain function remains unresolved. We hypothesised that SC–FC coupling across the brain would be systematically modified in PD and that this pattern of decoupling would occur along one of two hypothesised directions: (1) across the unimodal–transmodal hierarchical gradient of SC–FC decoupling that is seen in health, with more transmodal regions becoming even more decoupled in PD 10, 11, 12, 33; or (2) along the anterior-to-posterior (A–P) axis, with decoupling more prominent in posterior regions. The latter hypothesis was based on the posterior distribution of metabolic and connectivity changes seen in PD 25, 30, 34, 35, 36. We used rsfMRI and diffusion-weighted imaging to investigate changes in whole-brain structural connectivity–functional connectivity coupling (SC–FC coupling) in 88 patients with PD (of whom 33 had visual dysfunction and higher dementia risk) and 30 age-matched controls. We found widespread decoupling in PD compared to controls, but a more focal pattern, affecting the insula, in PD with visual dysfunction compared with those with normal visual function. Next, we evaluated the specific pattern of decoupling in PD and found that this occurred across both a unimodal–transmodal and an anterior–posterior axis. Finally, we examined, in an exploratory analysis, whether changes in SC–FC coupling are related to underlying differences in the expression of specific neurotransmitter receptors. Although PD is classically associated with altered dopaminergic transmission, recent evidence implicates other neurotransmitter systems: cholinergic transmission 37, 38, 39 is affected in PD dementia, and both reduced occipital GABA levels 40 and altered noradrenergic transmission 41 have been implicated in cognitive impairment in PD. We show that dopamine transmission, although central to the motor aspects of PD, may have a less important role in PD dementia, as neurotransmitter systems other than dopamine were correlated with the SC–FC decoupling found in PD with visual dysfunction. Results To characterise how structural–functional connectivity (SC–FC) coupling changes in PD, we quantified the degree to which a brain region’s structural connectivity relates to coordinated fluctuations in neural activity between regions. For each participant, two weighted, undirected connectivity matrices were derived using the same parcellation 42 comprising 400 cortical brain regions: a structural connectivity matrix derived from diffusion-weighted imaging and a functional connectivity matrix derived from resting-state functional MRI (rsfMRI) data. SC–FC coupling was measured as the Spearman rank correlation between the structural and functional connectivity profiles of each region. An overview of the study methodology is shown in Fig. 1. Fig. 1: Overview of the study methodology.
A Analyses were conducted using a whole-brain parcellation including 400 cortical regions 42. B Structural connectivity (SC) and functional connectivity (FC) matrices were derived for each participant from diffusion-weighted imaging (DWI) and resting-state functional MRI (rsfMRI) data, respectively. SC: darker colours indicate higher normalised streamline counts; FC: lighter colours indicate higher Fisher-z normalised Spearman correlation values between every possible pair of brain regions. C For each participant, regional connectivity profiles were extracted from each row of the structural or functional connectivity matrix (example shown by the green dashed line) and represented as vectors of connectivity strength from a single network node to all other nodes in the network. Structural–functional connectivity coupling (SC–FC coupling) was then measured as the Spearman rank correlation between non-zero elements of regional structural and functional connectivity profiles. SC–FC coupling was then compared between groups. D Gradients of connectivity covariance were constructed for each individual’s structural and functional connectivity matrices using diffusion map embedding, a non-linear compression algorithm that sorts nodes based on affinity (normalised angle was used as the measure of affinity). We focused our analyses on the first 2 principal structural and functional gradients; the scores of each node for the first 2 gradients are shown in the kernel density plot (blue: structural, red: functional gradients). Gradient scores and SC–FC coupling may be projected back to the cortical surface. We then correlated functional and structural gradient scores with SC–FC coupling for each region. Full size image A total of 118 participants were included: 88 patients with PD and 30 controls. Patients with PD were further classified according to their performance in two higher-order computer-based visual tasks, which have previously been shown to correlate with worsening cognition over time 7. This resulted in 33 PD low visual performers and 55 PD high visual performers. MRI quality and pre-processing were visually and quantitatively evaluated; excluding cases with low-quality structural MRI or high head motion on rsfMRI removed 14 subjects from our original cohort, leading to the final sample of 88 PD and 30 controls. Importantly, the three groups did not significantly differ in scan quality, gender or years in education (Table 1). As in previous work 43, 44, performance in the visual tasks correlated with cognition but not with low-level vision measures such as visual acuity. Details of neuropsychological performance are given in Supplementary Table 1. PD low and high visual performers were well-matched in disease duration, severity and levodopa equivalent dose (Table 1). Table 1 Demographics and clinical assessments. Full size table Widespread structural–functional connectivity decoupling occurs in PD First, we examined how the relationship between structural and functional connectivity changes in PD. All participants showed statistically significant correlations between structural and functional connectivity (correlation coefficient range = 0.28–0.74, all p spin < 0.001). Similarly to other studies 10, 45, controls showed variation in SC–FC coupling across the cortex, with higher coupling in primary sensory and medial prefrontal cortex and lower coupling in lateral temporal and frontoparietal regions (Fig. 2A).
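A minimal sketch of the per-node coupling computation described in Fig. 1, assuming per-participant 400 × 400 structural (sc) and functional (fc) matrices; variable names are illustrative:

```python
import numpy as np
from scipy.stats import spearmanr

def sc_fc_coupling(sc: np.ndarray, fc: np.ndarray) -> np.ndarray:
    """Per-region SC-FC coupling: Spearman correlation between the non-zero
    elements of each region's structural connectivity profile and the
    corresponding functional connectivity values."""
    n_regions = sc.shape[0]
    coupling = np.full(n_regions, np.nan)
    for i in range(n_regions):
        mask = sc[i] > 0       # keep non-zero structural connections
        mask[i] = False        # drop the self-connection
        if mask.sum() > 2:
            coupling[i], _ = spearmanr(sc[i, mask], fc[i, mask])
    return coupling

# Toy usage: random symmetric matrices standing in for real connectomes.
rng = np.random.default_rng(0)
sc = np.abs(rng.normal(size=(400, 400))); sc = (sc + sc.T) / 2
fc = rng.normal(size=(400, 400)); fc = (fc + fc.T) / 2
print(sc_fc_coupling(sc, fc)[:5])
```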
This cortical pattern was preserved in PD; however, SC–FC coupling was globally reduced in PD participants compared to controls (mean 0.484 in PD vs 0.544 in controls, p = 0.002) (Fig. 2B, C). Fig. 2: Structural–functional connectivity coupling in controls and changes in patients with Parkinson’s disease (PD). A Spatial pattern of structural–functional connectivity (SC–FC) coupling in controls. The coupling between regional structural and functional connectivity profiles varied widely across the cortex. Primary sensory and medial prefrontal cortex exhibited relatively high structure–function coupling, while lateral temporal and parietal regions showed relatively low coupling. B Spatial pattern of SC–FC decoupling in PD. Regional changes in SC–FC coupling (correlation coefficient plotted, with age and gender correction) are presented for PD vs controls (top) and PD low visual performers vs PD high visual performers (bottom). C SC–FC coupling changes averaged across all nodes of the network. Average SC–FC coupling (Spearman’s rank correlation) across the whole-brain network (400 nodes) is compared between controls, PD high and PD low visual performers. S–F: structural connectivity–functional connectivity. * denotes statistically significant results (p-spin < 0.05). Both PD low and PD high visual performers showed significantly reduced global coupling compared with controls (PD low visual performers mean 0.469 vs 0.544 in controls, p = 0.002; PD high visual performers mean 0.492 vs 0.544 in controls, p = 0.005). There was no significant difference between PD low and PD high visual performers (p = 0.415). D SC–FC coupling changes for each node across the brain. Whole-brain comparisons of SC–FC coupling were performed for every node across the whole brain between PD vs controls (top) and PD low visual performers vs PD high visual performers (bottom), with age and gender included as covariates. Only nodes surviving FDR correction (q < 0.05) are presented. Full size image When we examined SC–FC coupling in all nodes across the whole brain, 8 nodes showed significantly reduced coupling in PD compared to controls (adjusting for age and gender, FDR-corrected over 400 nodes, q < 0.05). The nodes showing SC–FC decoupling in PD had a posterior distribution: bilateral superior and middle occipital gyri and right cuneus, precuneus and calcarine gyrus (Fig. 2D and Table 2). Table 2 Nodes showing structural–functional connectivity decoupling in PD compared with controls and in PD low vs high visual performers. Full size table When we compared overall coupling, averaged across the whole of the brain network, PD low visual performers did not show significant decoupling compared to PD high visual performers (mean 0.469 in PD low visual performers vs 0.492 in PD high visual performers, p = 0.415) (Fig. 2C). In contrast, changes in PD low visual performers were more focal (Fig. 2B), with the bilateral insula and the right calcarine gyrus showing significant decoupling compared to high visual performers (Fig. 2D and Table 2). Higher SC–FC coupling within the right calcarine gyrus was related to higher MOCA scores in PD participants (r = 0.307, q = 0.011) (Supplementary Fig. 3). There was no significant correlation between MOCA scores and SC–FC coupling in the left or right insula (left: r = 0.099, q = 0.361; right: r = 0.062, q = 0.567). To ensure that results were not influenced by parcellation choice, we replicated our SC–FC analysis in another parcellation, with similar results (Supplementary Figs. 4 and 5).
Group differences in structural and functional connectivity considered separately (PD vs controls, and PD low vs PD high visual performers) are shown in Supplementary Fig. 6. Defining structural and functional gradients of macroscale cortical organisation in health Next, we assessed whether the spatial variability in structure–function decoupling aligns with fundamental properties of cortical organisation. Using diffusion map embedding for non-linear dimensionality reduction 46, we derived structural and functional gradients of cortical organisation from each control participant’s structural and functional connectivity matrix, respectively. Similar to previous studies 33, 45, 47, 48, we focused our analyses on the first two principal gradients. The first principal gradient explained 14.3% of the variance for structural and 27.5% for functional gradients, and the second principal gradient 11.9% for structural and 17.5% for functional gradients. We assessed the dimension of variance in connectivity that the first two gradients represented in healthy controls. The first principal gradients (structural and functional) were anchored at one end in frontal regions and at the other in occipital regions (Fig. 3A: structural and Fig. 3B: functional gradients). To confirm this A–P alignment, we performed correlations (df = 400) between the weighting of each brain region in the first gradient (using the mean value across the control group only) and the corresponding A–P axis coordinate for that region. This showed a significant negative correlation for the first structural [ρ = −0.626 (interindividual range: −0.651, −0.572), p spin < 0.001] and functional gradient [ρ = −0.592 (interindividual range: −0.684, −0.267), p spin < 0.001] (Fig. 3A, B). Fig. 3: Structural and functional gradients of cortical organisation in controls. The first two principal gradients derived from the averaged control structural and functional connectivity matrices are presented. Gradient scores are projected back onto the cortical surface. The first principal structural (A) and functional (B) gradients showed a dissociation between posterior and anterior regions. The second principal structural (C) and functional (D) gradients showed a dissociation between unimodal and transmodal regions. The top and bottom 10% of the average control gradients highlight regions with similar (same colour) and distinct (red vs blue) connectivity profiles. For the first structural and functional gradients, the top 10% of regions are more posterior and the bottom 10% more anterior. For the second structural and functional gradients, the top 10% of regions are more transmodal and the bottom 10% more unimodal. On the right, we plot the correlation between the gradient score (control-averaged) and the A–P axis coordinate for the first principal gradients, and the network hierarchy level for the second principal gradients (each dot represents a single region of the average control connectome). A–P: anterior–posterior (lower values representing more posterior regions, higher values more anterior regions); network hierarchy level: level 1, sensory and sensorimotor networks; level 2, dorsal attention and salience networks; level 3, frontoparietal and limbic networks; level 4, default mode network (DMN). Full size image In contrast, the second principal gradients in control participants were anchored in unimodal regions (primary sensory cortex) at one end and transmodal regions at the other (Fig. 3C: structural and Fig. 3D: functional gradients).
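A compact sketch of the diffusion map embedding used to derive these gradients, assuming a non-negative connectivity matrix and a normalised-angle affinity; the published analysis presumably used a dedicated gradient toolbox, so this is a simplified illustration:

```python
import numpy as np

def diffusion_gradients(conn: np.ndarray, n_gradients: int = 2, alpha: float = 0.5):
    """Simplified diffusion map embedding of a connectivity matrix.

    Builds a normalised-angle affinity between regional connectivity
    profiles, applies anisotropic diffusion normalisation (alpha = 0.5),
    and returns the leading non-trivial eigenvectors as gradients."""
    norms = np.linalg.norm(conn, axis=1, keepdims=True)
    cos = np.clip((conn @ conn.T) / (norms * norms.T), -1.0, 1.0)
    affinity = 1.0 - np.arccos(cos) / np.pi

    d = affinity.sum(axis=1)
    w = affinity / np.outer(d ** alpha, d ** alpha)
    p = w / w.sum(axis=1, keepdims=True)     # Markov transition matrix

    vals, vecs = np.linalg.eig(p)
    order = np.argsort(-vals.real)
    idx = order[1:n_gradients + 1]           # skip the trivial first eigenvector
    return vecs.real[:, idx] * vals.real[idx]

# Toy usage on a random symmetric 'connectome'.
rng = np.random.default_rng(0)
m = np.abs(rng.normal(size=(400, 400))); m = (m + m.T) / 2
gradients = diffusion_gradients(m)           # shape: (400, 2)
```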
To confirm this unimodal–transmodal alignment, we assigned each brain region to a level of hierarchy according to its corresponding functional network, moving from unimodal (level 1) to transmodal areas (level 4) 49. We then performed correlations (df = 400) between the weighting of each brain region in the second principal gradient and its hierarchy level. Both the structural [ρ = 0.478 (interindividual range: 0.372, 0.518), p spin = 0.003] and functional second principal gradients [ρ = 0.663 (interindividual range: 0.239, 0.749), p spin = 0.001] significantly correlated with this unimodal–transmodal axis (Fig. 3C, D). Structure–function decoupling occurs across gradients of macroscale organisation in health and is accelerated in PD Next, we examined the relationship between macroscale gradients and SC–FC coupling using a spatial permutation test. This generates a null distribution of randomly rotated brain maps that preserve the spatial covariance structure of the original data (the resulting p-values are denoted p spin) 50. In controls, variation in SC–FC coupling significantly correlated with the first principal gradients, with stronger coupling in posterior regions and weaker coupling in anterior ones (structural: ρ = −0.169, p spin = 0.011; functional: ρ = −0.2, p spin = 0.042; Fig. 4A, B). Coupling also significantly correlated with the second principal gradients: unimodal sensory regions exhibited relatively strong SC–FC coupling whereas transmodal regions exhibited weaker coupling (structural: ρ = −0.144, p spin = 0.007; functional: ρ = −0.203, p spin = 0.009; Fig. 4C, D). Fig. 4: Structural–functional connectivity decoupling in PD follows macroscale cortical gradients. Structural–functional connectivity (SC–FC) coupling is significantly associated with the first principal structural (A) and functional gradients (B), which align with the anterior–posterior axis (visualised on the top: lower gradient values represent more anterior regions, higher gradient values more posterior regions). The correlation between mean SC–FC coupling and gradient value is plotted for each brain region in controls (grey), PD high visual performers (pink) and PD low visual performers (red), with ρ denoting the Spearman correlation coefficient. This correlation was seen in all groups but was more pronounced in PD than in control participants, and even more so in PD low visual performers (who are at higher risk of dementia). SC–FC coupling also reflected a brain region’s position along the second principal structural (C) and functional gradients (D), which reflect a unimodal-to-transmodal axis (visualised on the top: lower gradient values represent more unimodal regions, higher gradient values more transmodal regions). The correlation between mean SC–FC coupling and gradient value is plotted for each brain region in controls (grey), PD high visual performers (pink) and PD low visual performers (red), with ρ denoting the Spearman correlation coefficient. Again, this relationship was more pronounced in PD low visual performers than in PD high visual performers, followed by control participants. The significance of regional correlations was evaluated using nonparametric spatial permutation testing. Full size image This gradual decoupling in SC–FC across the A–P and unimodal–transmodal axes seen in controls was amplified in PD, and even more so in low visual performers.
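A simplified sketch of the spin-test logic, assuming parcel centroid coordinates projected onto a sphere; published implementations additionally rotate the two hemispheres with mirrored rotations and handle the medial wall, which this illustration omits:

```python
import numpy as np
from scipy.spatial.transform import Rotation
from scipy.stats import spearmanr

def spin_test(map_a, map_b, sphere_xyz, n_perm=1000, seed=0):
    """Spatial permutation test of the correlation between two brain maps.

    Random rotations of the parcel centroids generate null maps that
    preserve spatial autocorrelation; p_spin is two-sided."""
    observed, _ = spearmanr(map_a, map_b)
    null = np.empty(n_perm)
    for i in range(n_perm):
        rotated = Rotation.random(random_state=seed + i).apply(sphere_xyz)
        # Reassign each parcel to its nearest rotated neighbour.
        dists = np.linalg.norm(sphere_xyz[:, None] - rotated[None], axis=2)
        null[i], _ = spearmanr(map_a, map_b[dists.argmin(axis=1)])
    p_spin = (np.abs(null) >= np.abs(observed)).mean()
    return observed, p_spin
```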
Greater SC–FC decoupling was seen along the A–P axis for both structural (PD high visual performers ρ = −0.276, p spin < 0.001; PD low visual performers ρ = −0.307, p spin < 0.001; Fig. 4A) and functional gradients (PD high visual performers ρ = −0.271, p spin < 0.001; PD low visual performers ρ = −0.349, p spin < 0.001; Fig. 4B). Similarly, greater decoupling was seen along the unimodal–transmodal axis (structural gradients: PD high visual performers ρ = −0.161, p spin = 0.040; PD low visual performers ρ = −0.207, p spin = 0.001; Fig. 4C; and functional gradients: PD high visual performers ρ = −0.241, p spin = 0.005; PD low visual performers ρ = −0.268, p spin = 0.001; Fig. 4D). Relationship between structural–functional connectivity decoupling in PD and neurotransmitter receptor gene expression Finally, to assess the role that neuromodulatory systems may have in SC–FC decoupling in PD, we investigated the relationship between maps of gene expression for neurotransmitter receptor genes (derived from post-mortem human brains) and SC–FC coupling changes in: (1) PD vs controls and (2) PD low vs high visual performers. We found that decoupling in PD showed a statistically significant moderate correlation with regional differences in gene expression of dopaminergic, serotoninergic and cholinergic receptors (Fig. 5A and Table 3). Specifically, decoupling in PD was associated with reduced expression of DRD2 and three serotonin receptors (HTR2A, HTR2C, HTR4) and increased expression of a cholinergic (CHRNA4) and a serotoninergic receptor (HTR1E) (Table 3). Fig. 5: Correlation between regional cortical expression of neurotransmitter receptor genes and structural–functional connectivity decoupling in PD. Spearman correlations between regional cortical expression of adrenergic, cholinergic (muscarinic and nicotinic), dopaminergic and serotoninergic receptors and the difference in structural–functional connectivity coupling seen between PD and controls (left) and PD low visual performers vs PD high visual performers (right). Full gene names are in Supplementary Table 2. Results are colour-coded according to receptor class: red, adrenergic; green, cholinergic; purple, dopaminergic; blue, serotoninergic receptors. Bars with stronger (rather than fainter) colours indicate statistically significant relationships (FDR-corrected p < 0.05). Full size image Table 3 Neurotransmitter receptor genes correlating with the change in structural–functional connectivity coupling in PD. Full size table In contrast, changes in SC–FC coupling in PD low visual performers (compared to high visual performers) were not significantly correlated with dopaminergic receptor expression but rather with cholinergic (CHRNA2, CHRNA3, CHRNA4), serotoninergic (HTR1A, HTR5A) and noradrenergic (ADRA2A) receptors (Fig. 5B, Table 3; see Supplementary Table 3 for the full neurotransmitter gene expression results). Discussion We provide evidence of significant differences in SC–FC coupling in patients with PD and shed light on the organisational and neuromodulatory principles that drive this decoupling. In patients with PD, we found a spatially widespread decoupling of SC–FC correlations. In contrast, PD low visual performers, who are at higher risk of dementia, exhibited more focal decoupling compared to PD high visual performers, with the insula preferentially affected. SC–FC decoupling in PD follows specific gradients of hierarchical organisation: anterior–posterior and unimodal–transmodal.
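The receptor-map comparison above reduces to region-wise rank correlations with false-discovery-rate control; a minimal sketch with hypothetical inputs (a real analysis should also use spatial-autocorrelation-preserving nulls such as the spin test above):

```python
import numpy as np
from scipy.stats import spearmanr
from statsmodels.stats.multitest import multipletests

# Hypothetical inputs: regional change in SC-FC coupling (e.g., PD minus
# controls, 400 regions) and receptor gene expression maps (e.g., derived
# from post-mortem brain atlases). Random data used for illustration.
rng = np.random.default_rng(2)
decoupling = rng.normal(size=400)
expression = {g: rng.normal(size=400) for g in ["DRD2", "HTR2A", "CHRNA4", "ADRA2A"]}

rhos, pvals = zip(*(spearmanr(decoupling, e) for e in expression.values()))
reject, qvals, _, _ = multipletests(pvals, alpha=0.05, method="fdr_bh")
for gene, rho, q, sig in zip(expression, rhos, qvals, reject):
    print(f"{gene}: rho = {rho:+.2f}, q = {q:.3f}, significant = {sig}")
```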
These same gradients governed spatial variation in SC–FC coupling in healthy controls but became more pronounced in PD, and even more so in PD low visual performers. We found that structural–functional connectivity decoupling in PD follows a unimodal-to-transmodal gradient. Several studies in health have shown stronger SC–FC coupling in unimodal sensory cortex and relative decoupling in transmodal association cortex, coinciding with improvements in executive ability and abstract reasoning 10 , 11 , 33 . Our second principal gradients similarly reflected a unimodal-to-transmodal hierarchy and were correlated with SC–FC coupling in controls. This provides further support for the tethering hypothesis, that association cortex is untethered from molecular gradients of early sensory cortex 51 , now using, for the first time, gradients derived from diffusion-weighted imaging. We show that in PD, structural and functional connectivity became more decoupled in regions higher along the unimodal–transmodal hierarchy. This supports the central role of the DMN in PD-associated cognitive impairment, which has been highlighted by rsfMRI studies 52 , 53 , 54 , pathological evidence 55 and, more recently, network lesion mapping 56 . Transmodal regions such as the DMN, where SC and FC are normally less closely aligned, may be more vulnerable to the presence of neurodegeneration. Decoupling in these higher-order regions could explain the higher prevalence of neurocognitive deficits seen in PD, such as hallucinations and delusions, with a release of these regions from the normal constraints of sensory processing. Although in health a weaker SC–FC coupling may be beneficial, allowing for more adaptive and flexible cognition, in the presence of neurodegeneration it may make transmodal regions more vulnerable. The numbers of patients with hallucinations and delusions in our cohort were too low to formally test whether these symptoms correlate with greater decoupling, but this would be an avenue of interest for future work. In addition, we saw a striking increase in SC–FC decoupling along the A–P axis (first principal gradients) in PD. This correlation was observed in controls but became more pronounced in PD, and even more so in low visual performers. An anterior–posterior spatial gradient has been observed at the gene expression level in the adult human brain 57 , 58 , 59 and prenatally 60 , 61 . Specific gene expression patterns across this gradient could confer vulnerability in the presence of degeneration. The A–P gradient, however, reflects not only transcriptional differences but also changes in cortical microstructure, with increases in neuronal number and density and decreases in neuron and arbour size across the A–P axis 58 . The increased neuronal population in more posterior regions may make them more vulnerable to transneuronal alpha-synuclein spread. Finally, we shed light on the neuromodulatory systems associated with SC–FC coupling in PD overall and in those individuals at higher risk of cognitive decline. Unsurprisingly, reduced dopaminergic transmission was associated with the SC–FC decoupling observed in PD compared to controls. In contrast, we found no correlation between dopaminergic receptor expression and decoupling in PD low visual performers, suggesting that neuromodulators other than dopamine may have a more important role in the development of cognitive impairment.
Altered serotoninergic transmission was also associated with SC–FC decoupling in PD participants, in keeping with evidence from positron emission tomography 62 , biochemical 63 and post-mortem studies 64 showing serotoninergic degeneration in PD. In contrast, in PD low visual performers SC–FC decoupling was more prominent in regions with increased serotoninergic receptor expression, specifically HTR1E and HTR5A . Although the function of these receptors is not yet fully described, HTR5A is thought to have a role in cognition 65 , with 5-HT5A antagonists improving cognition in animal models 66 . In addition, regional differences in nicotinic cholinergic receptors were associated with SC–FC decoupling changes in both PD overall and PD low visual performers. Cholinergic cell involvement is well recognised in PD and linked to the development of dementia, with a progressive reduction in nicotinic receptors in parallel with dementia severity 67 . This reduction in PD could be more prominent in regions typically rich in nicotinic receptors in health. Finally, we found that SC–FC decoupling in PD low visual performers was more pronounced in regions with reduced expression of the noradrenergic receptor ADRA2A in health ( q = 0.041). Interestingly, ADRA2A gene polymorphisms were recently identified in a genome-wide association study of PD patients (associated with increased insomnia at baseline) 68 . Norepinephrine and its receptors have also been linked to PD 69 , 70 , 71 , although not previously in relation to cognitive impairment. Several methodological considerations need to be taken into account when interpreting the results of our study. Structural connectivity was estimated using streamlines from diffusion tractography, which is susceptible to false positives and false negatives 72 . To provide the best possible estimate of structural connectivity, we used multi-shell data and improved post-processing, including constrained spherical deconvolution 73 and SIFT2 74 . Functional connectivity estimates were derived from rsfMRI data, which are also susceptible to artefacts, particularly motion. To mitigate this, we adopted rigorous quality assurance and strict exclusion criteria 75 . Time of day and medication usage influence rsfMRI 76 ; all participants were scanned in the ON state, receiving their usual dopaminergic medications, and at the same time of day. Although we optimised both our structural and functional connectivity estimates, these remain indirect measures of brain structure and function, which needs to be taken into account when interpreting our results. We used parcellated data to allow for group comparisons; however, functional boundaries vary across individuals 77 , which could lead to misalignments when comparing structural–functional connectivity relationships. We used gene expression data from healthy human brains; therefore, results relating to neurotransmitter receptor gene expression should be interpreted with caution. In addition, although significantly correlated, regional variation in gene expression explained only a moderate fraction of the variance in SC–FC coupling (absolute value of correlation coefficients between 0.133 and 0.308), suggesting that factors other than neurotransmitter receptor gene expression also have a role in the changes in SC–FC coupling in PD. However, our study could provide insights informing subsequent validation studies in PD brains or animal models.
Finally, our study examines cross-sectional data, using visual dysfunction as a surrogate marker for dementia risk. Although this provides useful insights, longitudinal studies in PD patients who progress to dementia are likely to provide further insights into the temporal order of structural–functional connectivity decoupling in PD. Our findings show that structural–functional connectivity coupling is severely disrupted in PD across the cortex, with even more pronounced decoupling in temporal lobe structures in low visual performers (who are at higher risk of dementia). We show that structural–functional connectivity decoupling in PD follows the same macroscopic organisational principles that guide SC–FC coupling in healthy individuals, but with accelerated decoupling. Finally, we clarify the neuromodulatory correlates of SC–FC decoupling in PD. Altogether, our findings propose a framework to explain SC–FC decoupling in PD and offer insights into possible therapeutic targets. Methods Participants We included 88 patients with PD and 30 unaffected controls, recruited to our London centre. All patients with PD fulfilled the Queen Square Brain Bank Criteria 78 . All participants with diffusion-weighted imaging and rsfMRI scans passing predefined quality control criteria (see “Methods: Data acquisition and quality assurance” section) were included. The study was approved by the local ethics committee and participants provided written informed consent. Participants with PD were classified according to their performance in two computer-based higher-order visual tasks. The Cats and Dogs task measures tolerance to visual skew: images of cats and dogs are distorted by varying degrees of skew along the X axis, and the threshold of tolerated skew is determined using psychophysical testing (two-alternative forced choice, 90 repetitions), as described previously 4 , 7 , 43 (see an example stimulus in Supplementary Fig. 1 ). The biological motion task measures sensitivity to the perception of a moving person from moving dots placed at the positions of the major joints. Increasing the number of moving dots makes the task more difficult, and the number of additional dots tolerated is determined psychophysically, as previously described 44 (see Supplementary Fig. 1 for an example stimulus). These visual tasks were chosen as they provide robust measures of higher-order visual function and have been shown by our group to be associated with a higher risk of PD dementia and worsening cognition over time 4 , 7 , 44 . To capture patients with consistently poor performance in these high-level visual tasks, we classified patients as low visual performers if they performed worse than the group median in both tasks ( n = 33 low visual performers; this rule is sketched in code below). All other patients with PD were classified as high visual performers ( n = 58), as in previous work 4 , 79 , 80 . Details of performance in the two experimental tasks are shown in Supplementary Fig. 2 . Thirty unaffected age-matched controls were recruited from spouses and a volunteer database; controls were matched to the PD group as a whole. The Mini-Mental State Examination (MMSE) 81 and Montreal Cognitive Assessment (MoCA) 82 were used as measures of general cognition.
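A minimal sketch of the median-split classification rule referred to above, assuming the task scores are held in a pandas DataFrame; the column names, and the convention that higher scores mean better performance, are assumptions for illustration rather than details from the study.

import numpy as np
import pandas as pd

def classify_visual_performance(df):
    # Hypothetical columns: 'cats_dogs_score' (tolerated skew) and
    # 'biomotion_score' (tolerated additional dots); higher is assumed
    # to mean better performance on both tasks.
    worse_cats_dogs = df['cats_dogs_score'] < df['cats_dogs_score'].median()
    worse_biomotion = df['biomotion_score'] < df['biomotion_score'].median()
    # Low visual performers: worse than the group median on BOTH tasks
    df['group'] = np.where(worse_cats_dogs & worse_biomotion,
                           'PD low visual performer',
                           'PD high visual performer')
    return df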
Additionally, two tests per cognitive domain were performed 83 : Digit span 84 and Stroop colour 85 for attention; Stroop interference 85 and Category fluency 86 for executive function; Word recognition task 87 and Logical memory 84 for memory; Graded naming task 88 and Letter fluency for language; and the Visual object and space perception battery 89 and Hooper visual organisation test 90 for visuospatial function. Visual acuity was assessed using LogMAR 91 , colour vision using the Farnsworth D15 92 , and contrast sensitivity using Pelli–Robson 93 . The Hospital Anxiety and Depression Scale (HADS) was used to assess mood 94 . PD participants underwent assessments of motor function using the MDS-UPDRS 95 , sleep using the REM Sleep Behaviour Disorder Questionnaire 96 and smell using Sniffin’ Sticks 97 . Levodopa dose equivalence scores (LEDD) were calculated for PD participants 98 . Data acquisition and quality assurance All MRI data were acquired on a 3T Siemens Magnetom Prisma scanner (Siemens) with a 64-channel head coil. Diffusion-weighted imaging (DWI) was acquired with the following parameters: b0 in both AP and PA directions, b = 50 s/mm 2 /17 directions, b = 300 s/mm 2 /8 directions, b = 1000 s/mm 2 /64 directions, b = 2000 s/mm 2 /64 directions, 2 × 2 × 2 mm isotropic voxels, TR = 3260 ms, TE = 58 ms, 72 slices, 2 mm thickness, acceleration factor = 2. DWI acquisition time was ~10 min. Resting-state functional MRI (rsfMRI) was acquired with the following parameters: gradient-echo EPI, TR = 4000 ms, TE = 30 ms, flip angle = 90°, FOV = 192 × 192, voxel size = 3 × 3 × 2.5 mm, 105 volumes, 7-min session. During rsfMRI, participants were instructed to lie quietly with their eyes closed and avoid falling asleep; this was confirmed by monitoring and post-scan debriefing. A 3D MPRAGE (magnetisation-prepared rapid acquisition gradient-echo) image (voxel size = 1 × 1 × 1 mm, TE = 3.34 ms, TR = 2530 ms, flip angle = 7°) was also obtained. Imaging for all participants was performed at the same time of day, with PD participants receiving their normal medications. Both modalities underwent rigorous quality assurance. Prior to diffusion processing, all volumes of the raw datasets were visually inspected and each volume evaluated for the presence of artefacts; only scans with <15 volumes containing artefacts 99 were included. As a result, 3 PD participants and 1 control were excluded from the original cohort. Quality of rsfMRI data was assessed using the MRI Quality Control tool 100 . As rsfMRI can be particularly susceptible to motion effects, we adopted stringent exclusion criteria 75 . Specifically, participants were excluded if any of the following was met: (1) mean framewise displacement (FD) > 0.3 mm, (2) any FD > 5 mm, or (3) outlier volumes >30% of the whole sample. This led to 12 participants being excluded (11 PD, of whom 5 were low visual performers, and 1 control), resulting in the 88 patients included in the dataset presented here. Parcellation An overview of the study methodology is shown in Fig. 1 . We generated 400 cortical regions of interest (ROIs) by segmenting each participant’s T1-weighted image using the Schaefer parcellation 42 . We replicated SC–FC coupling analyses using the Glasser parcellation 101 . Parcellations of over 200 nodes increase reliability in gradient construction, particularly for gradients derived from functional connectivity 102 . We used the same parcellation to construct functional and structural connectivity matrices for each participant (Fig. 1A ).
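The motion-exclusion rules above translate into a simple per-participant check; the sketch below assumes framewise displacement (FD) per volume has already been computed (e.g., from fMRIPrep confound outputs), and the per-volume outlier definition is an assumption, since the text specifies only the 30% limit.

import numpy as np

def exclude_for_motion(fd, outlier_fd=0.5):
    # fd: array of framewise displacement values (mm), one per volume.
    # Exclusion criteria from the text:
    # (1) mean FD > 0.3 mm, (2) any FD > 5 mm,
    # (3) outlier volumes exceeding 30% of the run
    # (the 0.5 mm per-volume outlier threshold is an assumed definition)
    fd = np.asarray(fd)
    return (fd.mean() > 0.3
            or fd.max() > 5.0
            or (fd > outlier_fd).mean() > 0.3)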
Structural connectome construction Pre-processing of diffusion-weighted images was performed in MRtrix 3.0 103 . Diffusion-weighted images underwent denoising 104 , removal of Gibbs artefacts 105 , eddy-current and motion correction 106 , and bias field correction 107 . Diffusion tensor metrics were calculated and constrained spherical deconvolution performed 108 . The raw T1-weighted images were registered to the diffusion-weighted image using NiftyReg 109 and five-tissue anatomical segmentation performed using the 5ttgen script in MRtrix. Subsequently, we performed anatomically constrained tractography with 10 million streamlines 110 using the iFOD2 tractography algorithm 111 and dynamic seeding, with streamlines truncated at the grey–white matter interface. We applied the spherical-deconvolution informed filtering of tractograms (SIFT2) algorithm 74 to reduce biases. The resulting set of streamlines was used to construct the structural brain network. Connections were weighted by streamline count and a cross-sectional area multiplier 74 and combined into a 400 × 400 undirected, weighted matrix (Fig. 1B ). As recommended by the authors of SIFT2, we did not apply a threshold to structural connectivity matrices 74 . Functional connectome construction rsfMRI data underwent standard pre-processing using fMRIPrep 1.5.0 112 . The first 4 volumes were discarded to allow for steady-state equilibrium. Functional data were slice-time corrected using 3dTshift from AFNI 113 and motion-corrected using mcflirt 114 . Distortion correction was performed using a TOPUP implementation 115 . This was followed by co-registration to the corresponding T1-weighted image using boundary-based registration with six degrees of freedom 116 . Motion-correcting transformations, the field-distortion-correcting warp, the BOLD-to-T1w transformation and the T1w-to-template (MNI) warp were concatenated and applied in a single step using antsApplyTransforms (ANTs v2.1.0) with Lanczos interpolation. Physiological noise regressors were extracted by applying CompCor 117 . Sources of spurious variance were removed through linear regression (six motion parameters, mean signal from white matter and cerebrospinal fluid), followed by calculation of bivariate correlations and application of the Fisher transform. Given the contentiousness of global signal regression 118 and its potential to distort group differences 119 , we did not regress out the global signal. Functional connectivity between ROIs was quantified as the Pearson correlation coefficient between mean regional BOLD time series. To minimise the effect of spurious connections whilst avoiding arbitrary thresholds, we used structural connectivity to inform functional connectome construction. Specifically, we discarded functional connections between ROIs that were based solely on time series correlation in the absence of an anatomical connection. For each participant, a 400 × 400 weighted adjacency matrix was constructed representing the functional connectome (Fig. 1B ). Structural–functional connectivity coupling analysis We extracted regional connectivity profiles from each participant’s structural and functional connectivity matrices, as vectors of connectivity strength from a single node to all other nodes in the network. SC–FC coupling for each node was then measured as the Spearman rank correlation between the non-zero elements of the regional structural and functional connectivity profiles 10 , 120 , 121 (Fig. 1C ).
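Because the nodewise coupling measure is simply a rank correlation over each region's connectivity profile, it can be sketched in a few lines; the matrix names are illustrative.

import numpy as np
from scipy.stats import spearmanr

def scfc_coupling(sc, fc):
    # sc, fc: symmetric 400 x 400 structural and functional matrices
    # (the FC matrix is assumed to already be masked by SC, as above)
    n = sc.shape[0]
    coupling = np.full(n, np.nan)
    for i in range(n):
        mask = sc[i] != 0          # non-zero elements of the profile
        mask[i] = False            # exclude the self-connection
        if mask.sum() > 2:
            coupling[i] = spearmanr(sc[i, mask], fc[i, mask])[0]
    return coupling                # one SC-FC coupling value per node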
Gradient analysis We derived cortical gradients separately from structural and functional connectivity matrices, using diffusion map embedding. This identifies spatial axes of variation in connectivity across different areas, whereby cortical vertices that are strongly interconnected are closer together and vertices with little or no inter-connectivity are farther apart 45 , 46 . We used the normalised angle as a metric of similarity (values between 0 and 1, with 1 denoting identical angles and 0 opposing angles). The normalised angle between two nodes i and j, A(i, j), is calculated as shown in equation (1) below: $$A(i,j) = 1 - \frac{\cos^{-1}\left(\mathrm{cossim}(x_i, x_j)\right)}{\pi}$$ (1) where cossim is the cosine similarity function (this computation is sketched in code below). First, we generated a group-level gradient component template from the average structural and functional connectivity matrices of all participants. We performed Procrustes alignment to align the gradient components of each individual to the group template 122 . Gradient components defined in connectivity space were mapped back onto the cortical surface (Fig. 1D ). For each derived gradient, we calculated the variance explained by dividing the gradient’s eigenvalue by the sum of the eigenvalues of all gradients 102 . Gradient analyses were performed using BrainSpace 102 . To assess the correspondence of the first structural and functional gradients with the A–P axis, we calculated the correlation between the A–P axis coordinate of each brain region 42 and its corresponding gradient coefficient. To ensure that the second structural and functional gradients represented a unimodal–transmodal gradient, we assigned functional communities to levels of hierarchy (level 1: sensory and sensorimotor networks; level 2: dorsal attention and salience networks; level 3: frontoparietal and limbic networks; level 4: default mode network (DMN)) 45 , 47 , 49 . We then calculated the Spearman correlation coefficient between a node’s level of hierarchy and its gradient coefficient. Neurotransmitter receptor gene expression Expression profiles for genes of noradrenergic, cholinergic (nicotinic and muscarinic), dopaminergic and serotoninergic receptors were obtained using data from the Allen Human Brain Atlas (AHBA) 57 . We used the recently described rigorous pre-processing method of Arnatkevičiūtė et al. 123 to extract gene expression data from the AHBA and map them to the 400 cortical regions of our parcellation, using abagen 124 . Each tissue sample was assigned to an anatomical structure among the 400 cortical regions, using the AHBA MRI data for each donor. Data were pooled between homologous cortical regions to ensure adequate coverage of both the left (data from six donors) and right hemisphere (data from two donors). Distances between samples were evaluated on the cortical surface with a 2 mm distance threshold. Probe-to-gene annotations were updated with Re-Annotator 125 . Only probes whose expression measures were above a background threshold in more than 50% of samples were selected. A representative probe for each gene was selected based on the highest intensity. Gene expression data were normalised across the cortex using scaled, outlier-robust sigmoid normalisation. Of the 20,737 genes initially included in the Allen atlas gene expression data, 15,745 survived these pre-processing and quality assurance steps.
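Equation (1) maps directly onto code; the sketch below computes the normalised-angle affinity between the rows of a connectivity matrix. In BrainSpace this step and the embedding are, to our understanding, wrapped together (GradientMaps(kernel='normalized_angle', approach='dm', alignment='procrustes') covers the pipeline described above), so the function here is purely illustrative.

import numpy as np

def normalized_angle_affinity(conn):
    # A(i, j) = 1 - arccos(cossim(x_i, x_j)) / pi, between rows of conn;
    # rows are assumed to be non-zero so the norms are well defined
    norms = np.linalg.norm(conn, axis=1, keepdims=True)
    cos_sim = (conn @ conn.T) / (norms * norms.T)
    cos_sim = np.clip(cos_sim, -1.0, 1.0)  # guard against rounding error
    return 1.0 - np.arccos(cos_sim) / np.pi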
Expression profiles for 31 pre-selected genes (Supplementary Table 2 ) encoding receptors for norepinephrine, acetylcholine, dopamine and serotonin were then extracted for each of the 400 cortical regions of our parcellation. Statistics and reproducibility Demographic, clinical and imaging characteristics were compared between PD high visual performers, PD low visual performers and controls using ANOVA for normally distributed and Kruskal–Wallis tests for non-normally distributed variables (Shapiro–Wilk test for normality), with post-hoc testing using t-tests and Mann–Whitney U tests, respectively. Statistical significance was defined as p < 0.05. For group comparisons of SC–FC coupling and gradient component scores we used a general linear model, with age and gender as covariates and comparisons of interest: (1) PD vs controls and (2) PD low visual performers vs PD high visual performers. We controlled for multiple comparisons using the False Discovery Rate (Benjamini–Hochberg method, q < 0.05) across 400 nodes. The significance of the correspondence between SC–FC coupling and gradient coefficients was estimated using a spatial permutation test, which generates randomly rotated brain maps whilst preserving spatial covariance 50 . We performed 1000 random spatial permutations 126 and calculated the Spearman correlation coefficient between extracted regional SC–FC values and gradient coefficients to build a null distribution. The permutation-based p-value ( p spin ) was calculated as the proportion of times that the null correlation coefficients were greater than the empirical coefficients 50 , 126 . Spearman correlations were performed between regional differences in SC–FC coupling for (1) PD vs controls and (2) PD low vs high visual performers, expressed as the vector of the difference in SC–FC coupling between groups at each of the 400 cortical nodes (visualised in Fig. 2B ), and the regional expression level of each of the 31 chosen neurotransmitter receptor genes at each of the 400 cortical nodes. Results were FDR-corrected for multiple comparisons, q < 0.05, across the 31 genes. Spatial permutation testing, as described above (1000 spatial permutations of the SC–FC regional differences for both PD vs controls and PD low vs PD high visual performers), was performed to ensure that the correlation between gene expression levels and SC–FC coupling was higher than expected by chance and had not arisen spuriously due to spatial autocorrelation 127 . Analyses were performed in Python 3 (Jupyter Lab v1.2.6). Reporting summary Further information on research design is available in the Nature Research Reporting Summary linked to this article. Data availability Imaging and clinical data used in this study will be shared upon reasonable request to the corresponding author. All data and statistics generated from this study are presented in the manuscript and Supplementary Data 1 – 5 . Code availability All methods used open source software, and all links to the relevant software are included in Supplementary Methods (URLs). Code used in the analyses described in this paper is available here: .
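As a sketch of the gene-expression statistics described in "Statistics and reproducibility" above: Spearman correlation of the regional SC–FC coupling difference against each receptor gene's regional expression, with Benjamini–Hochberg correction across the 31 genes. Note that this sketch uses the parametric Spearman p-values, whereas the study additionally guarded against spatial autocorrelation with spin permutations.

import pandas as pd
from scipy.stats import spearmanr
from statsmodels.stats.multitest import multipletests

def gene_coupling_correlations(delta_coupling, gene_expr):
    # delta_coupling: (400,) regional difference in SC-FC coupling
    # (e.g., PD minus controls)
    # gene_expr: DataFrame of shape 400 regions x 31 genes
    rows = []
    for gene in gene_expr.columns:
        rho, p = spearmanr(delta_coupling, gene_expr[gene])
        rows.append((gene, rho, p))
    res = pd.DataFrame(rows, columns=['gene', 'rho', 'p'])
    # FDR across the 31 genes (q < 0.05), as in the text
    reject, q = multipletests(res['p'], alpha=0.05, method='fdr_bh')[:2]
    res['q'], res['significant'] = q, reject
    return res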
Simple vision tests can predict which people with Parkinson's disease will develop cognitive impairment and possible dementia 18 months later, according to a new study by UCL researchers. The study, published in Movement Disorders, adds to evidence that vision changes precede the cognitive decline that occurs in many, but not all, people with Parkinson's. In another new study published today in Communications Biology, the same research team found that structural and functional connections of brain regions become decoupled throughout the entire brain in people with Parkinson's disease, particularly among people with vision problems. The two studies together show how losses and changes to the brain's wiring underlie the cognitive impairment experienced by many people with Parkinson's disease. Lead author Dr. Angeliki Zarkali (Dementia Research Centre, UCL Queen Square Institute of Neurology) said: "We have found that people with Parkinson's disease who have visual problems are more likely to get dementia, and that appears to be explained by underlying changes to their brain wiring. "Vision tests might provide us with a window of opportunity to predict Parkinson's dementia before it begins, which may help us find ways to stop the cognitive decline before it's too late." For the Movement Disorders paper, published earlier this month, the researchers studied 77 people with Parkinson's disease and found that simple vision tests predicted who would go on to get dementia after a year and a half. Dementia is a common, debilitating aspect of Parkinson's disease, estimated to affect roughly 50% of people within 10 years of a Parkinson's diagnosis. These longitudinal findings add weight to previous studies that were done at one time point, which had suggested that performance in vision tests, involving commonly used eye charts and skewed images of cats and dogs, was linked to the risk of cognitive decline. The new study also found that those who went on to develop Parkinson's dementia had losses in the wiring of the brain, including in areas relating to vision and memory. The researchers used recently developed methods to analyse finely detailed MRI scans, enabling them to pick up the damage to the brain's white matter. The researchers identified white matter damage to some of the long-distance wiring connecting the front and back of the brain, which helps the brain to function as a cohesive whole network. The Communications Biology study involved 88 people with Parkinson's disease (33 of whom had visual dysfunction and were thus judged to have a high risk of dementia) and 30 healthy adults as a control group, whose brains were imaged using MRI scans. In the healthy brain, there is a correlation between how strong the structural (physical) connections between two regions are, and how much those two regions are connected functionally. That coupling is not uniform across the brain, as there is some degree of decoupling in the healthy brain, particularly in areas involved in higher-order processing, which might provide the flexibility to enable abstract reasoning. Too much decoupling appears to be linked to poor outcomes. The researchers found that people with Parkinson's disease exhibited a higher degree of decoupling across the whole brain. Areas at the back of the brain, and less specialized areas, had the most decoupling in Parkinson's patients. 
Parkinson's patients with visual dysfunction had more decoupling in some, but not all, brain regions, particularly in memory-related regions in the temporal lobe. The research team also found changes to the levels of some neurotransmitters (chemical messengers) in people at risk of cognitive decline, suggesting that receptors for those transmitters may be potential targets for new drug treatments for Parkinson's dementia. Notably, while dopamine is known to be implicated in Parkinson's, the researchers found that other neurotransmitters—acetylcholine, serotonin and noradrenaline—were particularly affected in people at risk of cognitive decline. Dr. Zarkali said: "The two papers together help us to understand what's going on in the brains of people with Parkinson's who experience cognitive decline, as it appears to be driven by a breakdown in the wiring that connects different brain regions." Dr. Rimona Weil (UCL Queen Square Institute of Neurology), senior author of both papers, said: "Our findings could be valuable for clinical trials, by showing that vision tests can help us identify who we should be targeting for trials of new drugs that might be able to slow Parkinson's. And ultimately if effective treatments are found, then these simple tests may help us identify who will benefit from which treatments."
10.1038/s42003-020-01622-9
Nano
Researchers watch layers of buckyballs grow in real time
"Unravelling the multilayer growth of the fullerene C60 in real-time"; Sebastian Bommel, Nicola Kleppmann et al.; Nature Communications, 2014; DOI: 10.1038/ncomms6388 Journal information: Nature Communications
http://dx.doi.org/10.1038/ncomms6388
https://phys.org/news/2014-11-layers-buckyballs-real.html
Abstract Molecular semiconductors are increasingly used in devices, but understanding of elementary nanoscopic processes in molecular film growth is in its infancy. Here we use real-time in situ specular and diffuse X-ray scattering in combination with kinetic Monte Carlo simulations to study C 60 nucleation and multilayer growth. We determine a self-consistent set of energy parameters describing both intra- and interlayer diffusion processes in C 60 growth. This approach yields an effective Ehrlich–Schwoebel barrier of E ES =110 meV, diffusion barrier of E D =540 meV and binding energy of E B =130 meV. Analysing the particle-resolved dynamics, we find that the lateral diffusion is similar to colloids, but characterized by an atom-like Schwoebel barrier. Our results contribute to a fundamental understanding of molecular growth processes in a system that forms an important intermediate case between atoms and colloids. Introduction Understanding the growth of molecular materials such as the prototypical molecular semiconductor fullerene C 60 (refs 1 , 2 ) on surfaces is an indispensable prerequisite for the rational design of complex nanomaterials from molecular building blocks, as well as for optimizing the performance in thin-film-based applications such as solar cells 3 , 4 , 5 and organic light-emitting diodes 6 , 7 . So far, molecular self-assembly and growth 8 have often been described by scaling laws for surface roughening and evolving island densities 9 , 10 . On a molecular level, a range of studies have elucidated the kinetics of diffusion and nucleation (see, for example, refs 11 , 12 , 13 , 14 , 15 , 16 ) and the Ehrlich–Schwoebel barrier for interlayer transport across a molecular step edge 11 , 17 , 18 (see Fig. 1). In the last decades, the energy barriers for atomic growth have been refined to take into account the local neighbourhood during multilayer growth, for example, by including concerted gliding of islands or by distinguishing between different step-edge orientations 19 , 20 , 21 , 22 . Yet to date, there is no organic compound for which even the ‘minimal’ set of the three parameters diffusion barrier, lateral binding energy and Ehrlich–Schwoebel barrier has been simultaneously quantified to describe multilayer molecular growth. Therefore, predictive simulations of the rate- and temperature-dependent morphology in molecular multilayer growth have so far been impossible, contrary to the situation for elemental atomic systems 23 , 24 , 25 and colloids 26 , 27 , 28 . Importantly, C 60 exhibits properties in between those of atoms and colloids, which makes it a test case of fundamental relevance. On one hand, its van der Waals diameter of 1 nm 29 is closer to atomic dimensions than to the μm length scale of colloidal systems. On the other hand, C 60 resembles colloids with its short-range nature of the effective centre-of-mass interactions 30 , which decay as −1/r^9 with r being the centre-of-mass separation, stemming from the averaged van der Waals interactions (approximately −1/r^6) between the individual carbon interaction sites 31 . These forces between atomic, molecular or colloidal building blocks are of prime importance for kinetic growth processes, similar to their role in equilibrium phase behaviour and self-assembly 32 , 33 . For example, C 60 lacks a stable equilibrium liquid phase 30 , contrary to most elemental atomic systems.
C 60 is therefore not only relevant for device applications, but also an important, fundamentally unique material bridging atoms and colloids. From the experimental side, a particular challenge in studying C 60 growth is that post-growth changes can make it misleading to interrupt this non-equilibrium process in order to image different growth stages. It is therefore essential to use in situ real-time techniques. In this article, we employ the combination of specular X-ray growth oscillations 34 with real-time diffuse X-ray scattering 35 , 36 to simultaneously follow the vertical and lateral morphology during growth. Further understanding on a nanoscale level is provided by kinetic Monte Carlo (KMC) simulations of coarse-grained C 60 molecules without internal degrees of freedom. Then, the three relevant parameters determined by a fit of the data are the Ehrlich–Schwoebel barrier, the surface diffusion barrier and the lateral binding energy (see Fig. 1 ). With these parameters alone, we achieve quantitative agreement with the experimental data, enabling us to predict the rate-, temperature- and thickness-dependence of the film morphology. Moreover, our analysis demonstrates that the short interaction range of C 60 as compared with atoms affects the relative heights of diffusion barrier and binding energy and results in comparatively long diffusion times. However, unlike colloidal systems, C 60 has a true energetic Ehrlich–Schwoebel barrier, rather than the pseudobarrier that colloids display 26 . Figure 1: Surface processes in C 60 growth. The diffusion barrier E D , binding energy E B and Ehrlich–Schwoebel barrier E ES determine island nucleation and interlayer transport in multilayer growth. Included are numerical values determined by fitting the experiment using KMC simulations. Full size image Results Experimental results for the layer-by-layer growth of C 60 on mica For a comprehensive understanding of the processes during growth, the surface morphology has to be measured on the molecular length scale with an experimental time resolution that is fast compared with the minute timescale of the deposition of a monolayer. Interrupting growth to take a series of real-space microscopy images can be problematic, as the kinetics can be altered. For our system of C 60 on top of a closed first C 60 layer on mica, this route is indeed impossible because of quick dewetting effects characterized by a time constant of ~10 min. Also, in situ low-energy electron microscopy—while used very successfully in a range of studies 37 , 38 —unfortunately cannot be applied due to charging effects on mica. Therefore, we use X-ray scattering, which can be performed non-invasively during growth and yields time-resolved information about the layer formation. This is extracted through specular reflectivity measurements at the so-called anti-Bragg position of C 60 (see Fig. 2a ) corresponding to half the Bragg value of the C 60 (111) reflection. Lateral information is available through simultaneous measurement of the diffuse scattering (grazing incidence small-angle X-ray scattering (GISAXS)), giving information about the island distance ( Fig. 2a ). Figure 2: Specular and diffuse X-ray scattering during C 60 growth. ( a ) Scattering geometry: both the specularly reflected X-ray beam and the diffuse scattering are detected. The two-dimensional scattering pattern contains both lateral (momentum transfer q ‖ ) and vertical ( q ⊥ ) information on the surface morphology.
( b ) The specular X-ray reflectivity at the anti-Bragg point q ⊥ =0.38 Å −1 oscillates with increasing molecular exposure (time × growth rate) during growth of C 60 on mica, indicating layer-by-layer growth ( T =60 °C). ( c ) The diffusely scattered intensity oscillates with the nucleation and coalescence of every layer and exhibits a characteristic peak-splitting Δ q ‖ . The latter corresponds to the inverse average island distance, which changes with film thickness. Full size image The time-dependent specular X-ray reflectivity as a function of molecular exposure, which is time × deposition rate, is shown in Fig. 2b for growth at T =60 °C substrate temperature and a deposition rate of f =0.1 ML min −1 . The anti-Bragg intensity oscillates with a period of two monolayers (ML) as the X-rays are reflected from consecutive C 60 layers and alternately interfere destructively and constructively with an intensity modulation of up to 90%. Here, the diffusely scattered intensity can be neglected in an analysis of the specular reflectivity, as it represents <1% of the total intensity. The oscillations are indicative of layer-by-layer growth, and from the change in oscillation period a variation of the sticking coefficient is deduced (see Methods). Only after the first three layers does one observe a damping of the oscillations, reflecting the onset of slight roughening. An additional discussion on the anti-Bragg intensity during the growth of the first monolayer of C 60 on mica is given in Supplementary Note 1 and illustrated in Supplementary Fig. 1 . While the diffuse scattering is weak, it nevertheless contains important lateral information. Figure 2c shows a map of the diffusely scattered intensity as a function of q ‖ and molecular exposure (see Supplementary Fig. 2 for a graph of the diffusely scattered intensity at a molecular exposure of 0.3 nm). In contrast to the anti-Bragg oscillations, the diffusely scattered intensity oscillates with a period of one monolayer. As the first molecules are deposited in a monolayer, the surface roughness and therefore the diffusely scattered intensity rise due to nucleation of islands. Eventually, as the islands coalesce, the roughness and diffuse intensity decrease again, before reaching a minimum for a smooth complete layer. For each C 60 layer, the diffusely scattered intensity has two maxima along q ‖ , because the characteristic average island distance D causes an increase in the diffusely scattered intensity at Δ q ‖ ≈±2 π / D (refs 39 , 40 ); a short code sketch converting this peak splitting into an island distance and density is given below. From a crystallographic perspective, we find the established 41 epitaxial order of C 60 on top of mica(001) as confirmed by grazing incidence X-ray diffraction experiments shown in Supplementary Fig. 3 and explained in Supplementary Note 2 . KMC simulations of the growth process To understand the morphological evolution on a molecular level, we employ KMC simulations, which are capable of describing the entire growth process of (coarse-grained) C 60 molecules into a face-centred cubic (fcc) lattice. KMC models the growth as a stochastic process, in which the molecules adsorb with a constant net adsorption rate f = f adsorb − f desorb . The molecules are treated on a coarse-grained level, that is, we do not take into account any internal (rotational or vibrational) degrees of freedom. This coarse-graining approach is supported by the fact that for the temperatures studied here, C 60 rotates freely both in bulk crystals 42 and in one-dimensional confinement 43 .
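As noted above, the peak splitting Δq‖ gives direct access to the island distance and, from it, an island density; a minimal conversion sketch, assuming the hexagonal island arrangement used in the later analysis (the numeric example is illustrative):

import numpy as np

def island_density_from_splitting(delta_q):
    # delta_q: peak splitting in inverse Angstrom
    D = 2 * np.pi / delta_q                  # average island distance (Angstrom)
    # hexagonal arrangement: area per island = sqrt(3)/2 * D^2
    density = 1.0 / (np.sqrt(3) / 2 * D**2)  # islands per square Angstrom
    return D, density

# example: delta_q = 0.003 A^-1 gives D of roughly 2100 A (about 0.21 um)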
Once adsorbed, a particle at site i can then diffuse to a neighbouring fcc site j via an activated process with Arrhenius-type rate r i,j . We follow the Clarke–Vvedensky bond-counting approach 44 , 45 , where the rate is defined as $$r_{i,j} = \nu_0 \exp\left(-\frac{E_\mathrm{D} + n_i E_\mathrm{B} + s_{i,j} E_\mathrm{ES}}{k_\mathrm{B} T}\right)$$ (1) The pre-factor ν 0 =2 k B T / h is chosen in accordance with previous KMC studies for atomic systems 46 , 47 , 48 , consistent with our coarse-grained description of C 60 as a sphere. The total energy barrier for molecular hopping consists of a barrier for free diffusion, E D , and contributions determined through the local neighbourhood of the particle. The neighbour binding energy E B contributes with the number of lateral neighbours n i . The sum of E D and n i E B then determines the lateral diffusion ( s i,j =0) and thus, the growth of islands. Other pre-factors to the neighbour binding energy have been suggested in the literature 19 , 24 , which increase the diffusion rate of particles along island edges. As a consequence, the islands become more compact. In our C 60 system, however, the islands are quite compact from the very onset of the growth (see Fig. 3 ). Therefore, the details of the pre-factor of E B do not significantly influence the results at the parameters considered. If a particle at site i crosses an up- or downward step to reach site j , an additional Ehrlich–Schwoebel contribution E ES is added to the total energy barrier ( s i,j =1). As a result, a particle diffusing onto an island from an edge site with two neighbours has to overcome the activation energy Δ E = E D +2 E B + E ES , while a particle on the island has to overcome only Δ E = E D + E ES to diffuse downwards over the island edge. The step-edge energy barrier used in our simulations is, by construction, an average energy barrier. For this, we recall that our energy barriers are exclusively gained by comparison with experiment, and that the experimental (X-ray scattering) data are intrinsically averaged in lateral direction. Therefore, we did not take into account the orientation of the step edge in this study. The KMC input parameters T (substrate temperature) and f (adsorption rate) are taken directly from experiment. The KMC simulations have been performed from the second layer onwards as we concentrate on the C 60 –C 60 interactions and do not model C 60 –mica interactions. This strategy is justified, as we know from the experiment that the first C 60 layer is completely filled and that there is no lattice strain; thus, we can assume a smooth C 60 (111) surface as the initial surface in simulations. Furthermore, we assume defect-free growth without cavities or overhangs. We also note that we do not take collective diffusion mechanisms into account. Different concepts for collective diffusion have been suggested in the literature, one example being dimer shearing 49 . More recently, approaches have been suggested for shearing, reptation and concerted gliding of islands 50 . These phenomena are certainly worth studying in more detail; however, it would not have been possible to simulate the time and length scales required in our study had these effects been included. Figure 3: Experimental and simulated measures of surface morphology. ( a ) Island density (inset: 2D island growth regimes as simulated by KMC; scale bar, 100 nm), ( b ) anti-Bragg growth oscillations and ( c ) layer coverages are shown as a function of the molecular exposure for a C 60 film grown at T =60 °C and f =0.1 ML min −1 . Parts b , c include data from an analytical growth model.
( d ) Maximal island density for the third layer for both a low deposition rate of 0.1 ML min −1 and a high deposition rate of 1 ML min −1 as a function of temperature. The KMC simulations have been performed from the second layer onwards. The confidence interval in a and the error bars in d are calculated from the systematic experimental uncertainties. For the complete morphology evolution during growth for T =60 °C, f =0.1 ML min −1 as well as 40 °C, 0.1 ML min −1 and 60 °C, 1 ML min −1 simulated by KMC, see Supplementary Movies 1–3 . Full size image Energy barriers for surface processes in C 60 growth For the comparison of experiment and simulations, we use the time-dependent layer coverages from KMC simulations to calculate anti-Bragg oscillations using kinematic scattering theory 51 (see Methods). The energy barriers E D , E B and E ES (see equation (1)) are then adjusted until both the simulated anti-Bragg oscillations and island densities fit the experiment. Figure 3a,b shows experimental (black dots) and KMC simulation data (red solid line) for the island density and the anti-Bragg intensity for the temperature T =60 °C. The experimental island density is directly extracted from the data in Fig. 2c , using the average island distance D ≈2 π /Δ q ‖ , assuming a hexagonal island arrangement (see also Supplementary Fig. 4 and Supplementary Note 3 for a comparison with real-space atomic force microscope data). Both experiment and simulation predict that the island density changes markedly during the deposition of each monolayer. Initially, in the nucleation regime, the island density increases. Then, lateral island growth sets in, where the island density stays constant. Finally, the island density drops again as islands merge in the coalescence regime. The inset in Fig. 3a shows the corresponding KMC simulation snapshots for the three growth regimes. In all cases, we observe compact island shapes in the simulations as well as in the experiments. A more detailed comparison of the morphology is given in Supplementary Note 4 and shown in Supplementary Fig. 5 . The sequence of growth regimes is observed for the first five layers at each temperature and deposition rate employed. As is clearly seen from Fig. 3a,b , there is excellent agreement between the experimental and simulated data regarding the island density and anti-Bragg growth oscillations. The minima and the maxima in the island density, as well as the trend of decreasing density for the different layers (increase in island size), are clearly reproduced. The apparent increase in the island density in the fifth layer, which starts to differ slightly from the true island density, indicates the limits of our data analysis. The analysis takes into account only the islands in a single, currently growing layer; however, due to the roughening of the film, islands in both the simultaneously growing 4th and 5th layers contribute to the diffuse scattering at that stage. The vertical layer filling and roughening are also highly consistent, as can be seen from the good agreement between the experimental and simulated evolution of the anti-Bragg intensity in Fig. 3b . As an independent confirmation of the KMC results, we have employed a mean-field analytical model for thin-film growth (see refs 52 , 53 , 54 ), the results of which agree with the layer coverages of the KMC simulations, as can be seen in Fig. 3c . Even beyond the specific experimental parameters chosen in Fig.
3a–c , KMC simulations show a good agreement with the experimental findings for all studied rates (0.1 and 1 ML min −1 ) and the full experimental temperature range of 40–80 °C (see Supplementary Note 5 and Supplementary Fig. 6 for a comparison at 40 °C and 0.1 ML min −1 ). This is seen in Fig. 3d , where we compare the experimental and simulated values for the maximum island density in the third monolayer. In accordance with growth theories predicting a scaling of island density with deposition rate/diffusivity 10 , 23 , we find that the island density decreases by an order of magnitude for higher substrate temperature and lower deposition rate. Furthermore, KMC simulations correctly predict the change in island density by an order of magnitude when changing deposition rate and temperature. Notably, this comprehensive agreement of temperature-, rate- and time-dependent data was achieved with a physical model of surface processes that contains only three parameters for the nanoscopic energy barriers for diffusion, nucleation and step-edge crossing. The resulting values are E D =(540±40) meV for the diffusion energy, E B =(130±20) meV for the lateral binding energy and E ES =(110±20) meV for the step-edge/Ehrlich–Schwoebel barrier (see also Fig. 1 ). For a more detailed discussion of the mutual correlations between energy parameters, see Supplementary Note 6 . Discussion It is instructive to compare the self-consistent parameter set obtained in this study to energy values reported earlier. The height of the C 60 Ehrlich–Schwoebel barrier (110 meV) is comparable to that of atomic systems, such as Pt/Pt(111) (80 meV) 24 , and is close to the value of 100 meV for C 60 from recent density functional theory calculations by Goose et al. 55 Our value for the binding energy, E B =130 meV, is smaller than that related to the minimum of the pair interaction potential of two C 60 molecules, in particular the Girifalco potential, E C60–C60 =270 meV, which has been derived theoretically 56 , 57 and has recently been measured in atomic force microscope experiments 58 . There are several factors contributing to this difference: first, we are considering molecules close to a substrate, which has not been taken into account in refs 56 , 57 but has already been shown to weaken the interaction 58 . Second, we are considering dense and thus strongly correlated systems, not two molecules in vacuum as assumed in refs 56 , 57 . Third, and maybe most importantly, our value for the binding energy has been obtained such that experimental data are fitted over a range of temperatures. It is well known that effective potentials (and thus binding energies) can strongly depend on the temperature 59 ; thus our value has to be considered as a temperature average. Finally, we stress that our value for E B is very close to an estimate gained from the cohesion energy per neighbour of C 60 in its bulk fcc crystal, E C =133 meV (1.6 eV is the total cohesion energy 60 , 61 divided by the 12 bulk lattice neighbours). Regarding our value for the diffusion barrier ( E D =540 meV), we note that this is significantly larger than the corresponding value derived from a potential landscape analysis, E pot =168 meV (ref. 62 ). This is likely due to the fact that in our KMC simulations, we do not consider all energy minima as lattice sites. Thus, the travelled distances across several minima are larger, leading effectively to a larger barrier.
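A minimal numeric sketch of the hopping rate law of equation (1), using the fitted barriers just discussed; this illustrates the rate law only and is not the authors' simulation code.

import numpy as np

KB = 8.617e-5    # Boltzmann constant (eV/K)
H = 4.136e-15    # Planck constant (eV s)

def hop_rate(T, n_lateral, crosses_step,
             E_D=0.540, E_B=0.130, E_ES=0.110):
    # r = nu0 * exp(-(E_D + n_i*E_B + s_ij*E_ES) / (kB*T)),
    # with attempt frequency nu0 = 2*kB*T/h as in the text
    nu0 = 2 * KB * T / H
    dE = E_D + n_lateral * E_B + (E_ES if crosses_step else 0.0)
    return nu0 * np.exp(-dE / (KB * T))

# free diffusion at 60 C (333 K): hop_rate(333, 0, False) is on the
# order of 1e5 hops per second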
In addition, we cannot exclude stacking faults and domains in the epitaxial C 60 adlayers, which could contribute to a larger effective diffusion barrier in our calculation as transport across domain boundaries is hindered. A more detailed comparison of our value for the diffusion barrier with values derived from pair potential calculations and molecular dynamics simulations is given in Supplementary Note 7 . Without this coarse-grained lattice strategy, the simulation of the full multilayer growth would have been impossible. Furthermore, the same strategy is used in simulations of metallic growth 24 , 63 , 64 , enabling a comparison with these studies. In addition to the quantities discussed so far, KMC simulations allow us to extract single-particle trajectories and, thus, to study the dynamics on a particle level, which is not yet possible with current experimental techniques. An example of a single C 60 particle trajectory (red) on top of a third monolayer island (light blue) is shown in Fig. 4a . Clearly, the Ehrlich–Schwoebel barrier leads to a ‘caging’ of the C 60 molecule close to the borders of the island, that is, the standard random walk behaviour is restricted by the step edge of the island. Figure 4: Particle-resolved dynamics during C 60 growth. ( a ) Trajectory of a single molecule in the 4th ML ( T =40 °C and f =1 ML min −1 ; scale bar, 5 nm). The influence of the Ehrlich–Schwoebel barrier can be clearly seen as a caging of the single C 60 molecule on the island. The letters A and B denote the adsorption of one molecule on the surface (A) and the formation of a dimer (B). ( b ) MSD = ⟨|r(t)−r(0)|^2⟩ of C 60 on C 60 (111), for T =60 °C and f =0.1 ML min −1 as a function of time spent on the surface. Results are averaged over 500 realizations. The particles considered arrive in the 2nd ML after the growth of 1.5 monolayers. For comparison, we show data for a system with atom-like ratio E D /( E D + E B )=0.34. Note that the quasi-free diffusion of C 60 extends substantially further than for atom-like systems, even if scaled by the lattice parameter, signifying the qualitatively different behaviour of C 60 . ( c ) Schematic illustration of the energy landscape for atoms, colloids and the fullerene C 60 near an island step edge: the interaction range of the different materials clearly affects the character of the step-edge barrier, as one can distinguish between a real energetic barrier and a diffusion-mediated pseudobarrier 26 . Full size image Importantly, the particle-resolved dynamics reveal crucial differences in the diffusion behaviour of C 60 and atomic systems. For C 60 on C 60 (111), the diffusion barrier E D is relatively large compared with the binding energy E B . Specifically, the ratio R = E D /( E D + E B ) is R =0.83. This is significantly larger than in typical atomic systems, such as Pt on Pt(111) where R ≈0.29–0.34, or Ag on Ag(111) with R ≈0.29–0.39 (refs 23 , 24 ). We suggest that this pronounced difference is related to the relatively short attractive interaction range of C 60 , as compared with the attraction range of atoms, if normalized to their respective size (for details see Supplementary Fig. 7 and Supplementary Notes 8 and 9 ). The comparatively large ratio R for C 60 has a profound impact on the mobility of the particles. This is shown in Fig. 4b , where we plot the mean-squared displacement, MSD = ⟨|r(t)−r(0)|^2⟩, for particles arriving between islands after the growth of 1.5 monolayers, for C 60 and for a system with an atom-like ratio R =0.34.
The linear increase with time of the C 60 MSD in the very beginning corresponds to free diffusion, depicted in grey, as the molecules perform a random walk on the underlying fcc(111) surface. After a time of about 0.1 ms, encounters with an upward island edge as well as interactions with neighbours hinder the diffusion of the molecules and the MSD saturates. Similar sub-diffusive behaviour also occurs in the atom-like system, but at much shorter times. This is because atoms can form new bonds more quickly due to the longer range of atomic interactions and the stronger binding energy. As a result, a C 60 molecule is able to explore an area that is nearly two orders of magnitude larger than in the atom-like system before it is immobilized. The different diffusion behaviour of C 60 prompts the question of the nature of the Ehrlich–Schwoebel barrier in comparison with atomic and colloidal growth. Indeed, regarding their narrow interaction range, C 60 ‘nanocolloids’ are more similar to colloids than to atoms. In colloids, the range of attractive interactions is so small that the reduced coordination associated with an edge is not ‘sensed’. This effectively leads to the vanishing of an energetic barrier at the edge. Instead, one observes a purely diffusive Ehrlich–Schwoebel barrier in colloids, arising from a lower diffusion probability along the geometrically longer path across the step edge 26 . In contrast, atoms crossing an island edge have to overcome an energetic Ehrlich–Schwoebel barrier, as bonds are missing at the step-edge. For C 60 , we can estimate an upper bound for a diffusive barrier based on the waiting time of a typical hopping process. Multiplying this time by a geometric factor (see ref. 26 ), which accounts for the longer path of a step-edge crossing, we obtain a diffusive pseudobarrier of E ES,geo =ln( F ) k B T <50 meV (see Supplementary Note 10 and Supplementary Fig. 8 for details). This is markedly smaller than the value of 110 meV obtained from the KMC simulations. We thus conclude that the Ehrlich–Schwoebel barrier in C 60 surface growth is, at least partially, of energetic character, consistent with the intermediate range of the C 60 interactions (which lies between the range of colloidal and atomic interactions). This is schematically shown in the energy landscapes for atoms, colloids and C 60 in Fig. 4c . In conclusion, the present experimental and theoretical study yields, for the first time, a quantitative description of molecular thin-film growth for the important case of C 60 , as an intermediate between atoms and colloids. We have demonstrated that in situ specular X-ray reflectivity and diffuse GISAXS oscillations are powerful tools for non-invasive real-time studies of the morphological evolution during molecular growth. Relating the experimental data to results from KMC simulations, we have been able to determine a consistent set of energy parameters governing the growth kinetics on the molecular level. In this way, we can quantitatively predict C 60 deposition at different temperatures and rates, including the evolution of island density and surface roughening with film thickness. Thus, our combined analysis provides a detailed understanding of C 60 in terms of molecular-scale processes. Moreover, our study sheds new light on various dynamical aspects accompanying the growth. In particular, we show that the colloid-like, short-ranged character of C 60 interactions leads to relatively long surface diffusion times before immobilization occurs at existing islands.
Nevertheless, the step-edge crossing barrier of C 60 differs from colloids in that it is not a pseudo-step-edge barrier arising from lower diffusion probability at a step edge, but a true energetic barrier as observed for atoms. Since C 60 features aspects of both atomic and colloidal systems, our findings will help to gain insight into island nucleation and surface growth processes for van der Waals-bound molecules between the scales of atomic and colloidal systems. This quantitative, scale-bridging understanding enables predictive simulations and a rational choice of growth conditions, which, together with molecular design and synthesis, ultimately leads to optimized design of functional materials. Methods X-ray surface scattering and thin-film preparation The X-ray surface scattering experiments during growth were carried out at the MiNaXS beamline P03 (ref. 65 ) of PETRA III (DESY, Hamburg) at an X-ray wavelength of 0.946 Å. The growth was performed in a portable ultra-high vacuum (UHV) chamber designed for molecular beam deposition, equipped with a Be window for X-ray access, a C 60 effusion cell and a quartz crystal microbalance, at a base pressure of 10 −8 mbar. Fullerene C 60 (Sigma Aldrich, >99.5% purity) was thermally deposited on cleaved mica (diameter: 10 mm, Plano GmbH) at two different deposition rates (0.1 and 1 ML min −1 ) and three different substrate temperatures (40, 60 and 80 °C) to study the rate, temperature, time and thickness dependence of the island density and layer coverage. Films were grown repeatedly on the same substrate after heating the mica substrate to ~450 °C, resulting in a clean substrate, as confirmed by specular and diffuse X-ray scattering before every growth run. The high brilliance of the beamline and high dynamic range of the PILATUS 300 K (Dectris) area detector enable a simultaneous measurement of the strong specular X-ray reflectivity and weak diffuse X-ray scattering. An incident angle of α i =1.65°, the so-called anti-Bragg position of C 60 corresponding to half the Bragg value of the (111) reflection, was chosen. Here the reflectivity shows time-dependent oscillations during layer growth, which provide information on the vertical layer filling 16 , 53 . Lateral information is available through simultaneous measurement of the diffuse scattering (GISAXS), giving information about the island distance 39 , as a function of the lateral momentum transfer q ‖ at a resolution in q ‖ of 0.001 Å −1 . We avoided beam damage due to the high photon flux at PETRA III by laterally moving the substrate during the real-time growth experiments and confirmed that pristine and previously exposed spots gave the same scattering pattern in post-growth experiments. Anti-Bragg intensity and sticking coefficient The time-dependent anti-Bragg intensity can be calculated in kinematic approximation using $$I(t) = \left| A_\mathrm{sub} e^{i\phi_\mathrm{sub}} + f(q_z)\sum_n (-1)^n \theta_n(t) \right|^2$$ (2) with the layer coverages θ n for the n th layer. The substrate amplitude A sub , the substrate phase ϕ sub and the molecular form factor f ( q z ) are determined by the maximal, minimal and saturation intensities of the real-time experiment 51 . The anti-Bragg intensity for the KMC simulations was calculated using equation (2) and the simulated layer coverages shown in Fig. 3c . Furthermore, we have fitted the experimental data according to analytical growth models 51 , 53 to extract the coverage evolution for each layer.
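A sketch of the kinematic anti-Bragg intensity of equation (2) computed from time-dependent layer coverages; the alternating sign reproduces the two-monolayer oscillation period discussed above, and all parameter values are placeholders.

import numpy as np

def anti_bragg_intensity(theta, A_sub, phi_sub, f_qz):
    # theta: (n_layers, n_times) coverage of each layer versus time;
    # consecutive layers contribute with alternating sign at the
    # anti-Bragg point before the modulus is squared
    signs = (-1.0) ** np.arange(theta.shape[0])
    film = f_qz * (signs[:, None] * theta).sum(axis=0)
    return np.abs(A_sub * np.exp(1j * phi_sub) + film) ** 2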
In addition, we can extract the sticking coefficient from the anti-Bragg growth oscillations; it is found to decrease during the growth of the first four layers for all studied temperatures. Quantitatively, we find for a temperature of 60 °C that, with respect to the growth of the first monolayer, the sticking coefficient decreases by 5% in the 2nd ML, 25% in the 3rd ML and 30% from the 4th layer onwards. This decrease is due to the different mica–C60 and C60–C60 interactions. It is further influenced by a different island density in each layer, which leads to a change in the free diffusion times and aggregation behaviour. In our KMC simulations, which otherwise assume complete condensation, we have accounted for the changing sticking coefficient by scaling the molecular exposure axis accordingly. The same sticking coefficients have also been included in our analytical mean-field modelling. Time step in KMC simulations Assuming that exactly one process takes place in one simulated time step, we can define an average time-step length as $$\Delta t=\frac{1}{\sum _{i}{r}_{i}},$$ where the sum runs over the rates r_i of all processes possible in the current configuration. This time unit allows us to compare simulated and experimental timescales. The simulation is carried out on a triangular lattice; in this way, the growth process generates an fcc structure in accordance with the C60 bulk crystal (see the study of Cox et al. 22 for a similar simulation strategy for the growth of Ag on Ag(111), and that of Heinrichs et al. 66 for corresponding theoretical considerations). The starting point of the simulation is a completely filled, defect-free layer of C60 molecules (corresponding to the C60(111) surface). Within the subsequent growth process, we exclude the formation of overhangs. To achieve this, we assume that particles on overhang sites relax instantaneously (with a relaxation probability proportional to the corresponding diffusion rate) until they reach a stable site. Typical simulations involve a lattice with 1,000 × 1,000 unit cells and cover a time range of up to 4,000 s, corresponding to 10^11–10^12 events. Additional information How to cite this article: Bommel, S. et al. Unravelling the multilayer growth of the fullerene C60 in real-time. Nat. Commun. 5:5388 doi: 10.1038/ncomms6388 (2014).
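The time-step definition above can be illustrated with a rejection-free KMC step. The sketch below is our illustration, not the authors' simulation code; the attempt frequency and all activation energies except the 110 meV Ehrlich–Schwoebel barrier quoted in the text are placeholder assumptions:

```python
# Minimal sketch of one rejection-free kinetic Monte Carlo step. Exactly one
# process fires per step; with total rate R, the average time-step length is
# dt = 1/R, matching the expression above. Rates follow an Arrhenius form.
import numpy as np

rng = np.random.default_rng(1)
kBT = 0.027  # eV, roughly 40 degrees C

def arrhenius(E_a, nu0=1.0e12):
    """Hop rate for activation energy E_a (eV); nu0 (1/s) is a placeholder."""
    return nu0 * np.exp(-E_a / kBT)

# Illustrative process list; 0.11 eV reflects the Ehrlich-Schwoebel barrier
# quoted in the text, the other energies are placeholders.
rates = np.array([
    arrhenius(0.54),           # free surface diffusion (placeholder barrier)
    arrhenius(0.54 + 0.13),    # detachment from an island edge (placeholder)
    arrhenius(0.54 + 0.11),    # step-edge crossing (adds E_ES = 110 meV)
])

def kmc_step(rates, t):
    R = rates.sum()
    event = rng.choice(len(rates), p=rates / R)  # pick a process ~ its rate
    return event, t + 1.0 / R                    # advance by the mean dt = 1/R

event, t = kmc_step(rates, 0.0)
```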
Using DESY's ultrabright X-ray source PETRA III, researchers have observed in real time how football-shaped carbon molecules arrange themselves into ultra-smooth layers. Together with theoretical simulations, the investigation reveals the fundamentals of this growth process for the first time in detail, as the team led by Sebastian Bommel (DESY and Humboldt-Universität zu Berlin) and Nicola Kleppmann (Technische Universität Berlin) reports in the scientific journal Nature Communications. This knowledge will eventually enable scientists to tailor nanostructures from these carbon molecules for certain applications, which play an increasing role in the promising field of plastic electronics. The team consisted of scientists from Humboldt-Universität zu Berlin, Technische Universität Berlin, Universität Tübingen and DESY. The scientists studied so-called buckyballs. Buckyballs are spherical molecules consisting of 60 carbon atoms (C60). Because they are reminiscent of American architect Richard Buckminster Fuller's geodesic domes, they were christened buckminsterfullerenes or "buckyballs" for short. With their structure of alternating pentagons and hexagons, they also resemble tiny molecular footballs. Using DESY's X-ray source PETRA III, the researchers observed how buckyballs settle on a substrate from a molecular vapour. In fact, one layer after another, the carbon molecules grow predominantly in islands only one molecule high and barely form tower-like structures. "The first layer is 99% complete before 1% of the second layer is formed," explains DESY researcher Bommel, who is completing his doctorate in Prof. Stefan Kowarik's group at the Humboldt-Universität zu Berlin. This is how extremely smooth layers form. "To really observe the growth process in real time, we needed to measure the surfaces on a molecular level faster than a single layer grows, which takes place in about a minute," says co-author Dr. Stephan Roth, head of the P03 measuring station, where the experiments were carried out. "X-ray investigations are well suited, as they can trace the growth process in detail." "In order to understand the evolution of the surface morphology at the molecular level, we carried out extensive simulations in a non-equilibrium system. These describe the entire growth process of C60 molecules into a lattice structure," explains Kleppmann, PhD student in Prof. Sabine Klapp's group at the Institute of Theoretical Physics, Technische Universität Berlin. "Our results provide fundamental insights into the molecular growth processes of a system that forms an important link between the world of atoms and that of colloids." Through the combination of experimental observations and theoretical simulations, the scientists determined for the first time three major energy parameters simultaneously for such a system: the binding energy between the football molecules, the so-called "diffusion barrier," which a molecule must overcome if it wants to move on the surface, and the Ehrlich-Schwoebel barrier, which a molecule must overcome if it lands on an island and wants to hop down from that island. "With these values, we now really understand for the first time how such nanostructures come into existence," stresses Bommel. "Using this knowledge, it is conceivable that these structures can selectively be grown in the future: How must I change my temperature and deposition rate parameters so that an island of a particular size will grow?
This could, for example, be interesting for organic solar cells, which contain C60." The researchers intend to explore the growth of other molecular systems in the future using the same methods.
10.1038/ncomms6388
Other
New horned dinosaur reveals unique wing-shaped headgear
Michael J. Ryan, David C. Evans, Philip J. Currie, and Mark A. Loewen. 2014. "A new chasmosaurine from northern Laramidia expands frill disparity in ceratopsid dinosaurs" DOI: 10.1007/s00114-014-1183-1 Journal information: Naturwissenschaften
http://dx.doi.org/10.1007/s00114-014-1183-1
https://phys.org/news/2014-06-horned-dinosaur-reveals-unique-wing-shaped.html
Abstract A new taxon of chasmosaurine ceratopsid demonstrates unexpected disparity in parietosquamosal frill shape among ceratopsid dinosaurs early in their evolutionary radiation. The new taxon is described based on two apomorphic squamosals collected from approximately time-equivalent (approximately 77 million years old) sections of the upper Judith River Formation, Montana, and the lower Dinosaur Park Formation of Dinosaur Provincial Park, Alberta. It is referred to Chasmosaurinae based on the inferred elongate morphology. The typical chasmosaurine squamosal forms an obtuse triangle in dorsal view that tapers towards the posterolateral corner of the frill. In dorsal view of the new taxon, the lateral margin of the squamosal is hatchet-shaped, with the posterior portion modified into a constricted narrow bar that would have supported the lateral margin of a robust parietal. The new taxon represents the oldest chasmosaurine from Canada, and the first pre-Maastrichtian ceratopsid to have been collected on both sides of the Canada–US border, with a minimum north–south range of 380 km. This squamosal morphology would have given the frill of the new taxon a unique dorsal profile that represents evolutionary experimentation in frill signalling near the origin of chasmosaurine ceratopsids and reinforces biogeographic differences between northern and southern faunal provinces in the Campanian of North America. Introduction Ceratopsidae is a diverse clade of large-bodied horned dinosaurs known from a well-sampled fossil record that spans the last 20 million years of the Mesozoic (Dodson et al. 2004). Ceratopsids are common in the Campanian–Maastrichtian deposits of western North America, where they range geographically from Coahuila, Mexico in the south (Coahuilaceratops magnacuerna; Loewen et al. 2010), to northern Alaska in the north (Pachyrhinosaurus perotorum; Fiorillo and Tykoski 2012). Marked latitudinal differences between the dinosaur faunas of the late Campanian have led to the recognition of northern (Wyoming and north) and southern (Utah and south) faunal provinces adjacent to the Western Interior Seaway during this time (Lehman 1987, 1997; Sampson et al. 2010). Ceratopsians reached their maximum diversity during the Campanian and provide some of the strongest evidence for provinciality in Late Cretaceous dinosaurs (Sampson et al. 2010, 2013; Farke 2013; Ryan 2013). Ceratopsidae consists of two major subclades that form a basal dichotomy: Centrosaurinae, or the 'short-frilled' ceratopsids, and Chasmosaurinae, which generally have relatively longer, less adorned frills. The two clades have traditionally been easily distinguished by the size and shape of their squamosals, which form the lateral segments of the parietosquamosal frill. Centrosaurines have broad, rectangular squamosals with typically concave contact surfaces (in adult-sized elements) for the parietal on the medial margin of the element. In contrast, chasmosaurines have an elongate, triangular squamosal with a corresponding contact surface for the parietal on the medioventral margin of the posterior squamosal. A recent morphometric study has confirmed that the distinctive squamosal shape of each subfamily is very conservative, with some minor proportional differences within each subfamily (Maiorino et al. 2013).
Chasmosaurinae, in particular, exhibit little variation in squamosal shape, with little clear differentiation among taxa. Despite the importance of the parietosquamosal frill in ceratopsian systematics, the shape and ornamentation of the parietal appears to have been the locus of evolution within Ceratopsidae, with many taxa being diagnosed exclusively from this element. The squamosal defines the lateral shape of the frill and reflects more general differences in frill structure within the subfamilies; its conservative morphology suggests that this aspect of the frill was generally stable within each subfamily. A remarkable new chasmosaurine ceratopsid is described based on material collected from approximately time-equivalent middle Campanian exposures of the Judith River Formation of Montana and the Dinosaur Park Formation of Alberta (Fig. 1). The new taxon is represented by two well-preserved squamosals and reveals previously unknown disparity in ceratopsid frill shape. It departs significantly from the conservative frill shape currently known in ceratopsids by an unusual modification of the squamosal that results in a hatchet-shaped lateral frill margin, rather than the convex-to-straight lateral margins that characterise all other known taxa. The new, apparently rare, taxon provides additional support for the faunal differentiation of northern and southern biogeographic provinces on Laramidia during the late Campanian. It also suggests that the evolutionary origin of the derived elongate triangular squamosal of chasmosaurs may have been more complex than previously believed. Fig. 1 Locality map of UALVP 54559 from the Dinosaur Park Formation of Dinosaur Provincial Park, Alberta, and ROM 64222 from the Judith River Formation of Montana. Alberta and Montana are silhouetted in black on the North America inset map. Google Earth image insets: Google, Digital Globe (Dinosaur Provincial Park) and Google, USDA Farm Service Agency (Montana). Institutional Abbreviations ROM, Royal Ontario Museum, Toronto, Canada; UALVP, University of Alberta, Laboratory for Vertebrate Paleontology, Edmonton, Canada. Systematic Paleontology Ornithischia Seeley 1888 Ceratopsia Marsh 1890 Neoceratopsia Sereno 1986 Ceratopsidae Marsh 1888 Chasmosaurinae Lambe 1915 Mercuriceratops gen. nov. urn:lsid:zoobank.org:act:70D4C099-1A46-4192-B8A1-D134E49D4861 Type species Mercuriceratops gemini sp. nov. Diagnosis Same as for species, by monotypy. Mercuriceratops gemini sp. nov. urn:lsid:zoobank.org:act:70D4C099-1A46-4192-B8A1-D134E49D4861 Holotype ROM 64222: an almost complete right squamosal (Fig. 2a–d). Fig. 2 Mercuriceratops gemini squamosal. ROM 64222 (holotype) in a, c line drawing and photograph of dorsal view; b, d line drawing and photograph of ventral view. Inset reconstruction of M. gemini in lateral view. es#, episquamosal #. Scale bar = 20 cm. Etymology Mercuri, in reference to the winged helmet of the Roman messenger god Mercury, and 'ceratops', meaning horned face, a common suffix for genera of ceratopsid dinosaurs. Gemini: in mythology, the twins Castor and Pollux were transformed into the constellation Gemini, referring to the twin specimens from Alberta and Montana. Referred material UALVP 54559, an incomplete right squamosal (Fig. 3a and b). Fig. 3 Mercuriceratops gemini squamosal. UALVP 54559 (paratype) in a dorsal and b ventral views. es#, episquamosal #.
Scale bar = 10 cm. Locality and horizon ROM 64222 was derived from the upper Judith River Formation, Fergus County, Montana, SW 1/4 Sec 9, T22N R21E (Fig. 1). Detailed locality data are on file at the Royal Ontario Museum. UALVP 54559 was found on the north side of the Red Deer River, approximately 1 km east of 'Happy Jack's cabin', 12 U 0471467, 5624176, Dinosaur Provincial Park, Alberta, lower Dinosaur Park Formation (Fig. 1), approximately 2 m above the contact with the Oldman Formation. Diagnosis Differs from all other chasmosaurines in having a squamosal with a constricted posterior ramus that is rod-shaped rather than having a tapering, obtuse triangular shape (most chasmosaurines) or a broadly rounded triangular shape (Diceratops, Ojoceratops, and Triceratops) in dorsal view. The expanded anterolateral flange (posterior to the otic notch in dorsal view and posterior to the quadrate groove ventrally) is similar to the same region in most ceratopsids and bears four large, tab-shaped episquamosals. Elongate episquamosals also extend along the lateral margin of the posteriorly projecting bar. Comments Of note is the presence of a fragment of what would have been an elongate, robust postorbital horncore, found in the wash channel below UALVP 54559. Although this element cannot be definitively associated with UALVP 54559, multiple fragments of the squamosal were collected from the same area, suggesting a possible association between the squamosal and the horncore fragments. Description The new taxon is represented by two incomplete right squamosals, ROM 64222 (Fig. 2a–d) and UALVP 54559 (Fig. 3a and b). The specimens are similar in size and would have been derived from large, adult-sized animals. ROM 64222 is relatively gracile compared to UALVP 54559. ROM 64222 preserves most of the relatively thin anterior (prequadrate groove) blade that contacts the postorbital anteromedially and the jugal anterolaterally, and both specimens preserve at least a portion of the jugal projection. Each squamosal is apomorphic in being constricted just posterior to the quadrate groove, such that most of the posterior blade is modified from the dorsoventrally thin, posteriorly tapering blade seen in all other chasmosaurines into a mediolaterally compressed shaft with a subrectangular cross-section. The preconstriction, anterior portion of the squamosal, with four large, well-fused episquamosals present on UALVP 54559, superficially resembles the flange-like posterior portion of a centrosaurine squamosal. The posterolateral margin of this flange is broken away on ROM 64222, but loci for (or the fused) episquamosals 1 and 2 are preserved. More of the posterior squamosal shaft is preserved on ROM 64222, with at least one low, long-based episquamosal present on the lateral edge. None are preserved on the posterior part of the shaft of UALVP 54559, but this shaft is broken just anterior to where the first episquamosal on the shaft would be expected based on the morphology of ROM 64222. The medioventral portion of the shaft of UALVP 54559 has deeply incised, longitudinal grooves that would have tightly interdigitated with the parietal. The same surface of ROM 64222 is relatively smooth and flat and lacks any deep grooving, although it does preserve the elongate contact surface for the parietal. ROM 64222 (Fig.
2a–d) is an almost complete right squamosal (maximum preserved length = 793 mm) that preserves most of the anterior contact for the postorbital, the medial surface that forms the margin of the supratemporal fenestra anteriorly and contacts the parietal posteriorly, and most of the lateral margin. The posterior portion of the modified shaft is missing, as is part of the posterior margin of the expanded anterolateral flange. Although the specimen is somewhat fractured, both dorsal and ventral surfaces are well preserved. In dorsal view, the medial margin of the element forms a broad concave arc, with the constricted posterior shaft forming the posterior one half of the element. The base of the shaft (111 mm in length and 31.6 mm in thickness) is medially offset from the anterolaterally projecting flange portion of the squamosal by a deep notch that gives the body of the element a hatchet shape. Although the posterior margin of this flange is partially broken, loci for episquamosals 1 and 2 are present. The jugal notch is wide and most of the jugal flange is preserved. The anterodorsal margin of the infratemporal fenestra appears to preserve a contact surface for the jugal, suggesting that it formed most of the anterior margin of this opening. Ventrally, the element resembles other ceratopsids in the arrangement of the contacts with the quadrate and the exoccipital. The relatively narrow, elongate contact surface for the parietal is lightly inscribed along the medioventral margin of the posterior shaft. UALVP 54559 (Fig. 3a and b) is a partial right squamosal (maximum preserved length = 470 mm) missing the anterior portion of the blade and most of the elongate squamosal 'bar' that contacts the parietal medially. The element is robust and most likely came from a larger, more mature individual than ROM 64222. The proximal portion of the jugal notch (base of the jugal flange) is partially preserved and indicates that the notch would have resembled those of most chasmosaurines. The dorsal surface of the anterolateral flange is convex, similar to the dorsal surface of most ceratopsid squamosals. This surface is rugose, with several deeply inscribed vascular grooves. The anterolateral flange has four dorsally reflected scallops (loci), each capped by a dorsoventrally compressed, well-fused episquamosal. The narrow, crescent-shaped episquamosal 1 is the largest episquamosal, with a basal length of 188 mm and a height of 45 mm at its midpoint. Its base is only visible on the ventral surface. The almost indistinguishable episquamosals 2 and 3 are both low and relatively long-based (61.5 and 51 mm basal lengths, 15 and 20 mm heights, respectively). The large, tab-like scallop forming episquamosal locus 4 is capped by a crescentic episquamosal that covers the apex and anterior margin on the dorsal surface. The episquamosal 4 process has a basal length of 65 mm and a height of 43 mm; the base of this episquamosal cannot be identified on the ventral surface. The entire lateral margin of the anterolateral flange is thick, but the blade thins medially towards the midpoint of the body of the flange. There is a pronounced, small, concave depression (~45 mm in diameter) on its dorsal surface, proximal to the base of episquamosal 3 and between episquamosals 2 and 3, possibly due to crushing. The ventral surface of each episquamosal is rugose and crossed by several deeply imprinted vascular grooves.
The shaft-like posteromedial extension of the anterior blade is separated from the anteromedial flange by a deep embayment, as in ROM 64222. The preserved shaft is approximately rectangular in cross-section, although the medial and ventromedial surfaces have been modified by several long, deep, longitudinal grooves, representing the contact surfaces for the parietal, that are up to one half the thickness of the shaft. Based on the size and depth of the grooves, and the overall robustness of the shaft, the contacting lateral portion of the parietal can be inferred to have been massive. These grooves narrow and converge anteriorly onto the thin ventromedial margin of the anterior blade (margin of the supratemporal fenestra) at the level of the perpendicular quadrate groove. On the dorsal surface of the shaft, three narrow vascular grooves anastomose adjacent to the break and form a single groove parallel to the margin of the remaining shaft. On the ventral surface of the flange, a thin sheet of bone adjacent to the quadrate groove extends anteriorly, forming the ventral wall of a deep, broad cavity that would have been confluent with the supratemporal fenestra. The preserved dorsal surface of the anteromedial flange adjacent to the postorbital is thick and has two low, rounded bumps, as in most ceratopsids. Discussion Mercuriceratops can be unequivocally referred to Chasmosaurinae based on the elongate posterior region of the squamosal, with a distinct medioventral contact for the lateral parietal process. These characters have been shown to unambiguously diagnose Chasmosaurinae in all recent phylogenetic analyses of the group (e.g., Sampson et al. 2010; Mallon et al. 2011), and occur in both ROM 64222 and UALVP 54559. Mercuriceratops is distinct from all other known chasmosaurines and is diagnosed by the apomorphic, hatchet-shaped lateral margin of the squamosal, which differs considerably from the typically straight-margined, triangular squamosals of most chasmosaurines and the more rounded lateral margins of Nedoceratops, Ojoceratops, and Triceratops. Whereas the anterior blade (anterolateral flange) of the squamosal of Mercuriceratops is of typical ceratopsid shape, a strap-like posterolateral bar offset from the lateralmost margin of the anterior blade by a deep embayment is unique within chasmosaurines. The lateral margin of the anterior flange has four low, long-based episquamosals, and at least one well-fused episquamosal is preserved at the base of the posterolateral bar on ROM 64222, followed by the broken base of at least one more; a complete squamosal may well have had at least eight episquamosals per side, typical of most chasmosaurines. The unusual morphology of the squamosal cannot be explained by pathology or modification through bone resorption. Although pathological elements are not uncommon in putatively old, mature individuals (Rega et al. 2010), there is no indication on either element of injury and rehealing, or of bone loss/growth due to pathology. Some chasmosaurines are known to develop fenestrae in their squamosals (Tanke and Farke 2007; Tanke and Rothschild 2010), but no known pathological specimens have morphologies similar to that manifested in ROM 64222 and UALVP 54559. The presence of episquamosals on the lateral margin of the posterior bar of ROM 64222 in the area of the notch confirms that this bar is not a result of the modification or loss of the lateral margin of the squamosal.
The presence of almost identical morphologies on squamosals from two different formations in two widely separated geographic areas strongly suggests that ROM 64222 and UALVP 54559 represent a previously undescribed taxon of chasmosaurine ceratopsid. Although ROM 64222 and UALVP 54559 are strikingly similar, the two specimens do show variation in several features, including the size (angle) of the jugal notch, the shape and size of the parietal contact, and the orientation of the posterolateral bar; however, this is conservatively interpreted as individual and/or ontogenetic variation within a single taxon. The angle of the jugal notch is highly variable in all ceratopsid taxa (Maiorino et al. 2013) and probably does not have any taxonomic utility (contra Sullivan and Lucas 2010). The size and thickness of the posterior bar on UALVP 54559 is consistent with size- and shape-related changes seen in progressively older Marginocephalia growth stages (e.g., Scannella and Horner 2010; Horner and Goodwin 2009). Although both specimens pertain to individuals that were similar in size to adult Chasmosaurus specimens from Alberta, we consider UALVP 54559 to be from a more mature, more robust individual. This would explain the long, deep, interdigitating suture for the parietal that, by inference, must also have been robust. The more gracile ROM 64222, while close to, or possibly at, full adult size, may not have reached full maturity at the time of death. The presence of a well-fused episquamosal near the base of the shaft of ROM 64222 does not contradict this suggestion, because the ontogenetic pattern of episquamosal fusion is anterior to posterior in chasmosaurines (Sampson et al. 1997), and the unpreserved, posteriorly positioned episquamosals of this specimen may not have been fused. Given the limited material available, and the presence of considerable individual and ontogenetic variation in frill morphology within ceratopsids, ROM 64222 and UALVP 54559 are both referred to the same taxon; however, the differences between the specimens may be recognised as taxonomically significant when more material is collected. Dodson (1993) examined phylogenetic shape changes in ceratopsian skulls, while Maiorino et al. (2013) quantified shape differences in the squamosals of ceratopsids; both confirmed the well-established truism that Centrosaurinae and Chasmosaurinae each have a distinctive, conservative shape that distinguishes them at the subfamily level. With its hatchet-shaped lateral margin, the structure of the frill of Mercuriceratops demonstrates unexpected disparity in cranial shape among ceratopsid dinosaurs (Fig. 4). In some ways, the morphology bridges the morphological gap between the plesiomorphic rectangular squamosal of basal neoceratopsians and centrosaurine ceratopsids and the elongate triangular one of chasmosaurines (Fig. 4; Maiorino et al. 2013). However, this hypothetical evolutionary transition series is not reflected in the available ontogenetic series for Chasmosaurus from the Dinosaur Park Formation, in which even the smallest squamosals (e.g., TMP 1998.128.1) have a triangular shape similar to those of more mature individuals. We therefore infer that the unique frill morphology of Mercuriceratops is apomorphic for this taxon and sets it apart from all other ceratopsids. Fig.
4 Line diagrams of ceratopsid parietosquamosal frills in dorsal view, illustrating the extreme difference in squamosal shape of b the chasmosaurine Mercuriceratops gemini compared with the basal centrosaurine a Xenoceratops foremostensis from the Foremost Formation, Alberta, and a typical chasmosaurine c Chasmosaurus russelli from the Dinosaur Park Formation of Alberta. Frills are approximately to scale. Mercuriceratops is the oldest known chasmosaurine in the well-documented succession of chasmosaur taxa from the Campanian of Alberta (Godfrey and Holmes 1995; Holmes et al. 2001; Ryan and Evans 2005; Mallon et al. 2012). UALVP 54559 was collected from the Dinosaur Park Formation of Alberta, approximately 2 m above the contact with the Oldman Formation, making it approximately 77 Ma (Eberth 2005). The Canadian Campanian chasmosaurine taxa are well established, with most taxa known from multiple skulls (Ryan and Evans 2005); UALVP 54559 is clearly morphologically distinct from all chasmosaurines recovered from this formation, including Chasmosaurus belli, Chasmosaurus russelli, and Vagaceratops irvinensis, which all have long, triangular squamosals (Maiorino et al. 2013). ROM 64222 was collected from a locality above the SD2 disconformity (Rogers and Kidwell 2000) of the upper Judith River Formation of Montana. The locality of ROM 64222 is approximately time equivalent to the lower portion of the Dinosaur Park Formation (Rogers, personal communication), allowing us to infer an approximate time equivalency for the two specimens. The Campanian record of Chasmosaurinae in Montana is relatively depauperate, with only two putative taxa based on a small amount of material from the Judith River Formation. The fragmentary, non-associated specimens referred to Judiceratops tigris (Longrich 2013) come from the lower portion of the formation, making the material significantly older than Mercuriceratops. The material probably represents a distinct taxon, although it is equivocal whether the reported diagnostic characters are supportable. The squamosal YPM VPPU 023262 referred to Judiceratops is characterised as having a typical chasmosaurine shape, thus eliminating its referral to Mercuriceratops. Medusaceratops lokii (Ryan et al. 2010) was described based on material collected from a bone bed in the lower Judith River Formation of Montana, in the Kennedy Coulee system just south of the Alberta border. Material from the bone bed was originally referred to the contemporaneous Albertaceratops nesmoi from equivalent beds in the lowermost Oldman Formation of Alberta (Ryan 2007), making it approximately 1 Ma older than UALVP 54559. Ryan et al. (2010) referred two anomalous parietals from the locality to the new chasmosaurine M. lokii, while leaving the remainder of the material tentatively referred to Albertaceratops. Although squamosals are poorly represented from the Montanan bone bed, the most complete specimens possess an elongate, concave sutural surface for the parietal on the medial surface of the posterior blade that is typically centrosaurine; no typical chasmosaurine or Mercuriceratops-like squamosals are known from the locality (Ryan 2007; Ryan et al. 2010; Ryan, unpublished data). Ceratops montanus Marsh 1888 (nomen dubium), based on an occipital condyle and a pair of large postorbital horncores (USNM 2411), may represent a chasmosaurine, but is currently regarded as a nomen dubium due to the lack of diagnostic characters in the holotype specimen (Dodson et al.
2004 ). Campanian dinosaurs have been hypothesised to occur in two distinct paleobiogeographic provinces within the Western Interior of North America (Lehman 1987 , 1997 ; Sampson and Loewen 2010 ; Farke 2013 ; Loewen et al. 2013 ; Ryan 2013 ). The northern zone is typified by fossils collected in the Belly River Group of southern Alberta (Eberth and Hamblin 1993 ) and the Two Medicine/Judith River clastic wedge in Montana. The southern zone is less well characterised, due, in part, to a less intense and shorter collection history, but includes taxa from the Kaiparowits, Fruitland, and Aguja formations of Utah, New Mexico, and Texas, respectively. Centrosaurine ceratopsids are well known from both the north and south biogeographic zones and provide some of the strongest evidence in support of provinciality (Sampson et al. 2010 , 2013 ). The recognition of the new, anatomically unique chasmosaurine, Mercuriceratops , in the well-sampled Dinosaur Park and Judith River formations will have important implications for characterising dinosaur provinciality within Laramidia.
Scientists have named a new species of horned dinosaur (ceratopsian) based on fossils collected from Montana in the United States and Alberta, Canada. Mercuriceratops (mer-cure-E-sare-ah-tops) gemini was approximately 6 meters (20 feet) long and weighed more than 2 tons. It lived about 77 million years ago during the Late Cretaceous Period. Research describing the new species is published online in the journal Naturwissenschaften. Mercuriceratops (Mercuri + ceratops) means "Mercury horned-face," referring to the wing-like ornamentation on its head that resembles the wings on the helmet of the Roman god, Mercury. The name "gemini" refers to the almost identical twin specimens found in north central Montana and the UNESCO World Heritage Site, Dinosaur Provincial Park, in Alberta, Canada. Mercuriceratops had a parrot-like beak and probably had two long brow horns above its eyes. It was a plant-eating dinosaur. "Mercuriceratops took a unique evolutionary path that shaped the large frill on the back of its skull into protruding wings like the decorative fins on classic 1950s cars. It definitely would have stood out from the herd during the Late Cretaceous," said lead author Dr. Michael Ryan, curator of vertebrate paleontology at The Cleveland Museum of Natural History. "Horned dinosaurs in North America used their elaborate skull ornamentation to identify each other and to attract mates—not just for protection from predators. The wing-like protrusions on the sides of its frill may have offered male Mercuriceratops a competitive advantage in attracting mates." This image depicts Mercuriceratops gemini skull fossils from the right side of the frill. Credit: Naturwissenschaften "The butterfly-shaped frill, or neck shield, of Mercuriceratops is unlike anything we have seen before," said co-author Dr. David Evans, curator of vertebrate palaeontology at the Royal Ontario Museum. "Mercuriceratops shows that evolution gave rise to much greater variation in horned dinosaur headgear than we had previously suspected." The new dinosaur is described from skull fragments from two individuals collected from the Judith River Formation of Montana and the Dinosaur Park Formation of Alberta. The Montana specimen was originally collected on private land and acquired by the Royal Ontario Museum. The Alberta specimen was collected by Susan Owen-Kagen, a preparator in Dr. Philip Currie's lab at the University of Alberta. "Susan showed me her specimen during one of my trips to Alberta," said Ryan. "When I saw it, I instantly recognized it as being from the same type of dinosaur that the Royal Ontario Museum had from Montana." "The Alberta specimen confirmed that the fossil from Montana was not a pathological specimen, nor had it somehow been distorted during the process of fossilization," said Dr. Philip Currie, professor and Canada research chair in dinosaur paleobiology at the University of Alberta. "The two fossils—squamosal bones from the side of the frill—have all the features you would expect, just presented in a unique shape." Artist reconstruction of Mercuriceratops gemini, a new species of horned dinosaur that had wing-shaped ornamentation on the sides of its skull. Credit: Danielle Dufault "This discovery of a previously unknown species in relatively well-studied rocks underscores that we still have many more new species of dinosaurs left to find," said co-author Dr. Mark Loewen, research associate at the Natural History Museum of Utah.
This dinosaur is just the latest in a series of new finds made by Ryan and Evans as part of their Southern Alberta Dinosaur Project, which is designed to fill in gaps in our knowledge of Late Cretaceous dinosaurs and to study their evolution. The project focuses on the paleontology of some of the oldest dinosaur-bearing rocks in Alberta and the neighbouring rocks of northern Montana, which are of the same age.
10.1007/s00114-014-1183-1
Medicine
Antibodies may reveal timing of previous influenza infection
Nguyen Thi Duy Nhat et al. Structure of general-population antibody titer distributions to influenza A virus, Scientific Reports (2017). DOI: 10.1038/s41598-017-06177-0 Journal information: Scientific Reports
http://dx.doi.org/10.1038/s41598-017-06177-0
https://medicalxpress.com/news/2017-08-antibodies-reveal-previous-influenza-infection.html
Abstract Seroepidemiological studies aim to understand population-level exposure and immunity to infectious diseases. Their results are normally presented as binary outcomes describing the presence or absence of pathogen-specific antibody, despite the fact that many assays measure continuous quantities. A population's natural distribution of antibody titers to an endemic infectious disease may include information on multiple serological states – naiveté, recent infection, non-recent infection, childhood infection – depending on the disease in question and the acquisition and waning patterns of immunity. In this study, we investigate 20,152 general-population serum samples collected in southern Vietnam between 2009 and 2013, from which we report antibody titers to the influenza virus HA1 protein using a continuous titer measurement from a protein microarray assay. We describe the distributions of antibody titers to subtypes 2009 H1N1 and H3N2. Using a model selection approach to fit mixture distributions, we show that 2009 H1N1 antibody titers fall into four titer subgroups and that H3N2 titers fall into three subgroups. For H1N1, our interpretation is that the two highest-titer subgroups correspond to recent and historical infection, which is consistent with 2009 pandemic attack rates. Similar interpretations are available for H3N2, but right-censoring of titers makes these interpretations difficult to validate. Introduction The distribution of antibodies in a human population is a fossil imprint of the population's past exposure to infectious disease. If individuals' antibody concentrations can be measured accurately, they can be used to infer both the size and timing of past epidemics. The two key post-epidemic processes that need to be measured to make this inference possible are the rate of antibody acquisition and the rate of antibody waning. The rate of antibody acquisition post-infection is rapid (weeks) for most viral pathogens, but more difficult to measure for more complex pathogens that present the immune system with a diverse set of antigens. The rate of antibody waning, however, is rarely measured even for viral pathogens. To correctly translate a population's antibody titer distribution into its epidemic history, accurate measures of both these rates are necessary. To validate that this reconstruction has been done correctly, a large cohort with long-term follow-up and precise antibody measurements would be required. Studies like these are difficult to run and difficult to find in the scientific literature – both in methodological development and in field implementation. Further complicating the issue is that antibody measurements are rarely 100% specific, and that low-level cross-reactive antibodies are often ignored by setting a cut-off for positivity. To begin investigating what an antibody distribution can tell us about a population's epidemic history, we initiated a large-scale time-structured serological survey 1, 2 and an observational clinical study that includes repeat patient follow-ups to measure rates of antibody waning 3; the results of the serological survey are presented here. Influenza A virus was chosen as the pathogen of interest as (i) it is an important, globally circulating human pathogen, (ii) influenza is well characterized antigenically, (iii) a precise and repeatable serological assay was available, and (iv) the human population receives almost no influenza vaccination in our study location of southern Vietnam.
The first aim of this study was to move away from the binary approach to serology – which classifies individuals as seropositive or seronegative 4, 5, 6, 7, 8 – and to describe the underlying structure of a general-population antibody-titer distribution by assuming that an individual can belong to any number of serological states. The rationale for a detailed descriptive analysis of antibody titer distributions is that titer groups or titer ranges may be able to provide differentiating information on the type of infection, e.g. recent versus non-recent infections, or primary versus non-primary. The binary approach of classifying individuals as seropositive and seronegative is not as informative as it could be given the richness of some serological datasets, and it is already known to have two practical drawbacks. First, the cut-off value for seropositivity is typically calibrated from a group of patients with confirmed acute infection, by collecting convalescent serum samples a few weeks or a few months after symptom onset. This means that the correct application of the cut-off value is the identification of recent symptomatic infections rather than any past infections. Thus, applying this threshold to a population-wide serological cross-section will likely result in an underestimate of the seroprevalence. Second, binary classification in serology results in incorrect or inconclusive classifications for samples with borderline measurements 8, 9, 10. Non-binary analyses of serological data are present in the literature for a range of pathogens 10, 11, 12, 13, 14, 15, 16, 17, 18, including influenza virus 19, 20, but very few of these studies are able to look at non-vaccinated populations and none have the scale and precision presented here. In the present study, we analyze a large set of general-population serum samples collected as residual serum from biochemistry and haematology labs in four hospitals in southern Vietnam, from 2009 to 2013. Using a zero-inflated mixture modeling approach, we allow for up to seven serological states. To account for the large sample size in our model selection procedure, we use the Bayesian Information Criterion, and to avoid inference of spurious serological states we set additional criteria to ensure that inferred titer groups are epidemiologically meaningful. We hypothesized that serological classification of influenza antibody titers would be non-binary and that age and lineage exposure (H1N1 only) would be associated with certain titer groups. We found that H1N1 antibody titer distributions are best classified into four titer groups, that H3N2 titers are best classified into three groups, and that censoring may have prevented a complete classification of H3N2 titers. Results A total of 20,152 sera were collected and tested for antibody concentrations by protein microarray. The samples represent patients attending hospitals in four cities – Ho Chi Minh City (n = 5788), Nha Trang (n = 5630), Buon Ma Thuot (n = 4144), and Hue (n = 4590) – in central and southern Vietnam. Titer distributions varied by age, as expected (Fig. 1), but did not vary by site (Figures S8 and S9). Figure 1 shows the age-stratified titer distributions to the HA1 component of the 2009 H1N1 virus and the most recently circulating H3N2 variants. If individuals truly represented seropositive (exposed) and seronegative (unexposed or naïve) categories, a mixture model of two components would classify samples into two subgroups.
Visually, this does not appear to be the case, as a broad range of titers was observed for both subtypes across all age groups. Thus, a mixture distribution fitting approach was employed to determine the appropriate number of components necessary to accurately describe the titer data. Figure 1 Antibody titer histograms for n = 20,152 individuals, plotted for all ages (top panels) and by age group (bottom four panels). Titers shown are to the HA1 components of the 2009 H1N1 pandemic influenza virus (left column) and to recently circulating H3N2 viruses (right column). The fractions of individuals with titers below the detection limit of 20 and above 1280 that were out of the plotting ranges are given next to the respective bar. Histograms were weighted to adjust for age and gender according to the Vietnam national housing census in 2009 for the four collection sites. Mixture distribution fits for up to six components, with an additional weight at a log-titer of one ("zero inflation"), are shown in Fig. 2 for H1N1 and Fig. S10 for H3N2. For both subtypes, it is clear that a binary classification of titer is not the most informative interpretation of the titer distribution, as neither the one- nor the two-component model (top two rows) captured the underlying structure of the dataset adequately. When stratifying the data by site (sample size >4,000), the Bayesian Information Criterion (BIC) selected four components as the best model for the H1N1 data (five for Hue, but the ΔBIC = 18 here was relatively small compared to other changes between nested models) and three components as the best model for H3N2. The five- and six-component models either overfit the data (according to the BIC) or included low-variance/low-weight components, which would correspond to an implausible population subgroup with a very specific antibody titer (Figs. 2 and 3). This was readily seen in the aggregate data, which is why the BIC-selected models of the by-site data are likely to be better explanations of the structure of these titer distributions. BIC improvement from n mixture components to n + 1 components is shown in Table 1 for 2009 H1N1 and Table S7 for H3N2. The means and variances were allowed to be free in these analyses, and the confidence intervals for the inferred parameters (Appendix Section 7) suggest that the structure of the distributions and the inferred values were robust across the four sites in our analysis. Figure 2 Titer histograms for 2009 H1N1, showing fit results for mixture models with different numbers of normal components (top to bottom; the label to the left of the y-axis is the number of mixture components) and grouped by collection sites. Histograms are weighted to adjust for age and gender according to the Vietnam national housing census in 2009 for each of the four collection sites. The blue lines in each panel are the normalized probability density functions of the component distributions, with darker colors used for increasing μ. The black lines show the full mixture distribution density, and the black dots are the estimated cumulative distribution of the mixture models at 7.0 (titer of 1280). The numbers in the upper right corner of each panel are the BIC scores of the model fits. The fractions of individuals with titers below the detection limit of 20 and above 1280 that were out of the plotting ranges are given next to their respective bars. Figure 3 Visualization of the model selection process for the 2009 H1N1 titer-distribution models from Fig. 2.
The y-axes show the fitted values of w_i (mixture weights), μ_i (means), and σ_i (standard deviations). Components' shades are ranked from lightest to darkest in order of increasing μ. In the top panel, the "0th component" represents the point mass w_0 placed at 20 for titers below the lower detection limit of 20. Note that in many cases for five or six components, the weights or standard deviation parameters are close to zero; in some cases, two of the inferred mean parameters are very close to each other. Table 1 Change in BIC values as the number of normal distributions in the mixture increases from one to six for 2009 H1N1, for the aggregated data as well as the individual collection sites. The three- and four-component mixtures indicate that these data can be used to develop a more informative serological classification for influenza. Using known results for this microarray assay 3, 20, 21, titers below 100 would be classified as negative or 'not previously exposed to this particular influenza strain'. For H1N1, this indicates that titers in the first component μ_1 = 29.8 (95% CI 29.1–30.5) and in the second component μ_2 = 75.0 (95% CI 73.4–76.7) would both correspond to seronegative individuals. Similarly, for H3N2, seronegative individuals would be represented by the first component μ_1 = 80.2 (95% CI 76.7–83.4). The second-highest titer component has mean μ_3 = 247.3 (95% CI 240.8–261.7) for H1N1 and μ_2 = 213.3 (95% CI 209.7–216.6) for H3N2. The highest titer component has mean μ_4 = 670.9 (95% CI 519.8–787.9) for H1N1 and μ_3 = 455.0 (95% CI 428.1–483.7) for H3N2. The natural interpretation of these high-titer subgroups – based on antibody titers measured as a function of time since infection 3 – is that they represent more recent infections. As it is known that the influenza antibody decay rate is fast enough to be observed in the first six to twelve months after an acute infection 22, 23, for H1N1 the highest titer subgroup may be an approximate designation for recently infected individuals, and the second highest titer subgroup may correspond to 'historically infected' individuals, i.e. individuals infected at some point in the non-recent past. For H1N1, these interpretations can be validated using post-pandemic sera. Assuming that the highest-titer component (w_4) of the mixture distribution corresponds to recently infected individuals and the second highest-titer component (w_3) corresponds to historic infection, one would expect to be able to use the weights w_3 and w_4 as proxies for the pandemic attack rate. Looking at samples collected from January 2010 to June 2010 – i.e. after the first wave of the 2009 influenza pandemic in Vietnam 24, 25 – the proportions of individuals that were recently infected with 2009 H1N1 were highest among younger individuals (0.14, 0.23, 0.08, and 0.16, for the 0.5–9, 10–19, 20–44, and ≥45 age groups, respectively), while the proportions of historically infected individuals were approximately equal among age groups (0.16, 0.22, 0.23, and 0.20; same age groups); see Table S6 for confidence intervals. The estimates of 14% of children aged 0.5–9 and 23% of children aged 10–19 falling into the recently infected category are likely to be slight underestimates 5, 6, 24 of the pandemic attack rate, as the post-pandemic sample here includes samples collected through June 2010. Nevertheless, these are within the expected ranges of the attack rate of the first year of the 2009 pandemic.
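The component-number selection step described above (fit mixtures with one to six components, keep the lowest BIC) can be sketched with standard tools. The snippet below is our simplified illustration on synthetic log-titers, deliberately omitting the study's zero inflation, censoring at the detection limits, and census weighting:

```python
# Minimal sketch of BIC-based selection of the number of mixture components.
# Synthetic log-titers stand in for the real data; zero inflation and
# censoring as used in the study are omitted here.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(2)
log_titers = np.concatenate([
    rng.normal(5.0, 0.5, 6000),   # low-titer subgroup
    rng.normal(7.8, 0.6, 3000),   # intermediate subgroup
    rng.normal(9.5, 0.7, 1000),   # high-titer subgroup
]).reshape(-1, 1)

bic = {}
for c in range(1, 7):
    model = GaussianMixture(n_components=c, n_init=5, random_state=0)
    bic[c] = model.fit(log_titers).bic(log_titers)

best = min(bic, key=bic.get)   # lowest BIC wins; expect 3 for this toy data
```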
For older individuals, pandemic attack rates are more difficult to validate, but it is important to remember that older individuals had measurable antibody titers to 2009 H1N1 prior to the arrival of the new pandemic virus 9. The pattern of attack rates observed in our samples is consistent with the two highest titer categories representing recent and historical infections with the H1N1 subtype. The epidemiological interpretation of the H3N2 mixture components cannot be validated at present. The best-fit mixture models for H3N2 had larger variances than those for H1N1. The titer ranges (±2σ, on the log scale) for the three H3N2 titer groups were 26–240, 114–456, and 68–3045. Thus, the discriminatory power between the last two components was not as good as for H1N1 (see Fig. S10). The large standard deviation of the last component for H3N2 may have been the result of the high fraction of right-censored samples with titers ≥1280. In addition, the proportions of individuals in the highest titer group (third component) are 0.46 (ages 0.5–9), 0.49 (ages 10–19), 0.43 (ages 20–44), and 0.34 (≥45). These are unlikely to represent recent attack rates of H3N2 epidemics and are more likely to represent historical infection, i.e. individuals who have been exposed to the currently circulating H3N2 strain at some point in the past. One possible explanation for these observations is the existence of an additional fourth peak for the H3N2 titers, describing individuals with titers above the upper limit of detection (≥1280). In our sample set, the proportions of individuals with H3 titers equal to 1280 were two to three times higher than those for H1N1 in the same age category. This is consistent with the existence of a fourth titer group with mean titer >1280, but we cannot confirm this with the current data, as the samples were not diluted past 1:1280. For both subtypes, the individual components in the mixture models did not correspond to any specific age groups, and stratifying the samples by age did not explain any particular component of the mixture (Figs. 4, S12, and S15). All age groups included individuals with high, medium, and low titer levels. H1N1 has a more complex lineage history than H3N2, with three different lineages circulating since 1918. This suggests that separating the samples into H1N1 lineage-exposure groups (pre-1957, post-1977, post-2009) may account for certain titer groups or categories. However, separating the samples by birth year – 0.5–50 years old and ≥60 years old, to distinguish individuals that could and could not have been infected by the 1918-lineage H1N1 – did not provide any evidence for this effect (Fig. S14 and Table S9). Figure 4 Titer histograms and fit results for mixture models with different numbers of components (label on the left is the number of mixture components), grouped by the age groups recommended by the CONCISE consortium, for 2009 H1N1 influenza. Histograms are weighted to adjust for age and gender according to the Vietnam national housing census in 2009. The numbers in the upper right corner of each panel are the fitted BIC scores of the respective model. For each panel, the blue lines are the normalized probability densities of the component distributions, with darker colors used for increasing μ. Black lines are the total mixture distribution density, and the black dots are the estimated probability weight of the mixture model for log-titers ≥7.0 (titer 1280).
The fractions of individuals with titers below the detection limit of 20 and above 1280 that were out of the plotting ranges are shown next to their respective bars. Discussion Using a large collection of serum samples and a continuous measurement of antibody titer, we were able to describe the natural distribution of antibody titers to the 2009 H1N1 and H3N2 subtypes of influenza virus. As there is almost no influenza vaccination in Vietnam, and as influenza in Vietnam is characterized by a combination of local persistence and annual/biannual outbreaks 26, 27, 28, characterization of the titer distribution in this context is a useful general approach for looking at the immune status of a population at quasi-equilibrium with an endemic infectious disease. With a mixture model approach, we were able to identify the presence of multiple exposure groups in the population according to their titers. Our interpretation of these multiple exposure groups – according to titers measured for confirmed cases 3, 21 and past measurements of the rate of antibody waning 22, 23 – is that they represent recently infected individuals, historically (i.e. non-recently) infected individuals, and naïve individuals. Note that for influenza, a naïve individual is one who has not been exposed to the currently circulating strain, which means that there will be naïve individuals in all age groups. This study used an atypical seroepidemiological design, as the samples were collected continuously, and not specifically in a post-epidemic or post-pandemic scenario. In addition, the serum samples were collected in the tropics, where continuous circulation of influenza virus is believed to occur 28, 29, 30, 31, 32, 33 and where populations are much less likely to be vaccinated for influenza (less than 0.8% annual coverage for Vietnam). Therefore, the present data set is the first to show the natural distribution of influenza antibody titers in a human population. One useful application of these results in future serological studies is to encourage, by default, the inclusion of multiple serological states in the data analysis phase, which may result in a more informative classification of antibody titer than a separation into seropositive and seronegative. The classification proposed here uses antibody levels as proxies for recency of infection and, if correct, should allow for a more informative reconstruction of the population's epidemic history. In general, knowing the IgG antibody waning rate is essential for interpreting the titers measured in serological cross-sections 34, 35, and using waning rates to estimate the time of past infection has already been attempted for some infectious diseases 36, 37, 38, 39, but not for influenza virus. Longitudinal follow-up studies that are able to provide accurate estimates of antibody waning rates are crucial for this type of analysis, but they are rare 3, 23, 40, 41. Two major limitations of serological classification systems will need to be better understood. First, a mixture distribution approach does not guarantee that individuals can be easily classified into one of several titer subgroups. With substantial overlap in some mixture components, individuals can have approximately equal probabilities of belonging to two or three different titer categories. In addition, individual variation will have a large effect on titer interpretations.
A high-titer sample could represent a recent infection, but individuals can maintain high titers for longer than the mean duration observed in clinical studies; this would typically, but not exclusively, be observed in children. Likewise, lower antibody titers (in the 200–250 range) could indicate historical past infection, a low response to a recent infection 42, or a recent but mild infection. With serological data alone, these scenarios cannot be distinguished. For subtype H3N2 specifically, low titer levels could indicate cross-reactions between antibodies generated against an older influenza variant and the recent H3N2 HA1 proteins spotted on the protein microarray. Second, a major challenge in influenza seroepidemiology is that it is difficult to take into account the effects of original antigenic sin 42, 43 or age-dependent seroconversion 40 (ADS). Age-dependent seroconversion is distinct from original antigenic sin in that ADS assumes that individuals of different ages seroconvert to different titer levels irrespective of the individual's infection history. In principle, the effect of ADS should be detectable for 2009 H1N1 infections in individuals younger than 50, as for these individuals an exposure to the 2009 virus would have been a first exposure. However, the mixture component means (μ i parameters) and the component weights (w i) are not separately identifiable in the mixture model. Thus, we cannot state that the 'recently infected' titer subgroups are comparable across age groups, as the inferential process will make the exact definition of recency different for the 10–19 age group than for the 20–44 age group. Even if we were to assume that the fourth mixture components should be comparable across age groups, the titer means denoted by μ 4 in Fig. 4 do differ but are within one standard deviation of one another. Thus, there is a lack of evidence for ADS in our titer data. As we only considered recent antigens in this analysis, the effects of original antigenic sin could not be investigated. The next critical step in this analysis will be using titer data from follow-up on confirmed cases 3 to determine whether the natural distribution of antibody titers conforms to the recent, historical, and naïve categories as presented here. If antibody waning rates can be measured with a high degree of precision, they may allow for a detailed description of individuals' recency of infection and possibly a reconstruction of past epidemic history in human populations. Large-scale serological studies like the one presented here are labor-intensive and slow to generate results. Nevertheless, the long follow-up and the large sample size will be worth it if seroepidemiology can be pushed forward to maximize the amount of biological information that can be extracted from population-level serology studies. Materials and Methods Residual serum samples were collected from four hospital laboratories in southern Vietnam: the Hospital for Tropical Diseases in Ho Chi Minh City (urban, densely populated), Khanh Hoa Provincial Hospital in Nha Trang city (small urban, central coast), Dak Lak Provincial Hospital in Buon Ma Thuot city (central highlands, rural), and Hue Central Hospital in Hue city (small urban, central coast). Samples were collected from July 2009 to December 2013 on a bimonthly basis; 200 samples were included in each collection, drawn from all age groups (neonates to elderly individuals in their 90s).
Samples were anonymized, delinked, and labeled with age, gender, originating hospital ward (HIV wards were excluded), and date of collection. Samples were collected from both inpatients and outpatients and are believed to represent the hospital-going population in their respective cities. This assumption is currently being tested and will continue to be tested as different antibody assays are performed on the sample set. Two early analyses (one unpublished and one published 44) suggest that, when looking at hospital presentation with hepatitis, the younger age range (<20) in the sample set may represent a sub-population more vulnerable to infectious disease exposure than the general population. The sample collection described here is part of a large ongoing study in serial seroepidemiology 1, 2, 34 aimed at describing the dynamics of influenza circulation in southern Vietnam. The study was approved by the Scientific and Ethical Committee of the Hospital for Tropical Diseases in Ho Chi Minh City and the Oxford Tropical Research Ethics Committee at the University of Oxford. The samples were tested for the presence of influenza antibodies using a protein-microarray (PA) method 45, at serial four-fold dilutions from 20 to 1280, to test for IgG antibody to the HA1 component of 16 different influenza viruses 1. Two-fold dilutions were used in some instances; see validation of this approach in Appendix Section 2. A sample of the international standard (IS) for testing antibody response to influenza A H1N1 Pandemic 2009 (H1-09) was included on every slide to correct for inter-laboratory, inter-technician, and inter-slide variations 45 (Appendix Section 1.2). Assay repeatability was assessed using a positive control and replicates of patient samples (Appendix Section 3). Titers were defined as the dilution at which samples yield a median response between the minimum and maximum luminescence values of 3000 and 65535. Titers of all human samples on each slide are normalized based on the IS titers of the reference antigen against its geometric mean (Table S2). In this analysis, titers to the 2009 H1N1 virus (A/California/6/2009) and recently circulating H3N2 viruses (geometric mean titer to A/Victoria/210/2009 and A/Victoria/361/2011) were analyzed. To describe the distribution of influenza antibody titers in the Vietnamese population, titer values were separated by site, adjusted to their province's age and gender distribution 46 (Appendix Section 4), and plotted as a simple weighted histogram (Fig. 1). A series of mixture models was used to fit this distribution, under the assumption that individual samples have one of several immune statuses, represented by the different components in the mixture model. Our hypothesis was that the sample population consists of different subpopulations with different antibody levels depending on their infection history and that each of these components can be represented by a single parametric distribution. Titers were log-transformed and assumed to come from a C-component mixture distribution with the corresponding likelihood: $$\mathcal{L}(\mathbf{x} \mid \boldsymbol{\theta}) = \prod_{i=1}^{n} \sum_{j=1}^{C} w_j \, f_j(x_i \mid \theta_j)$$ (1) where f j is the probability density function of a normal distribution with parameters θ j and w = (w 1, w 2, …, w C) is the vector of component weights in the mixture.
The log-likelihood was defined as: $$\ell(\mathbf{x} \mid \boldsymbol{\theta}) = \sum_{i=1}^{n} s_i \log\left[\sum_{j=1}^{C} w_j \, f_j(x_i \mid \theta_j)\right]$$ (2) in which the s i parameters are sampling corrections that adjust the sample age and sex distribution to the population's true demographic distribution; f j (x i | θ j), j = 1, 2, …, C, is the probability density of sample x i under the j-th component in the mixture; and C is the number of mixture components 47, 48. The microarray assay produces continuous log-titer results between 1.0 (titer of 20) and 7.0 (titer of 1280). To account for these detection limits, an extra probability weight w 0 was added at 20 to account for samples that had antibody concentrations at or below the detection limit of 20. This can be considered a zero-inflated mixture model, where titers of 20 are the "zeroes". Because of this added probability mass, we discretized the probability mass functions to make the entire distribution discrete; hence the distributions f j formally represent discretized versions of continuous density functions (Appendix Section 5). At the upper detection limit of 7.0, the mixture distribution was censored, under the assumption that individuals with titers of 7.0 represent a class of seropositive individuals whose true titer values would have been observed had the assays been diluted further. Censoring on the right and truncating on the left gave the best fit (according to BIC) among the four combinations. Truncating on the left means that the extra weight on the left-hand side of the probability density function (the portion below 20) was simply discarded when performing the fits, as "zero-inflation" on the left-hand side was used to fit the number of samples that had titers of 20 or below. Thus, the log-likelihood in (2) was modified as: $$\ell^{\mathrm{interval}}(\mathbf{x} \mid \boldsymbol{\theta}) = \sum_{i=1}^{n} s_i \log\left[w_0 + \sum_{j=1}^{C} w_j \, f_j(x_i \mid \theta_j)\right]$$ (3) Maximum likelihood estimation was carried out using the Nelder-Mead algorithm implemented in Java 8.0 (Apache Commons Math 3.3). Global optima and convergence were assessed by starting searches from different sets of initial conditions. Weibull, Gamma, and normal distributional forms were tested for the mixture components, and as there was little difference in the fits (Appendix Section 6), normal distributions were chosen for the analysis. Confidence intervals for means, variances, and weight parameters w j were computed using likelihood profiles 48. For multi-component mixture models, the likelihood ratio test between a specific model and its immediate predecessor (e.g. n components versus n−1 components) is not a valid statistical comparison. Since interchanging the components' identities gives the same mixture likelihood 48, the regularity conditions required for the likelihood ratio test statistic to have its usual χ² distribution do not hold. Thus, the most appropriate number of mixture components was chosen by (1) the Bayesian Information Criterion (BIC), which takes into account the number of samples, and (2) a qualitative inspection of the means and variances of the components to ensure that (a) multiple means did not overlap and (b) variances and weights were not so small as to be epidemiologically meaningless.
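To make the fitting procedure concrete, here is a minimal Python sketch of maximum likelihood estimation for a zero-inflated, right-censored normal mixture on log-titers, with BIC-based selection of the number of components. It illustrates the approach described above but is not the authors' Java implementation, and it simplifies the interval/discretized likelihood: the spike at the lower limit and the censored tail at 7.0 are handled directly rather than through a fully discretized density.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

def neg_log_lik(params, x, s, C):
    """Simplified analogue of Eq. (3): spike at log-titer <= 1.0 (titer <= 20)
    and right-censoring at log-titer >= 7.0 (titer >= 1280)."""
    mu = params[:C]
    sd = np.exp(params[C:2 * C])              # log-parameterized SDs > 0
    logits = np.append(params[2 * C:], 0.0)   # C+1 weights via softmax
    w = np.exp(logits - logits.max())
    w /= w.sum()
    w0, wj = w[0], w[1:]                      # w0 is the spike at <= 20
    mix = (wj * norm.pdf(x[:, None], mu, sd)).sum(axis=1)
    lik = np.where(x <= 1.0, w0 + mix, mix)   # samples at the lower limit
    tail = (wj * norm.sf(7.0, mu, sd)).sum()  # mass beyond the upper limit
    lik = np.where(x >= 7.0, tail, lik)
    return -np.sum(s * np.log(lik + 1e-300))  # s: demographic sampling weights

def fit_mixture(x, s, C, n_starts=20, seed=1):
    """Nelder-Mead from multiple starts (to probe for the global optimum);
    returns the best fit and its BIC. With sampling weights, an effective
    sample size could replace len(x) in the BIC penalty."""
    rng = np.random.default_rng(seed)
    best = None
    for _ in range(n_starts):
        p0 = np.concatenate([np.sort(rng.uniform(1.0, 7.0, C)),
                             np.log(rng.uniform(0.3, 1.5, C)),
                             np.zeros(C)])
        res = minimize(neg_log_lik, p0, args=(x, s, C), method="Nelder-Mead",
                       options={"maxiter": 50_000, "fatol": 1e-8})
        if best is None or res.fun < best.fun:
            best = res
    k = 3 * C   # free parameters: C means, C SDs, C free weights
    bic = k * np.log(len(x)) + 2.0 * best.fun
    return best, bic

# Usage: compare models with 2-4 components and pick the lowest BIC, e.g.
#   for C in (2, 3, 4):
#       _, bic = fit_mixture(log_titers, weights, C)
```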
Data on influenza vaccine imports in Vietnam were obtained from Vietnam’s Customs and Imports Department via IMS Health Vietnam. Annual influenza vaccine imports for 2014–2016 are sufficient to cover approximately 0.8% of the Vietnamese population. As wealth and access to medicines are growing in Vietnam, the coverage level for 2009 to 2013 was likely to be lower than 0.8%. Data Availability Statement Data are available from the authors upon request.
The amount of influenza-specific antibodies present in an individual's blood can indicate not only if they experienced the flu, but potentially when—a finding that could improve disease monitoring in the tropics, where flu season is unending. In the largest study of its kind, an international team, led by researchers from the Oxford University Clinical Research Unit in Ho Chi Minh City, Vietnam, the Erasmus Medical Centre in Rotterdam, Netherlands, and Penn State University, identified antibody concentrations that correspond to recent and past exposure to the flu strain H1N1—the strain involved in the 2009 flu pandemic. A paper describing the research is published in a July issue of the journal Scientific Reports. "Disease outbreaks and epidemics are often monitored by counting individuals who show symptoms of infection, but this only captures people who are sick enough to be identified," said Maciej Boni, associate professor of biology at Penn State and a lead author of the paper. "With blood samples, you can capture everyone that ever was infected because individuals are not able to hide their antibody signals." Antibodies defend against viral attack, and their numbers spike in the presence of an infection like influenza. Approximately one month after infection, the number of flu-specific antibodies in the blood begins to decrease, but some antibodies continue to circulate long after the virus has cleared. In the past, scientists have measured the concentration of antibodies remaining to identify whether an individual has been exposed to the virus, but the results of these tests have typically been limited to describing the presence or absence of past infection. "In this study we showed that there is a lot more information in measurements of antibody concentration than just presence or absence," said Boni. "Our results show that antibody concentration should be able to provide information about the timing of past influenza infection." This information is especially valuable in tropical climates. "In temperate regions like the United States, we might collect blood samples when the flu season is over to see what percentage of people were infected during that flu season," Boni explains. "But in the tropics there is no flu season—it may be constantly circulating or it could come in waves. If all you measure is the presence or absence of antibodies, you can't determine when those individuals were infected." Colorized micrograph of particles of the influenza strain H1N1, which was involved in the 2009 flu pandemic. Credit: National Institute of Allergy and Infectious Diseases, National Institutes of Health The research team analyzed over 20,000 blood samples from four hospitals in southern Vietnam, taken every two months between 2009 and 2013. "This is the largest study of its kind, and custom statistical methods needed to be developed for this analysis," said Nguyen Thi Duy Nhat, a graduate student at the Oxford University Clinical Research Unit at the time of the study and first author of the paper. This immense undertaking will allow the team to map out the H1N1 flu strain's dynamics in the tropics in the next phase of their research. "The 2009 influenza pandemic taught us the importance of understanding the history of exposure in the community as a factor of a pandemic's impact," said Marion Koopmans, head of the Department of Viroscience at the Erasmus Medical Centre and a lead author of the study.
"Here, we introduce a novel approach that measures a population's exposure history to currently circulating viruses. This work will help us assess who is most at risk during a new influenza outbreak." The research team defined four categories of H1N1-specific antibody concentrations. The highest concentrations indicate exposure to H1N1 within the last six months, the second highest concentrations indicate exposure greater than six months prior, and the lowest two categories of concentrations indicate no previous exposure to the virus. Use of these categories could allow public health officials in other tropical locations to determine infection rates of H1N1 with systematic sampling, for example, by screening a subset of the population every January to determine the previous year's infection rate. The researchers used a protein microarray—a high-throughput large-scale test that measures interactions of large numbers of proteins in parallel—to measure antibody concentrations. Developed in the Netherlands, this relatively new technique allows precise antibody measurements with very small volumes of blood. "This protein microarray has high reproducibility and can provide specificity to 16 different influenza strains," said Erwin de Bruin, senior laboratory technician at Erasmus Medical Centre and an author of the study. "The small volume of blood required provides a simpler way to perform large epidemiological studies." "This microarray, and the additional information about time of infection from antibody concentrations, could change how we monitor disease in the tropics," adds Boni. "Currently, public health systems monitor antibodies after an outbreak or for the purpose of research, but most of the monitoring effort focuses on symptoms through hospital-based surveillance. By next decade, we may be able to perform regular surveillance of blood, which would give us a better picture of the diseases circulating through a population. This kind of surveillance is especially important in tropical countries where a lot of novel viruses emerge."
10.1038/s41598-017-06177-0
Physics
Researchers develop a two-photon microscope that provides unprecedented brain-imaging ability
Che-Hang Yu et al, Diesel2p mesoscope with dual independent scan engines for flexible capture of dynamics in distributed neural circuitry, Nature Communications (2021). DOI: 10.1038/s41467-021-26736-4 Journal information: Nature Communications
http://dx.doi.org/10.1038/s41467-021-26736-4
https://phys.org/news/2021-12-two-photon-microscope-unprecedented-brain-imaging-ability.html
Abstract Imaging the activity of neurons that are widely distributed across brain regions deep in scattering tissue at high speed remains challenging. Here, we introduce an open-source system with Dual Independent Enhanced Scan Engines for Large field-of-view Two-Photon imaging (Diesel2p). Combining optical design, adaptive optics, and temporal multiplexing, the system offers subcellular resolution over a large field-of-view of ~25 mm 2 , encompassing distances up to 7 mm, with independent scan engines. We demonstrate the flexibility and various use cases of this system for calcium imaging of neurons in the living brain. Introduction Two-photon microscopy 1 has enabled subcellular resolution functional imaging of neural activity deep in scattering tissue, including mammalian brains 2 . However, conventional microscopes provide subcellular resolution over only small fields-of-view (FOVs), ~Ø0.5 mm. This limitation precludes measurements of neural activity distributed across cortical areas that are millimeters apart (Fig. 1a ). Obtaining subcellular resolution over a large FOV involves scaling up the dimensions of the objective lens and other optics, due to the Smith-Helmholtz invariant, also known as the optical invariant 3 , 4 , 5 , 6 . However, that is only half of the solution. Since high light intensities are required for efficient multiphoton excitation, two-photon imaging is typically implemented as a point-scanning approach, where an excitation laser beam is scanned over the tissue. Thus, each voxel sampled entails a time cost, and the scan engine design constrains the temporal resolution 7 . Temporal multiplexing of simultaneously scanned beams can increase throughput 8 , and these can have either a fixed configuration 9 , 10 , 11 , or can be reconfigured during the experiment axially 12 , 13 , 14 or axially and laterally 15 , 16 . However, these simultaneously scanned, or “yoked”, multi-beam configurations strongly constrain sampling, because they preclude varying the scan parameters among the multiplexed beams. Optimal scan parameters (e.g., frame rate, scan region size) vary across distributed neural circuitry and experimental requirements, but yoked scanning requires using the same scan parameters for all beams. Therefore, a system featuring both a large imaging volume and independent multi-region imaging is needed, and can enable new experiments. Fig. 1: Diesel2p system features, layout, and performance benchmarks. a Functional cortical areas in the mouse brain are widely distributed. A field-of-view (FOV) of Ø5 mm can encompass multiple brain areas, and independent scan engines can capture ongoing neural activity in multiple cortical areas simultaneously with optimized scan parameters. b Two imaging beams are temporally multiplexed and independently positioned in XY using two sets of resonant-galvo-galvo scan engines. First, overall power is attenuated using a half-wave plate (λ/2 WP) and a polarizing beam splitting cube (PBS). A 2X beam expander (1:2 BE) enlarges the beam for the clear aperture of the deformable mirrors adaptive optics (AO). A custom single-prism pre-chirper offsets system dispersion to maintain transform-limited pulses at the focal plane. A second λ/2 WP and PBS pair divides the beam into two pathways. Pathway 2 (s-polarization in orange) passes to a delay arm where it travels 1.87 m further than Pathway 1 using mirrors, thus delaying it by 6.25 ns relative to Pathway 1 (p-polarization in blue). 
Both pathways then proceed to deformable mirrors for adjusting the focal plane and correcting optical aberrations before being directed to resonant-galvo-galvo scan engines. All scanning mirrors are optically relayed to each other. Each pathway then passes through a scan lens before being combined with a beam recombination relay. A tube lens and an infrared-reflective dichroic mirror relay the two multiplexed beams onto the back aperture of the objective. Fluorescence (green) is directed to a photomultiplier tube (PMT) via an assembly of collection lenses (CL1, CL2, CL3). c An oblique view of a 3-D model of the system and its footprint. d A top view with the arrangement of the two scan engines highlighted. e Plot of the modeled Strehl ratio across the scan area indicates diffraction-limited performance (Strehl ratio > 0.8) across ~25 mm², which is ~28% larger than the area of the dashed 5-mm-diameter circle (~19.6 mm²). f Multiphoton excitation PSF measurements were made with subresolution beads (0.2 µm) in agar under a coverslip at three depths and four locations, for both of the AO-equipped, temporally multiplexed beam pathways. FWHMs of Gaussian fits to the radial and axial bead intensity profiles are calculated and plotted. Eight beads (n = 8) were measured at each location, except on axis at a depth of 500 µm, where seven beads (n = 7) were measured. Data are presented as mean values ± S.D. g XY images of a calibrated, structured fluorescent sample with a periodic line pattern (5 lines per millimeter) in two orientations, acquired over the full scan range of the system. Each image shows 25 lines on the top edge (left image) and on the left edge (right image), respectively, verifying a 5 × 5 mm FOV. h The XZ image along the orange dashed line and the YZ image along the green dashed line in (g) are also plotted. The imaged pattern is colinear with the straight lines, indicating a flat field in both the x and y directions across the FOV. Full size image In this work, we develop a system with a large FOV, subcellular resolution, and dual independent scan engines for highly flexible, asymmetric multi-point sampling of distributed neural circuitry. Here, we present a custom two-photon system with dual scan engines that can operate completely independently. Each arm has optical access to the same large imaging volume (~25 mm² FOV) over which subcellular resolution is maintained in scattering tissue to typical 2-photon imaging depths. These two arms use adaptive optics (AO) for wavefront shaping, temporal multiplexing for simultaneous imaging, and polarization optics for beam recombination. Due to the independence of the arms and the use of polarization optics for beam combining, the input lasers can come from the same source or different sources (multi-wavelength). Moreover, each arm can use multiple sources simultaneously, for example, in combined imaging and photoactivation experiments. We refer to the system as the Diesel2p (Dual Independent Enhanced Scan Engines, Large field-of-view Two-Photon). Results System design and performance benchmarks The Diesel2p has two major design features. First, it has a ~25 mm² FOV to encompass multiple cortical areas and provides subcellular resolution throughout (Fig. 1a). Second, the Diesel2p can perform simultaneous two-region imaging using two scan engine arms. In contrast to prior work 12, 15, these two arms are completely independent.
They can each scan any region and be configured with different imaging parameters (e.g., pixel dwell time, scan size), including random-access scanning (Fig. 1b–d). To achieve these two features, several scan engine components were custom designed and manufactured: the optical relays, the scan lens, the tube lens, and the objective. The full optical prescriptions are provided in this report (Supplementary Figs. 1–5, ZEMAX models). The system was optimized as a whole, rather than optimizing components individually, to minimize the aberrations across scan angles up to ±5 degrees at the objective back aperture, primarily for the excitation windows of 910 ± 10 nm and 1050 ± 10 nm. The optics use an infinity-corrected objective design to facilitate modifications and modularity. Based on the optical design model, the Strehl ratio exceeded 0.8 (consistent with a diffraction-limited design) over an area only slightly smaller than a 5 × 5 mm² (25 mm²) square, which is ~28% larger than the area of a 5-mm-diameter circle (Fig. 1e). The Diesel2p system uses two independent scan engines to access two areas simultaneously (Fig. 1a, b, d), as opposed to one beam jumping back and forth between two areas sequentially. In the sequential imaging regime, information is missed both during the scanning of the other area and during the time of jumping. This latter dead time is a larger fraction of the duty cycle as the frame rate increases (Supplementary Fig. 6). The laser beam, after passing through the dispersion compensator 17, is split into two pathways by a polarization beam splitter (Fig. 1b). The temporal multiplexing is set by delaying one laser beam's pulses relative to the other (for an 80 MHz system as used here, the delay is 6.25 ns). Beams are guided into two independent scan engines. Each scan engine consists of an x-resonant mirror, an x-galvo mirror, and a y-galvo mirror in series, each at conjugate planes connected by custom afocal relays. The x-resonant mirror provides rapid and length-variable x-line scanning, up to 1.5 mm. The x- and y-galvo mirrors provide linear transverse scanning across the full FOV. Therefore, each arm of the scan engine can arbitrarily position the imaging location within the full FOV and scan with parameters that are completely independent of the other arm. Each pathway is also equipped with a deformable-mirror AO for both rapidly adjusting the focal plane axially (Supplementary Fig. 7a and Supplementary Video 1) and correcting optical aberrations (Supplementary Fig. 7b, c and Supplementary Video 2). Next, we measured the resolution of the Diesel2p system by taking z-stacks of 0.2-µm fluorescent beads at various positions and depths. The lateral and axial resolutions were estimated from the full-width-at-half-maximum (FWHM) of measured intensity profiles. For beads at each XYZ location in the FOV, measurements were made with the deformable mirror optimized to act as an AO element. Overall, the lateral FWHM was ~1 µm and the axial FWHM was ~8 µm across the 5-mm FOV and up to 500-µm imaging depth (Fig. 1f). This indicates a space-bandwidth product of ~(25 mm²/1 µm²) = 25 × 10⁶, or 25 megapixels. The use of the deformable mirror as an AO element reduced the resolution variation across the measured volume and improved the axial resolution by ~2 µm (Supplementary Fig. 8). The AO also enabled imaging of neural activity over 3.5 mm from the center of the FOV, equivalent to a 7-mm diameter along the diagonal axis (Supplementary Fig. 7b).
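As a quick worked check of the temporal-multiplexing arithmetic above (a sketch using only numbers stated in the text), the half-period pulse offset and the corresponding free-space delay-arm length follow directly from the 80 MHz repetition rate:

```python
# Worked check of the temporal-multiplexing numbers quoted above.
C_LIGHT = 299_792_458.0               # speed of light (m/s)

rep_rate = 80e6                       # laser repetition rate (Hz)
pulse_period = 1.0 / rep_rate         # 12.5 ns between pulses
delay = pulse_period / 2.0            # interleave at half a period -> 6.25 ns
arm_length = C_LIGHT * delay          # free-space delay-arm path length

print(f"delay          = {delay * 1e9:.2f} ns")   # 6.25 ns
print(f"delay-arm path = {arm_length:.3f} m")     # ~1.874 m (cf. the ~1.87 m arm)
print(f"combined rate  = {2 * rep_rate / 1e6:.0f} MHz pulse stream")  # 160 MHz
```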
These results show that the Diesel2p system maintains a nearly constant subcellular resolution throughout the 25 mm² FOV and allows measurement of neural activity across this area. For efficient multiphoton excitation, the characteristics of the ultrafast pulses must be maintained over the full FOV. We measured pulse characteristics at the focal plane using the frequency-resolved optical gating (FROG) technique 18. Throughout the FOV, the pulse width is maintained at ~110 fs, and the pulse front tilt and the spatial chirp remain low (Supplementary Fig. 9). Together with the bead measurements, these results show that the Diesel2p system has nearly consistent resolution and spatiotemporal pulse characteristics across the entire FOV. We next verified the imaging FOV by imaging a structured fluorescent sample with a repeating pattern of 5 lines per mm (57–905, Edmund Optics). Images contain 25 lines along both the x and y directions, indicating a 5-mm length on each axis of the FOV (Fig. 1g). The result demonstrates that the Diesel2p system has a FOV very close to a 5 × 5 mm² field, consistent with the nominal model performance (Fig. 1e). By z-scanning through this sample and rendering the x-z and y-z cross-sections, we found that the thin fluorescence pattern is nearly co-linear with a straight line, indicative of a field curvature <30 µm over 5000 µm of FOV (Fig. 1h). This result also demonstrates that the Diesel2p system has the flattest field among the reported mesoscopes with a FOV of 5 mm diameter or beyond 3, 19. For the application of in vivo neuronal imaging, this extent of field curvature is negligible, and field curvature calibration and correction are not necessary. Together, these benchmarks show that the system exhibits subcellular resolution throughout a flat 25 mm² FOV for both imaging pathways. Two-photon imaging in vivo After benchmarking the optical performance of the imaging system, we performed a series of in vivo imaging experiments with neurons expressing the genetically encoded calcium indicator GCaMP6s 20. The brain tissue under a 5-mm-diameter cranial window was positioned within the 5 × 5 mm² FOV, and subcellular detail was resolved in individual neurons across the FOV (Fig. 2a). To further verify the Diesel2p's subcellular resolving power, we also performed in vivo imaging of dendritic spines (in a Thy1-GFP mouse), which were resolved even >250 µm deep (Fig. 2b). This result demonstrates subcellular resolution in scattering, living brain. Next, we positioned the pathways to image two adjacent stripes of cortex simultaneously and set each pathway to scan a large FOV of 1.5 × 5 mm² (1024 × 4096 pixels) in an awake mouse (Fig. 2c and Supplementary Video 3). In this data set, we imaged a total area of 15 mm² with a pixel resolution of ~1.5 × 1.2 µm² (undersampling the resolution for the sake of increased frame rate) and an imaging rate of 3.84 frames/s, resulting in a pixel throughput of 32.3 megapixels/s over 15 mm². Calcium signals from 5,874 neurons were detected from these two stripes (Fig. 2d). The raw calcium signals had a signal-to-noise ratio of 7.9 ± 2.5 (mean ± standard deviation) (Fig. 2e), and this supported robust spike inference (Fig. 2f), which we used to calculate the correlation matrix and plot how correlations vary as a function of distance between neuron pairs (Fig. 2g, h). The correlations are relatively high at distances of less than 40 μm and decrease slowly beyond 40 μm, a relationship the measurements support out to 4000 μm.
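A minimal Python sketch of the correlation-versus-distance analysis just described (as in Fig. 2g, h) is shown below. It is illustrative rather than the authors' pipeline: `spikes` stands in for the (n_neurons, n_frames) inferred spike trains and `xy` for the (n_neurons, 2) cell centroid positions in micrometers that would come from segmentation and spike inference.

```python
# Sketch: pairwise Pearson correlations of spike trains, binned by distance.
import numpy as np

def correlation_vs_distance(spikes, xy, bin_edges):
    r = np.corrcoef(spikes)                        # (n, n) correlation matrix
    d = np.linalg.norm(xy[:, None, :] - xy[None, :, :], axis=-1)
    iu = np.triu_indices(len(xy), k=1)             # unique neuron pairs only
    r_pairs, d_pairs = r[iu], d[iu]
    means, sds = [], []
    for lo, hi in zip(bin_edges[:-1], bin_edges[1:]):
        sel = (d_pairs >= lo) & (d_pairs < hi)
        means.append(r_pairs[sel].mean() if sel.any() else np.nan)
        sds.append(r_pairs[sel].std() if sel.any() else np.nan)
    return np.array(means), np.array(sds)

# Usage, e.g. 100-um bins out to 4000 um:
#   means, sds = correlation_vs_distance(spikes, xy, np.arange(0, 4100, 100))
```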
This data set demonstrates the ability of the system to measure neuronal activity with high fidelity over the large FOV. Fig. 2: Diesel2p provides subcellular resolution two-photon imaging of neural activity across its FOV. a Diesel2p's galvo-galvo raster scanning records neuronal activity at a depth of 345 µm through a cranial window with a diameter of 5 mm (dashed line) on a transgenic mouse expressing the genetically encoded fluorescent calcium indicator GCaMP6s in excitatory neurons. Three zoom-in views from different sub-regions (colored squares) show the preservation of the subcellular resolution. b Dendritic spines were resolved in vivo at different depths in a transgenic mouse expressing Thy1-GFP. c Neuronal activity in two strips of cortex is recorded simultaneously using Diesel2p's two pathways (blue and orange rectangles), covering a combined area of 3 mm × 5 mm at a frame rate of 3.85 frames/s. The imaging depth is 371 µm. Each image is 1024 × 4096 pixels. d In this data set, calcium signals were recorded from 5,874 active neurons. e The histogram of the transients' signal-to-noise ratio in d. f Ca²⁺ signals in d were used to infer spike times. g Spikes in f were used to measure >17 million cross-correlations per experiment. h The correlations in g are plotted as a function of distance between neurons ranging from 0 μm to 4000 μm. Data are presented as mean values ± S.D. Full size image Flexible measurement with dual independent scan engines To demonstrate the flexibility of the Diesel2p system, we performed four test experiments. First, we imaged two regions that were 4.36 mm apart, which is equivalent to the distance between the primary visual cortex and the motor cortex. Moreover, we configured the imaging fields, frame acquisition speeds, and pixel numbers independently for the two regions (Fig. 3a and Supplementary Video 4). Non-multiplexed imaging with these parameters would reduce the acquisition rate and involve >20% dead time (Supplementary Fig. 6), and yoked multiplexing would require a compromise in imaging parameters so that both regions would have the same size and scan rate. Thus, the Diesel2p system enables new classes of measurements. Second, to demonstrate that the two pathways can be overlapped, we set Pathway 1 to image a subregion of the region imaged by Pathway 2 (Fig. 3b and Supplementary Video 5). Third, we increased the number of imaging regions within a single imaging session to four. Two sub-regions are imaged simultaneously, then they are both repositioned (this entails time for repositioning the beams, or a "jump" time). In this way, a total of four sub-areas that differed in XY locations and Z depths were imaged in a single imaging session (Fig. 3c and Supplementary Video 6). Fourth, we used random-access scanning of cell bodies in conjunction with large-FOV imaging. We configured Pathway 1 to raster scan a 2.25 mm² area, while Pathway 2 executed a random-access scan of 12 cell bodies sequentially (Fig. 3d and Supplementary Video 7). Finally, we characterized the crosstalk between the two imaging pathways and found it to be minimal (Supplementary Fig. 10). Together, these results, enabled by both the large FOV and the dual independent multiplexed scan pathways, demonstrate the flexibility of the Diesel2p system to enable new measurements and experiments. Fig. 3: Flexible measurement with dual independent scan engines.
a Neuronal activity at two distant regions (4.36 mm apart) is imaged simultaneously with independent imaging parameters. The blue and orange boxes indicate the imaging sizes and positions within the full FOV. Expanded regions are shown at right. Calcium signals from 500 neurons imaged in Pathway 1 were used to infer spikes. Simultaneously, Pathway 2 imaged calcium signals at 58 frames/s, 4.36 mm away. b Neuronal activity from two overlapping regions is imaged simultaneously with different imaging parameters. Over 400 neurons were imaged in Pathway 2 while neural activity was imaged at 60 frames/s in Pathway 1. c Neuronal activity from four separate regions is imaged with the two pathways independently positioned and then repositioned within the full FOV. By serially offsetting the galvo scanners, Pathway 1 accesses the blue and green regions, and Pathway 2 accesses the orange and magenta regions. By changing the curvature of the AO, Pathways 1 and 2 also image at different depths. Calcium signals and inferred spike trains from neurons in the Pathway 1 data (Region 1 and Region 2 combined) are shown. Example calcium signals are shown for neurons from Region 3 (orange) and Region 4 (magenta) scanned by Pathway 2. d A combination of raster scanning and random-access scanning is configured on Pathways 1 and 2 for neuronal imaging. While Pathway 1 performs raster scanning, Pathway 2 performs random-access scanning of 12 cell bodies. Calcium signals and the inferred spike trains are shown for neurons imaged by Pathway 1. Example calcium signal traces are shown for neurons imaged by Pathway 2 (orange). Full size image Discussion In summary, we present a newly developed imaging system to enable flexible, simultaneous, multi-region multiphoton excitation in scattering tissue. The Diesel2p system has a nearly constant subcellular resolution over a ~25 mm² FOV with very low field curvature, linear galvo access to the full FOV, and a resonant scan size of 1.5 mm. These optics and scan specifications are combined with a layout of two independent scan engines, each with deformable mirrors for fast z-focus and aberration correction. The temporally multiplexed imaging pathways can record neural activity in two arbitrarily selected portions of the imaging volume simultaneously. This new system is also designed to facilitate experiments with behaving animals. The objective lens is air immersion, so no water interface is required, and it has an 8-mm-long working distance to accommodate a variety of headplate designs and other instrumentation that needs to be close to the imaged area (e.g., electrode arrays). In addition, the objective can rotate 360 degrees to work with non-horizontal imaging surfaces. This rotation facilitates imaging in some preparations and could be further improved if the axis of rotation were along the focal plane of the objective. The advantage of an air immersion objective is evident when the objective is fixed at an angle for imaging the brain of a behaving animal from the side. These ergonomic features can facilitate behavior experiments that require flexibility in animal posture, further increasing the versatility of the Diesel2p system. The Diesel2p system uses an infinity-corrected objective design, making it compatible with a range of extensions to multiphoton imaging, including Bessel-beam scanning 21 and reverberation microscopy 14, to enhance the volumetric imaging capability.
It can also be combined with two-photon optogenetics approaches to perform simultaneous multiphoton neuronal imaging and functional perturbation 22. Wavelength multiplexing can be implemented to add another pulsed laser (e.g. a 1040 nm pulsed laser) to the Diesel2p system, enabling dual-excitation imaging of two different molecules simultaneously (Supplementary Fig. 11). The Diesel2p system is achromatic at 910 nm and 1040 nm, and thus can work simultaneously with these two wavelengths with no need for realignment, such as changing the spacing between optics. The advantage of the Diesel2p's simultaneous imaging, with zero jumping time, can be critical when imaging very fast dynamics such as neurotransmitter reporters 23 and voltage indicators 24 expressed at distant brain areas. The Diesel2p system enables new measurements of neural activity, is compatible with a range of variants of multiphoton imaging, and is a fully documented and open optical design that can be extended to support studies of neural interactions across brain areas 25. Methods Optical design and simulations The entire system, including the relay, scan, tube, and objective lens subsystems, was modeled in OpticStudio (Zemax, LLC). The relay, scan, tube, and objective lens subsystems were first designed, modeled, and optimized individually. Then, the system was optimized as a whole to further minimize additive aberrations between subsystems. The system was optimized for two wavelengths, 910 ± 10 nm and 1050 ± 10 nm, for future dual-color imaging. All lenses were custom made by Rocky Mountain Instrument Inc., using tolerances of radii ±0.1% and center thickness ±0.1 mm, and coated with a broadband anti-reflective coating (BBAR), Ravg < 1.5% at 475–1100 nm. The relay lenses between the x-resonant scanner and the x-galvo mirror were constructed from Thorlabs Inc. parts (LSM254-1050.ZBB, AC508-250-B-ML). The effective focal lengths of the custom scan lens, the tube lens, and the objective are 61 mm, 243 mm, and 30 mm, respectively. Complete lens data for the Diesel2p system are given in Supplementary Figs. 1–5. Assembly The lens sub-assemblies were manufactured, aligned, and assembled in the factory of Rocky Mountain Instrument Co. There are threaded connectors between the optical relays connecting the two galvo scanners, and between the tube lens and the objective lens. Together with the correction collar on the objective, these allow adjustment of the axial separation between subassemblies. Galvo and resonant scanners were mounted on an XY translator (Thorlabs, CXY2), and this was attached to a 60 mm cage cube (Thorlabs, LC6W), which also bridged the subassemblies. During assembly, the afocal spaces at conjugate planes were checked for collimation as designed. Animals All procedures involving living animals were carried out in accordance with the guidelines and regulations of the US Department of Health and Human Services and approved by the Institutional Animal Care and Use Committee at the University of California, Santa Barbara. We used GCaMP6s and Thy1-GFP Line O (Jackson Labs stock #007919) transgenic mice in this study. GCaMP6s transgenic mice were generated by triple crossing of TITL-GCaMP6s mice, Emx1-Cre mice (Jackson Labs stock #005628), and ROSA:LNL:tTA mice (Jackson Labs stock #011008) 26. TITL-GCaMP6s mice were kindly provided by the Allen Institute. Mice were housed in a 12-h reversed dark/light cycle room.
The temperature set-point is 74–76 °F; the low-temperature alarm is 70 °F; the high-temperature alarm is 78 °F. The relative humidity is ~45% (range 30–70%). Mice were deeply anesthetized using isoflurane (1.5–2%) augmented with acepromazine (2 mg/kg body weight) during craniotomy surgery. Carprofen (5 mg/kg body weight) was administered prior to surgery, as well as after surgery for 3 consecutive days. A 5-mm-diameter cranial window was implanted after removing the scalp overlying the right visual cortex. In vivo two-photon imaging All imaging was performed on the custom Diesel2p system. The instrumentation (see Diesel2p instrumentation below) and image acquisition were controlled by ScanImage from Vidrio Technologies Inc. Animals were awake during calcium imaging. The imaging was performed with <100 mW out of the front of the objective. With typical imaging parameters (512 × 512 pixels at 30 frames/s, 0.5 mm imaging region), no damage was observed from the surface of the dura to a depth of 500 µm. Assessment of damage due to laser intensity was based on visual morphological changes to the appearance of the dura mater and/or continuously bright cell bodies. Pulses per pixel in the resonant scanning regime When raster-scanning with a resonant mirror, the number of pulses per pixel along a line scan varies due to the nonlinear scanning speed of the resonant scanner. The pulses per pixel is a function of the fill fraction (FF), the number of pixels per line (N), the resonant frequency of the resonant scanner (Freq), the repetition rate of the laser (Rep), and the position of the pixel on the resonant axis (n). The fill fraction is defined as the ratio between the length of the active acquisition of a line and the total length of the line. Equation (1) gives the pulses per pixel: $$\mathrm{Pulses\ per\ pixel}(n, FF, N, Freq, Rep) = \frac{2\,FF}{N}\cdot\frac{Rep}{2\pi\,Freq\,\cos\left\{\sin^{-1}\left[-FF+\frac{2\,FF}{N-1}(n-1)\right]\right\}}$$ (1) Curves for three commonly used sets of imaging parameters are plotted in Supplementary Fig. 12. Image analysis for neuronal calcium signals Ca²⁺ signals were analyzed using custom software 27 in MATLAB (Mathworks). Neurons were segmented and fluorescence time courses were extracted from imaging stacks using Suite2p 28. Signals from neurons are a sum of neuronal and neuropil components. The neuropil component was isolated using the signal from an annulus region around each neuron and then subtracted from the neuronal signal to provide a higher-fidelity report of neuronal fluorescence dynamics. Subsequently, spike inference was performed on these neuropil-subtracted traces using a Markov chain Monte Carlo method 29. The parameters for the MCMC spike inference were p = 2 (second-order autoregressive model), b = 200 (initial burn-in), 400 rounds of simulation, and the frame rate of each data set. We computed the Pearson correlation of inferred spike trains between neurons using the MATLAB built-in function "corr". Excitation point spread function measurements and simulations The measurement and analysis procedure were described in detail in our previous publication 15. To evaluate the excitation point spread function (PSF), sub-micrometer beads were imaged. Sub-micrometer fluorescent beads (0.2 µm, Invitrogen F-8811) were embedded in a thick (~1.2 mm) 0.75% agarose gel.
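Returning briefly to the pulses-per-pixel formula above, the following Python sketch is a direct transcription of Eq. (1); the fill fraction, resonant frequency, and pixel count in the example are assumed typical values, not measurements from this paper.

```python
# Transcription of the pulses-per-pixel formula for sinusoidal resonant scanning.
import numpy as np

def pulses_per_pixel(n, ff, n_pixels, f_res, f_rep):
    """Laser pulses landing in pixel n (1-indexed) along a resonant line.

    ff: fill fraction; n_pixels: pixels per line; f_res: resonant scanner
    frequency (Hz); f_rep: laser repetition rate (Hz).
    """
    phase = np.arcsin(-ff + 2.0 * ff * (n - 1) / (n_pixels - 1))
    return (2.0 * ff / n_pixels) * f_rep / (2.0 * np.pi * f_res * np.cos(phase))

# Assumed example settings: 512 pixels/line, ff = 0.9, ~8 kHz resonant
# scanner, 80 MHz laser. Dwell (and pulse count) is lowest at the line
# center, where the mirror moves fastest, and highest near the edges.
n = np.arange(1, 513)
ppp = pulses_per_pixel(n, ff=0.9, n_pixels=512, f_res=8e3, f_rep=80e6)
print(ppp.min(), ppp.max())
```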
30 µm z-stacks were acquired, each centered at one of three depths (50 µm, 250 µm, 500 µm). The stage was moved axially in 0.5 µm increments (∆stage). At each focal plane, 30 frames were acquired and averaged to yield a high signal-to-noise image. Due to the difference between the refractive index of the objective immersion medium (air) and the specimen medium (water), the actual focal position within the specimen moved by an amount ∆focus = 1.38 × ∆stage 30. The factor 1.38 was determined in Zemax and slightly differs from the paraxial approximation of 1.33. These z-stack images were imported into MATLAB for analysis. For the axial PSF, XZ and YZ images were created at the center of a bead, and a line plot was made at an angle maximizing the axial intensity spread, thereby preventing underestimation of the PSF due to tilted focal shifts. For the radial PSF, an XY image was found at the maximum intensity position axially, and a line scan in X and Y was made. Gaussian curves were fit to the individual line scans to extract FWHM measurements. The radial PSF values are an average of the X PSF and Y PSF, and the axial PSF is an average of the axial PSFs found from the XZ and YZ images. Excitation PSF measurements were performed both on axis and at the edges of the FOV for both imaging pathways. Data reported (Fig. 1f and Supplementary Fig. 8b) are the mean ± S.D. of eight beads. Diesel2p instrumentation Our laser source is a Ti:sapphire pulsed laser with a central wavelength of 910 nm and an 80 MHz repetition rate (Mai-Tai, Newport). The laser first passes through a built-in pre-chirper unit (DeepSee, Newport), followed by an external custom-built single-prism pre-chirper 17. The material of the prism is PBH71 (Swamp Optics) with a refractive index of 1.89. The laser power is controlled using a half-wave plate (AHWP05M-980, Thorlabs) followed by a polarization beam splitting cube (CCM5-PBS203, Thorlabs). Similar polarization optics were used to split the beam into two paths and control the relative power between the two paths. Prior to splitting, the beam was expanded using a 2× beam expander (GBE02-B, Thorlabs). One beam travels directly to a deformable mirror (DM140A-35-UM01, Boston Micromachines Corporation), and the other beam is first diverted to a delay arm and subsequently to a separate deformable mirror. The delay arm is designed to impart a 6.25 ns temporal offset to the pulses in one beam (1.875 m additional path length). As the laser pulses are delivered at 12.5 ns intervals (80 MHz), they are evenly spaced in time at 160 MHz after the two beams are recombined. Before the recombination, both pathways pass through their own scan engines, each comprising an x-resonant scanner (CRS8KHz, Cambridge Technology), an x-galvo scanner (6220H, Cambridge Technology), and a y-galvo scanner (6220H, Cambridge Technology) in series. These scanners are connected by custom-designed afocal relays. The two beams are recombined with another polarization beam splitter (PC75K095, Rocky Mountain Instrument). A scan lens and tube lens formed a 4× telescope. Together with a short-pass dichroic mirror, they relayed the expanded beams to the back aperture of the custom objective. The entrance scan angles at the objective back aperture were ~±5 degrees, yielding the 5 mm FOV.
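A quick consistency check (a sketch, using only figures quoted in the Methods above) shows how the stated focal lengths and scan angles reproduce the 4× telescope and the ~5 mm FOV:

```python
# Consistency check of the quoted optics: the scan/tube lens pair forms the
# ~4x beam-expanding telescope, and +/-5 degree scan angles at the 30-mm
# objective give the ~5-mm full field-of-view.
import math

f_scan, f_tube, f_obj = 61.0, 243.0, 30.0   # effective focal lengths (mm)

magnification = f_tube / f_scan             # ~3.98, i.e. the "4x telescope"
half_angle = math.radians(5.0)              # scan angle at objective back aperture
fov = 2.0 * f_obj * math.tan(half_angle)    # ~5.25 mm full field-of-view

print(f"telescope magnification ~ {magnification:.2f}x")
print(f"FOV ~ {fov:.2f} mm")
```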
The generated fluorescence from the imaging plane is directed to a photomultiplier tube (PMT, H10770PA-40 MOD, Hamamatsu) via an assembly of three collection lenses (AC508-100-A-ML, AC254-030-A-ML, Thorlabs; 48425, Edmund Optics). The ultrafast 1040 nm laser used for dual-wavelength imaging was an ALTAIR IR-10 (Spark Lasers). The optics are achromatic for the 910 nm and 1050 nm wavelength windows, so both wavelengths can be used simultaneously in either or both paths without any reconfiguration of the imaging system. AO optimization We adopted a sensorless, model-based approach for the wavefront correction. Eleven of the first 15 Zernike modes are corrected sequentially and manually to maximize the brightness of the image. The piston, tip, tilt, and defocus modes are not adjusted. In general, three iterations of adjustment across the 11 modes reach a plateau of brightness. Compensation is location-dependent: an optimal configuration for one area is not applicable to other areas. Look-up-table approaches could be used to vary corrections during scanning, given the fast update rate of deformable mirrors (~0.8–1.5 ms). When the AO serves as a remote focusing component, only the defocus term of the Zernike coefficients is adjusted to move the imaging plane. The current maximum defocusing range is 120 µm (±60 µm from the middle plane), limited by the 3.5 µm maximum stroke of the Boston Micromachines deformable mirror. The ALPAO unit has a stroke of 80 µm, which is almost 23-fold more, and thus can offer a greater defocus range. Photon counting electronics Output from the photomultiplier tube was first amplified with a high-bandwidth amplifier (C5594-44, Hamamatsu) and then split into two channels (ZFSC-2-2A, Mini-Circuits). One channel was delayed relative to the other by 6.25 ns using a delay box (DB64, Stanford Research Systems). Each channel was connected to a fast discriminator (TD2000, Fast ComTec GmbH). The ~80 MHz synchronization output pulses from the laser were delivered to a third fast discriminator (TD2000, Fast ComTec GmbH), which has a continuous potentiometer adjustment of the output pulse width from 1 ns to 30 ns. This output pulse was delivered to the common veto input on the previous two TD2000 discriminators, where the PMT outputs were collected. The veto width was adjusted by the potentiometer on the third TD2000 discriminator, and the relative phase of the veto window was adjusted by delaying the synchronization pulses from the laser module using the DB64 delay box. Output from each TD2000 is sent to a channel of the digitizer of the vDAQ card (Vidrio Technologies). Digitized signals were arranged into images with the indicated pixel count in the ScanImage software (Vidrio Technologies). In this manner, we could demultiplex the single PMT output into two channels corresponding to the two excitation pathways. Pulse characterization at the focal plane We used frequency-resolved optical gating (FROG) measurements to retrieve the pulse characteristics at the focal plane, using three off-axis parabolic (OAP) mirrors and a FROG system (GRENOUILLE 8-50-334-USB, Swamp Optics). We used reflective OAP mirrors to avoid the post-focus chromatic and spatial dispersion that would be introduced if refractive lenses were used; OAP mirrors help retain the original pulse characteristics. The focused laser at the focal plane was collimated by the first OAP mirror (MPD00M9-M01, Thorlabs).
The beam then needed to be reduced for measurement, so it was refocused by the second OAP mirror (MPD169-M01, Thorlabs) and re-collimated by a third OAP mirror (MPD129-M01, Thorlabs). At this point, the beam size of the collimated laser was small enough to fit the FROG apparatus' entrance aperture. The FROG traces were retrieved, and the pulse width, pulse front tilt, and spatial dispersion were calculated with the built-in retrieval algorithm (QuickFrog, Swamp Optics). The focus was parked at three FOV locations (on-axis, 1.25-mm off-axis, and 2.5-mm off-axis) for the FROG measurements by deflecting the angle of the X-galvo scanner (0, 5, and 10 degrees), corresponding to angles of 0, 2.5, and 5.0 degrees off-axis at the entrance pupil of the objective. Rotating the X-galvo scanner (instead of the Y-galvo scanner) maximizes the off-axis traveling pathway for the deflected laser beam in the system, and thus provides an upper limit on any distortions detected. Statistics and reproducibility For the resolution measurements in Fig. 1f, eight or seven beads were measured at each location, and all data points are plotted. As this study aims to demonstrate the functionality of a microscopy technique rather than to draw biological conclusions, replicate animal experiments were not performed. Reporting summary Further information on research design is available in the Nature Research Reporting Summary linked to this article. Data availability The data sets reported here are openly available in figshare. Code availability The code used in this work is publicly available as detailed above. If additional materials are required for replication, the authors invite such requests.
Advancing our understanding of the human brain will require new insights into how neural circuitry works in mammals, including laboratory mice. These investigations require monitoring brain activity with a microscope that provides resolution high enough to see individual neurons and their neighbors. Two-photon fluorescence microscopy has significantly enhanced researchers' ability to do just that, and the lab of Spencer LaVere Smith, an associate professor in the Department of Electrical and Computer Engineering at UC Santa Barbara, is a hotbed of research for advancing the technology. As principal investigator on the five-year, $9 million NSF-funded Next Generation Multiphoton Neuroimaging Consortium (Nemonic) hub, which was born of President Obama's BRAIN Initiative and is headquartered at UCSB, Smith is working to "push the frontiers of multi-photon microscopy for neuroscience research." In the Nov. 17 issue of Nature Communications, Smith and his co-authors report the development of a new microscope they describe as "Dual Independent Enhanced Scan Engines for Large Field-of-view Two-Photon imaging (Diesel2p)." Their two-photon microscope provides unprecedented brain-imaging ability. The device has the largest field of view (up to 25 square millimeters) of any such instrument, allowing it to provide subcellular resolution of multiple areas of the brain. "We're optimizing for three things: resolution to see individual neurons, a field of view to capture multiple brain regions simultaneously, and imaging speed to capture changes in neuron activity during behavior," Smith explained. "The events that we're interested in imaging last less than a second, so we don't have time to move the microscope; we have to get everything in one shot, while still making sure that the optics can focus ultrafast pulses of laser light." The powerful lasers that drive two-photon imaging systems, each costing about $250,000, deliver ultrafast, ultra-intense pulses of light, each of which is more than a billion times brighter than sunlight, and lasts 0.0001 nanosecond. A single beam, with 80 million pulses per second, is split into two wholly independent scan engine arms, enabling the microscope to scan two regions simultaneously, with each configured to different imaging parameters. In previous iterations of the instrument, the two lasers were yoked and configured to the same parameters, an arrangement that strongly constrains sampling. Optimal scan parameters, such as frame rate and scan region size, vary across distributed neural circuitry and experimental requirements, and the new instrument allows for different scan parameters to be used for both beams. The new device, which incorporates several custom-designed and custom-manufactured elements, including the optical relays, the scan lens, the tube lens and the objective lens, is already being broadly adopted for its ability to provide high-speed imaging of neural activity in widely scattered brain regions. Smith is committed to ensuring open access to the instrument. Long before this new paper was published, he and his co-authors released a preprint that included the engineering details needed to replicate it. They also shared the technology with colleagues at Boston University, where researchers in Jerry Chen's lab have already made modifications to suit their own experiments. "This is exciting," Smith said. "They didn't have to start from scratch like we did. They could build off of our work. 
Jerry's paper was published back-to-back with ours, and two companies, INSS and CoSys, have sold systems based on our designs. Since there is no patent, and won't be, this technology is free for all to use and modify however they see fit." Two-photon microscopy is a specialized type of fluorescence microscopy. To perform such work in Smith's lab, researchers genetically engineer mice so that their neurons contain a fluorescent indicator of neuron activity. The indicator was made by combining a fluorescent protein from jellyfish and a calcium-binding protein that exists in nature. The approach leverages the brief, orders-of-magnitude increase in calcium that a neuron experiences when firing. When the laser is pointed at the neuron, and the neuron is firing, calcium comes in, the protein binds the calcium and, ultimately, fluoresces. Two-photon imaging enhances fluorescence microscopy by employing the quantum behavior of photons in a way that prevents a considerable amount of out-of-focus fluorescence light from being generated. In normal optical microscopy, the light from the source used to excite the sample enters it in a way that produces a vertical cone of light that narrows down to the target focus area, and then an inverted cone below that point. Any light that is not at the narrowest point is out of focus. The light in a two-photon microscope behaves differently, creating a single point of light (and no cones of light) that is in sharp focus, so that essentially no out-of-focus light reaches the imaging lens. "The image reveals only light from that plane we're looking at, without much background signal from above or below the plane," Smith explained. "The brain has optical properties and a texture like butter; it's full of lipids and aqueous solutions that make it hard to see through. With normal optical imaging, you can see only the very top of the brain. Two-photon imaging allows us to image deeper down and still attain sub-cellular resolution." Another advantage of two-photon excitation is that it uses lower-energy, longer-wavelength light (in the near-infrared range). Such light scatters less when passing through tissue, so it can be sharply focused deeper into tissue. Moreover, the lower-energy light is less damaging to the sample than shorter wavelengths, such as ultraviolet light. Smith's lab tested the device in experiments on mice, observing their brains while they performed tasks such as watching videos or navigating virtual reality environments. Each mouse received a glass implant in its skull, providing a literal window for the microscope into its brain. "I'm motivated by trying to understand the computational principles in neural circuitry that let us do interesting things that we can't currently replicate in machines," he said. "We can build machines that do a lot of things better than we can. But for other things, we can't. We train teenagers to drive cars, but self-driving cars fail in a wide array of situations where humans do not. The systems we use for deep learning are based on insights from the brain, but they are only a few insights, and not the whole story. They work pretty well, but are still fragile. By comparison, I can put a mouse in a room where it has never been, and it will run to someplace where I can't reach it. It won't run into any walls. It does this super reliably and runs on about a watt of power. 
"There are interesting computational principles that we cannot yet replicate in human-made machines that exist in the brains of mice," Smith continued, "and I want to start to uncover that. It's why I wanted to build this microscope."
10.1038/s41467-021-26736-4
Chemistry
Cost-effective catalyst converts CO2 into natural gas
"Electrocatalytic reduction of carbon dioxide to carbon monoxide and methane at an immobilized cobalt protoporphyrin." Nature Communications 6, Article number: 8177 DOI: 10.1038/ncomms9177 Journal information: Nature Communications
http://dx.doi.org/10.1038/ncomms9177
https://phys.org/news/2015-09-cost-effective-catalyst-co2-natural-gas.html
Abstract The electrochemical conversion of carbon dioxide and water into useful products is a major challenge in facilitating a closed carbon cycle. Here we report a cobalt protoporphyrin immobilized on a pyrolytic graphite electrode that reduces carbon dioxide in an aqueous acidic solution at relatively low overpotential (0.5 V), with an efficiency and selectivity comparable to the best porphyrin-based electrocatalyst in the literature. While carbon monoxide is the main reduction product, we also observe methane as a by-product. The results of our detailed pH-dependent studies are explained consistently by a mechanism in which carbon dioxide is activated by the cobalt protoporphyrin through the stabilization of a radical intermediate, which acts as a Brønsted base. The basic character of this intermediate explains how the carbon dioxide reduction circumvents a concerted proton–electron transfer mechanism, in contrast to hydrogen evolution. Our results and their mechanistic interpretations suggest strategies for designing improved catalysts. Introduction The efficient electrochemical reduction of carbon dioxide to a fuel with a high energy density would be a major step forward in the introduction of a CO 2 -neutral energy cycle, as it would allow for the direct low-temperature conversion of photo-generated electrical current to stored chemical energy, in a manner very similar to the way nature stores solar energy. Plants fix CO 2 from the atmosphere by photosynthesis in an enzymatic complex called Rubisco, which selectively binds CO 2 and inserts it into existing carbon chains by reductive carboxylation. The high-energy electrons necessary for this process are photo-generated by photosystem II. Synthetic catalysts for the electrocatalytic reduction of CO 2 , which could facilitate such an artificial CO 2 -neutral redox cycle, have been studied for many decades 1 , 2 , 3 , 4 . A main challenge in electrochemical CO 2 reduction is to develop catalysts that are capable of reducing CO 2 beyond the two-electron products carbon monoxide (CO), formic acid (HCOOH), and oxalate (C 2 O 4 2− ). Unfortunately, the formation of reduction products requiring four or more electrons is invariably associated with considerable overpotentials due to the multiple intermediates involved in the reaction mechanisms 5 (although more reduced products often have higher stability and correspondingly more positive equilibrium potentials). Metallic copper is unique in producing significant amounts of high-energy multi-electron transfer products such as methane, ethylene and ethanol 3 , 6 , 7 . Molecular catalysts that are capable of reducing CO 2 to a product different from one of the above-mentioned two-electron products are much less common and typically involve a strong interaction with the working electrode 8 . A second important challenge in CO 2 electrocatalysis concerns the suppression of the concomitant evolution of hydrogen, which is a dominant side reaction for CO 2 reduction from aqueous electrolytes. Strategies for suppressing hydrogen evolution typically involve working with high(er) CO 2 -to-proton ratios, such as high CO 2 pressures or solvents with a higher CO 2 solubility. Recent fundamental and theoretical work has reconsidered porphyrin-based molecular catalysts for electrochemical CO 2 reduction. Tripkovic et al . 
9 have performed extensive density functional theory calculations of metal-functionalized porphyrin-like graphene surfaces, and predicted the potential formation of methane and methanol from CO 2 . Costentin et al . 10 considered ligand modifications of iron-based porphyrins and found that local proton sources built into the porphyrin ring give rise to high activity and good Faradaic efficiency (FE) for the reduction of CO 2 to CO in a mixed DMF–water solvent. In fact, it has been known since the early 1980s that cobalt (Co)-based macrocyclic complexes, either in solution or adsorbed onto carbon electrodes, act as effective electrocatalysts for CO 2 reduction, producing CO, HCOOH, methanol and methane, although at relatively high overpotential and with varying selectivity 11 , 12 , 13 , 14 , 15 . Herein, we report on the electrochemical reduction of CO 2 to CO and methane, as well as smaller amounts of HCOOH and methanol, on a simple Co protoporphyrin molecular catalyst immobilized onto a pyrolytic graphite (PG) electrode in a purely aqueous electrolyte solution. Previous similar work using immobilized Co porphyrins or Co phthalocyanines has shown the capability of Co-based catalysts to achieve a high FE towards CO, which is highly sensitive to pH and potential 16 , 17 , 18 . Our work confirms that immobilized Co-based porphyrins are good CO 2 reduction electrocatalysts capable of producing multi-electron products such as methane and methanol. More significantly, our work underscores the important role of pH in steering the catalytic activity and selectivity towards CO and CH 4 , especially in the very narrow pH=1–3 range in the absence of coordinating anions. This high sensitivity to pH is explained by a mechanism highlighting the important role of the initial electron transfer in activating CO 2 electrochemically. We also demonstrate how such a mechanism for CO 2 reduction manifests experimentally and how this property can be exploited to suppress concomitant hydrogen evolution. Furthermore, we show that the overpotential and corresponding turnover frequency (TOF) for CO 2 reduction of our catalyst compare favourably with the best molecular porphyrin-based catalyst in the literature 10 . Therefore, we believe that these insights may have significant implications for the design of new and improved molecular catalyst electrodes and for the formulation of optimized process conditions for efficient electrochemical CO 2 reduction to CO as well as to more deeply reduced products. Results Voltammetry and online electrochemical mass spectrometry The Co protoporphyrin-coated PG (CoPP-PG) electrode was prepared following a procedure described earlier 19 , as detailed in the Methods section. In situ electrochemical scanning tunnelling microscopy and atomic force microscopy images of iron and zinc protoporphyrins on basal plane graphite electrodes by Tao et al . 20 suggest that these molecules form monolayer films on the electrode with the molecules lying flat. The blank cyclic voltammograms of the PG electrode, the CoPP-PG electrode in 0.1 M HClO 4 and the voltammetry of the dissolved CoPP in the same electrolyte are compared in Supplementary Fig. 1 . The voltammetry in Supplementary Fig. 1 shows the reversible redox peak of the Co 3+ /Co 2+ transition at 0.8–0.85 V versus reversible hydrogen electrode (RHE), from which the coverage of the Co-PP on the PG electrode can be determined to be ca. 
4 × 10 −10 mol cm −2 , which is in good agreement with previous experiments of protoporphyrins on PG 19 , 21 . No further redox transition of the CoPP is observed at more negative potential, with the onset of hydrogen evolution being at ca. −0.5 V RHE . However, we note that we have previously observed a Co 2+ /Co + transition at ca. −0.6 V versus NHE for CoPP immobilized in a DDAB (didodecyl dimethylammonium bromide) film on PG 19 . The observation of this peak in the DDAB films may be related to the higher hydrophobicity of DDAB. The Co 2+ /Co + redox transition has previously been associated with the onset of electrocatalytic hydrogen evolution on Co porphyrins 22 . Figure 1 shows the voltammetry at 1 mV s −1 of the CoPP-PG electrode in unbuffered 0.1 M perchlorate solution of pH=1–3, saturated with CO 2 , together with the mass signals corresponding to H 2 ( m/z =2), CH 4 ( m/z =15, corresponding to the CH 3 fragment) and CO (m/z=28) as measured simultaneously using online electrochemical mass spectrometry (OLEMS) 23 . The OLEMS experiment samples the gases formed at the electrode surface by a tip covered with a hydrophobic membrane placed at a distance of ca. 10 μm from the surface. This technique can follow gas production online during cyclic voltammetry (CV). Calibration of our experiment is cumbersome as the signals depend on parameters that are not easy to control (tip distance and tip porosity). Quantitative measurements were therefore performed using long-term electrolysis combined with gas chromatography (to be discussed later). Depending on the quality of the gas-sensing tip used in the OLEMS experiment shown in Fig. 1 , m/z =31 was also measured, corresponding to the formation of methanol ( Supplementary Fig. 2 ). Using high-performance liquid chromatography (HPLC), we could also detect HCOOH as one of the products ( Supplementary Fig. 3 ), although both HCOOH and methanol appear to be minority products under these conditions. This confirms, for the first time in a single study, that all four products, CO, HCOOH, CH 3 OH and CH 4 can be formed from CO 2 reduction on a Co-based porphyrin. Figure 1a,d,g measured at pH=1 shows that the reduction current is accompanied by the simultaneous formation of H 2 and CH 4 . The m/z =28 signal in Fig. 1 was not corrected for the CO 2 fragmentation, and therefore the CO signal combines CO production from CO 2 electroreduction with CO formation from CO 2 fragmentation in the mass spectrometer (MS). This explains why the CO signal decreases for more negative potentials at which the CO 2 reduction rate is higher, as a result of the lower local CO 2 concentration near the electrode surface. However, at pH=2 and 3, an increase in the CO signal with more negative potential is observed, simultaneously with the CH 4 production, suggesting that CO is an intermediate in the reaction (as also suggested by the fact that CO may be reduced to CH 4 on CoPP-PG; Fig. 4 below). Most significantly, at pH=3, CO and CH 4 production is observed at less-negative potentials than H 2 evolution, showing that the CO 2 reduction has a different pH dependence from the hydrogen evolution reaction. We chose to restrict ourselves to pH≤3 in perchlorate solution in order to avoid the interference of buffering anions such as bicarbonate or phosphate (see below) with the CO 2 reduction process. Figure 1: Voltammetry and volatile product identification by online electrochemical mass spectrometry. 
This figure shows the electrochemical reduction of CO 2 on Co protoporphyrin immobilized on a PG electrode and the various volatile products detected by OLEMS. ( a ) CV in 0.1 M HClO 4 ; ( b ) CV in 10 mM HClO 4 +90 mM NaClO 4 ; ( c ) CV in 1 mM HClO 4 +99 mM NaClO 4 ; ( d ) m/z =2 (H 2 ) signal in 0.1 M HClO 4 ; ( e ) m/z =2 (H 2 ) signal in 10 mM HClO 4 +90 mM NaClO 4 ; ( f ) m/z =2 (H 2 ) signal in 1 mM HClO 4 +99 mM NaClO 4 ; ( g ) m/z =15 (CH 4 ) signal in 0.1 M HClO 4 ; ( h ) m/z =15 (CH 4 ) signal in 10 mM HClO 4 +90 mM NaClO 4 ; ( i ) m/z =15 (CH 4 ) signal in 1 mM HClO 4 +99 mM NaClO 4 ; ( j ) m/z =28 (CO) signal in 0.1 M HClO 4 ; ( k ) m/z =28 (CO) signal in 10 mM HClO 4 +90 mM NaClO 4 ; ( l ) m/z =28 (CO) signal in 1 mM HClO 4 +99 mM NaClO 4 . Scan rate was in all cases 1 mV s −1 . Blue lines are negative-going (forward) scans; magenta lines are positive-going (return) scans. Supplementary Fig. 4 shows the same data with the unnormalized MS signals, as well as the signals obtained in the first and second CV scan. Full size image Figure 4: Identification of volatile products by OLEMS during electrochemical reduction of CO and HCHO. CV of CO reduction in ( a ) 100 mM HClO 4 and ( b ) 1 mM HClO 4 +99 mM NaClO 4 saturated with CO with associated mass fragments of volatile products detected with OLEMS. ( c ) CV of HCHO (5 mM) reduction in 100 mM HClO 4 with associated mass fragments measured with OLEMS. ( d – f ) The corresponding OLEMS signals for m/z =2 (H 2 ); ( g – i ) The corresponding OLEMS signals for m/z =15 (CH 4 ). Scan rate: 1 mV s −1 . Blue lines are negative-going (forward) scans; magenta lines are positive-going (return) scans. Supplementary Fig. 14 shows the same data with the unnormalized MS signals, as well as the signals obtained in the first and second CV scan. Full size image We have performed a number of experiments to convince ourselves that the Co-PP is indeed the active catalytic centre turning over dissolved CO 2 . On the unmodified PG electrode and on a PG electrode modified with Co-free protoporphyrin, H 2 evolution was observed, but no CO 2 reduction ( Supplementary Figs 5 and 6 ). A PG electrode onto which a small amount of Co was electrodeposited was also tested for CO 2 reduction, but showed no activity ( Supplementary Fig. 7 ). Finally, the reduction of isotopically labelled 13 CO 2 in deuterated water yielded m/z =19 (corresponding to 13 CD 3 ) as reduction product ( Supplementary Fig. 8 ), which irrefutably proves the reduction of dissolved CO 2 into methane. These combined results show that the immobilized Co protoporphyrin is responsible for the production of CO and methane from CO 2 electroreduction. As mentioned, the most important conclusion from Fig. 1 is the remarkable role of the pH. Initially, we performed the CO 2 reduction experiments at pH=2 and 3 in buffered phosphate solution, also yielding methane as a product but with a pH dependence that was not straightforward to understand. Therefore, we decided to remove the buffering phosphate anions, as they are suspected to interfere with the reactivity by coordinating to the catalytic centre 24 or interacting with the catalytic intermediates. In non-adsorbing perchlorate solution, the role of the proton concentration can be better understood by comparing the voltammetry of the CoPP-PG in the absence of CO 2 at pH=1–3, as shown in Fig. 2 . At pH=1, there is only a single catalytic reduction wave in the potential window studied, corresponding to the reduction of H + to H 2 . 
The voltammetry at pH=2 and 3 shows two waves, one at less-negative potential that is proportional to the H + concentration and corresponds to H + reduction, and one starting at −1.1 V that corresponds to H 2 O reduction. This is also reflected in the H 2 formation profiles observed in the mass signals in Fig. 1 . We must also take into account here that, because of the relatively low proton concentration at pH=3, the direct proton reduction quickly runs into diffusion limitations, and further H 2 evolution can only take place at more negative potentials by direct water reduction, which does not suffer from such diffusion limitations. By comparing the results in Figs 1 and 2 , we conclude that H 2 evolution dominates over CO 2 reduction in the presence of a high concentration of protons in solution, whereas the opposite is the case for pH=3. The activation of CO 2 is apparently less sensitive to the presence of protons, implying that water molecules are just as capable of hydrogenating the activated CO 2 . This remarkable pH dependence is somewhat similar to observations made by Noda et al . 25 during CO 2 reduction on a gold electrode. The important new finding here is that this small pH shift is key in favouring CO 2 reduction over H 2 evolution, also on our molecular catalyst, especially in the absence of buffering anions. This is also evidenced by the FE measurements summarized in Fig. 3 , to be discussed next. A mechanistic explanation for this pH sensitivity will be given in the Discussion section. Figure 2: pH dependence of the hydrogen evolution reaction on the CoPP-PG electrode. Hydrogen evolution reaction at pH=1 (black curve), pH=2 (red curve) and pH=3 (blue curve) on the Co protoporphyrin-modified PG electrode in the absence of CO 2 . Inset: magnification of the voltammetry at pH=3. Scan rate was in all cases 100 mV s −1 . All electrolyte solutions were 0.1 M perchlorate, with different ratios of H + and Na + . Full size image Figure 3: FE of carbon dioxide reduction to CO and methane. FEs to CO and CH 4 were determined for yellow bars: pH=1, P CO2 =1 atm; blue bars: pH=1, P CO2 =10 atm; magenta bars: pH=3, P CO2 =1 atm; and black bars: pH=3, P CO2 =10 atm. FE of ( a ) CH 4 and ( b ) CO in 0.1 M perchlorate solution saturated with CO 2 . At each potential, the electrolysis was conducted for 1 h at P CO2 =1 atm, while it was 90 min at P CO2 =10 atm due to the longer time needed to reach steady state. Error bars were determined from 3–8 data points based on samples taken every 6 min during the steady state of a single electrolysis run. Full size image Faradaic efficiency The FE for the simultaneous CO 2 and water reduction to hydrogen, CO and methane was determined separately with long-term electrolysis experiments, using a gas chromatography setup coupled to an electrochemical cell, as detailed elsewhere 26 , 27 . Figure 3 shows results for CO and CH 4 at pH=1 and 3 for different potentials. The remaining current is used to form H 2 . The quantitative data and error bars are summarized and further explained in Supplementary Table 1 . HCOOH was also observed as a minority product at pH=1 using HPLC, but was not observed at pH=3 ( Supplementary Fig. 3 ). As mentioned above, methanol was observed as a product using OLEMS ( Supplementary Fig. 2 ), but it remained below the detection limit during the gas chromatography (GC) measurements. 
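For readers unfamiliar with how such efficiencies are extracted, the bookkeeping is standard: multiply the moles of a product found by GC by the number of electrons needed per molecule and by the Faraday constant, then divide by the total charge passed. The sketch below illustrates this; the input numbers are hypothetical, chosen only to echo the scale of these experiments.

```python
# Minimal sketch of a Faradaic-efficiency calculation from electrolysis
# charge and a GC-quantified product amount (hypothetical numbers).
F = 96485.0  # Faraday constant, C/mol

def faradaic_efficiency(n_product_mol, electrons_per_molecule, charge_C):
    """Fraction of the total charge that went into the given product."""
    return electrons_per_molecule * F * n_product_mol / charge_C

# Example: ~0.29 C passed in total (roughly 0.08 mA cm^-2 for 1 h on 1 cm^2);
# suppose GC finds 0.6 umol of CO (2 electrons per CO molecule).
q_total = 0.29   # C
n_co = 0.6e-6    # mol
print(f"FE(CO) = {faradaic_efficiency(n_co, 2, q_total):.0%}")   # ~40%
# Methane needs 8 electrons per molecule, so the same molar amount of CH4
# would consume four times as much charge as CO.
```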
At pH=1, the FE to CO and methane is low, on the order of a per cent, and the dominant product is H 2 ; therefore, for pH=1, we show results at only a single potential in Fig. 3 . Note, however, that at pH=1, more methane is produced than CO. At pH=3, a dramatic change in selectivity is observed, with CO now being a majority product, especially at less cathodic potentials, for which the FE to CO is ∼ 40%. This high selectivity is maintained for at least 1 h during the long-term electrolysis experiment at fixed potential ( Supplementary Fig. 9 ), testifying to the good stability of the catalyst. The stability and integrity of the CoPP-PG electrode were also confirmed by pre- and post-electrolysis analysis using X-ray photoelectron spectroscopy (XPS), Raman and nuclear magnetic resonance ( Supplementary Figs 10–12 ). Raman spectroscopy showed no significant change in the spectral features of the CoPP-PG surface; XPS showed no change in Co oxidation state after 1 h of electrolysis; and nuclear magnetic resonance showed no decomposition products in solution that could be related to CoPP. Figure 3 also illustrates that less methane is produced at pH=3 as compared with pH=1. We ascribe this lower methane production to the slower reduction of CO to CH 4 at pH=3 compared with pH=1 (see next paragraph). The efficiency towards CO can be further boosted by performing the experiment at higher CO 2 pressure. Figure 3 illustrates this for a CO 2 pressure of 10 atm, which leads to an FE of ∼ 60% at pH=3 at a potential of −0.6 V. Note that at pH=1, the efficiencies towards both CO and CH 4 increase to a few per cent when the reduction is carried out at increased CO 2 pressure. We emphasize that the OLEMS and GC experiments exhibited good consistency and reproducibility. The error bars shown in Fig. 3 were based on single long-term electrolysis experiments sampled every 6 min. Reduction of other compounds To determine the involvement of potential intermediates, we also studied the reduction of HCOOH, CO and formaldehyde (HCHO) by combined voltammetry-OLEMS. HCOOH was not reduced at either pH=1 or 3 ( Supplementary Fig. 13 ), and is therefore an end product, not an intermediate. Figure 4 shows the voltammetry and associated OLEMS mass signals on the CoPP-PG electrode for CO reduction at pH=1 and 3, and for HCHO reduction at pH=1. Remarkably, CO is clearly reduced to methane at pH=1, simultaneously with H 2 evolution, but the CO reduction activity is much lower compared with hydrogen evolution at pH=3, with an insignificant amount of CH 4 detected. This observation is consistent with the results in Fig. 3 , showing that methane production from CO 2 is lower at pH=3. HCHO is reduced to methane at pH=1 and 3 ( Fig. 4 only shows pH=1). Interestingly, HCHO is not reduced to significant amounts of methanol, whereas methanol is the product of HCHO reduction on copper electrodes 6 . Figure 4 suggests that CO and HCHO, or their catalyst-bound derivatives, are intermediates in the reaction mechanism from CO 2 to CH 4 , but HCOOH is not. It also shows that the reduction of CO exhibits a different pH dependence compared with CO 2 reduction, explaining why the selectivity of CO 2 towards CO increases with higher pH, but the selectivity towards CH 4 decreases with higher pH. 
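The boost from higher CO 2 pressure has a simple first-order rationale: the dissolved CO 2 concentration scales roughly linearly with the partial pressure (Henry's law), so ten times the pressure gives roughly ten times the CO 2 available at the electrode. In the sketch below, the Henry constant is a textbook value for CO 2 in water at 25 °C, not a number from this paper.

```python
# Rough Henry's-law estimate of dissolved CO2 versus pressure.
K_H = 0.034  # mol L^-1 atm^-1 for CO2 in water at 25 C (textbook value)

for p_atm in (1, 10):
    c = K_H * p_atm
    print(f"p(CO2) = {p_atm:2d} atm -> [CO2] ~ {c:.2f} M")
# Tilting the CO2/H+ ratio this way favours CO2 reduction over hydrogen
# evolution, consistent with the higher FE reported at 10 atm above.
```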
Discussion The results presented above give unique new insights into the mechanism of CO 2 electroreduction on immobilized Co protoporphyrins, and the observed pH dependence reveals the important role of the initial electron transfer to CO 2 in the overall mechanism, as explained below and as illustrated in our suggested mechanistic scheme in Fig. 5 . At pH=1, the dominant reaction is hydrogen evolution:

2H + + 2e − → H 2 (1)

Figure 5: Proposed mechanistic scheme for the electrochemical reduction of CO 2 on Co protoporphyrin. H + and H 2 O are the hydrogen sources for the hydrogen evolution reaction at pH=1 and 3, respectively. CO 2 ·− is the initial intermediate for the CO 2 reduction to CO. CO can be further reduced to methane with HCHO as an intermediate. The catalytically inactive ‘resting’ state of the Co is assumed to be 2+. The reduction of Co 2+ to Co + is thought to trigger both the H 2 evolution and CO 2 reduction pathways. Full size image At pH=3, the main origin of hydrogen evolution is direct water reduction:

2H 2 O + 2e − → H 2 + 2OH − (2)

with reaction 1 generating a smaller amount of H 2 at less-negative potential due to diffusion limitations ( Fig. 2 ). This observation is very similar to recent experiments on platinum electrodes 28 . The observation that CO 2 reduction to CO becomes much more dominant at higher pH must mean that CO 2 activation does not sensitively depend on the presence of protons, and hence must involve an intermediate that can easily react with water at any pH. Such an intermediate is most likely a negatively charged Brønsted base, and the most obvious candidate for this intermediate is a CO 2 radical anion 25 , 29 , 30 bound to the Co complex ‘M’:

M + CO 2 + e − → M–CO 2 ·− (3)

which subsequently reacts with water to form a metal-bound carboxyhydroxyl intermediate:

M–CO 2 ·− + H 2 O → M–COOH + OH − (4)

The formation of the CO 2 ·− radical anion normally has a very negative redox potential 3 , 8 , but may be shifted to a less-negative potential by the stabilization provided by the coordination of CO 2 ·− to the catalyst. The carboxyhydroxyl intermediate then generates CO:

M–COOH + e − → M–CO + OH − (5)

with the CO subsequently dissociating from the complex. Owing to the presence of the negatively charged intermediate in reaction 4, the pH dependence of this pathway is different from that of the mechanism for reactions 1 and 2, in which no such intermediate is assumed. For reactions 1 and 2, we assume:

M + H + + e − → M–H, M–H + H + + e − → M + H 2 (6)

and

M + H 2 O + e − → M–H + OH − , M–H + H 2 O + e − → M + H 2 + OH − (7)

which involve concerted proton-coupled electron transfer at every step 31 , 32 . Reaction 4 is different from the reaction suggested by the Density Functional Theory (DFT) calculations of Leung et al . 29 , 30 because we specify that the proton donor may be water, rather than H + , owing to the basic character of the CO 2 radical anion intermediate. Note that in this mechanism, the reaction rate for CO 2 reduction itself does not depend on pH; only its rate relative to that of hydrogen evolution does. Another way of formulating our mechanism is by stating that in the potential window of interest, CO 2 reduction is approximately zeroth order in proton concentration, while hydrogen evolution is first order in proton concentration. The further reduction of CO must be slower than its generation, explaining the relatively low overall FE of CO 2 reduction to methane. To explain the pH dependence of CO reduction and methane selectivity from CO 2 , we must assume that CO is reduced to methane without the involvement of negatively charged intermediates. Our experiments also show that an intermediate or by-product of CO reduction to methane is HCHO. Our suggested overall mechanism is summarized in Fig. 5 . 
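The competition implied by this mechanism can be caricatured in a few lines: hydrogen evolution (reaction 1) is first order in proton concentration, while CO 2 activation (reaction 3) is approximately zeroth order. The rate constants below are hypothetical placeholders chosen only to display the trend; they are not fitted to the data.

```python
# Toy model of the HER/CO2-reduction competition implied by the mechanism.
k_h2 = 1.0e3   # HER prefactor (per M of H+), arbitrary units, hypothetical
k_co2 = 0.5    # CO2-activation rate, independent of [H+], same units

for ph in (1, 2, 3):
    h_conc = 10.0 ** (-ph)          # proton concentration, M
    r_h2 = k_h2 * h_conc            # first order in [H+]
    r_co2 = k_co2                   # zeroth order in [H+]
    sel = r_co2 / (r_co2 + r_h2)    # fraction of current to CO2 reduction
    print(f"pH={ph}: CO2-reduction share = {sel:.1%}")
# With these placeholders the CO2 pathway grows from a minor channel at
# pH=1 to a major one at pH=3, mirroring the observed selectivity shift.
```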
The above mechanism, which we believe explains our observations consistently, has important implications for future catalyst design. The onset potential for CO 2 reduction is determined by reaction 3, that is, by the stabilization of the CO 2 radical anion coordinated to the complex. As noted above, the onset potential appears to be related to the Co 2+ /Co + redox transition on the basis of CV 19 and also on the previous observation that the Co + state is the active state for proton reduction 22 . Nielsen and Leung have also concluded, based on literature data and their own DFT calculations, that CO 2 binds to the Co + state of the porphyrin 29 , 30 . Therefore, we assume that Co + state of the CoPP is the catalytically active state. The closer the Co 2+ /Co + redox potential lies to the overall equilibrium potential, the lower is the overpotential for CO 2 reduction. Reaction 3 is therefore the potential-determining step 33 , 34 . The key point is that the formation of this intermediate is decoupled from proton transfer, as otherwise we cannot explain the observed pH dependence, an important feature not included in the recent DFT calculations of Tripkovic et al . 9 . Therefore, future calculations must take into account the existence of such intermediates, and should aim at enhancing the stability of the intermediate in reaction 3. Moreover, in order to have a higher overall efficiency towards methane, the rate of the reduction of CO to methane must be enhanced. Presumably, the rate of this reaction can be tuned by the binding of CO to the complex. This will also require further experiments and calculations aimed at screening various catalyst alternatives. We also believe that our mechanism provides a possible rationale for tuning the H 2 /CO ratio from electrochemical CO 2 reduction, as was recently reported for a Ru-based molecular catalyst in aqueous solution 35 . A final word on the overpotential and the TOFs of our catalyst in comparison with previous work on molecular catalysts for CO 2 electroreduction to CO. From our experiment, we calculate TOFs through the formula: (FE for CO production) × (current density/2 F )/(number of Co-PP per cm 2 ), where F =Faraday constant. In Fig. 3 , the average current densities measured over 1 h at potentials of −0.6 and −0.8 V versus RHE, corresponding to overpotentials of ca. 0.5 and 0.7 V, were 0.08 and 0.16 mA cm −2 (at atmospheric pressure), respectively. This corresponds to TOFs of ca. 0.2 and 0.8 s −1 . Costentin et al . 10 have recently reported on the enhanced activity of a modified Fe tetraphenylporphyrin for CO 2 reduction to CO in a mixed DMF–water solvent. In their experiment, the porphyrin was in solution. Their measured current densities and corresponding effective CO 2 turnover rates are very similar to ours, namely, 0.3 mA cm −2 (see Supplementary Fig. 5 in their paper) at a similar overpotential of ca. 0.5 V. Note that this comparison does not take into account that the solubility of CO 2 is considerably higher in DMF–water mixtures than in water 36 , thereby leading to correspondingly higher turnover rates in the DMF–water mixture. From a mathematical model for their reactive system including mass transport of the catalyst to the electrode surface, they report a catalytic TOF of ca. 3,000 s −1 . This is a TOF of a homogeneous catalyst corrected for the slow mass transport in their system, and can therefore not be compared directly with the ‘effective’ TOF of our heterogeneous catalyst. 
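To make the TOF arithmetic above easy to reproduce, the sketch below evaluates the quoted formula with the surface coverage from the CV analysis and the two average current densities. The FE values are approximate readings of Fig. 3 and should be treated as illustrative inputs, not exact paper values.

```python
# Effective turnover frequency from the formula quoted above:
# TOF = FE(CO) * j / (2F) / (moles of Co-PP per cm^2).
F = 96485.0        # Faraday constant, C/mol
coverage = 4e-10   # mol of Co-PP per cm2, from the CV analysis above

def tof(fe_co, j_ma_cm2):
    """Effective TOF in s^-1 per surface-bound Co site."""
    j = j_ma_cm2 * 1e-3   # convert mA/cm2 to A/cm2
    return fe_co * j / (2 * F) / coverage

print(f"-0.6 V: TOF ~ {tof(0.25, 0.08):.2f} s^-1")   # reported: ca. 0.2 s^-1
print(f"-0.8 V: TOF ~ {tof(0.40, 0.16):.2f} s^-1")   # reported: ca. 0.8 s^-1
```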
However, from the similar real current densities at a similar overpotential, we believe that we can safely state that our immobilized catalyst system has a similar efficiency. Summarizing, we have shown that a Co protoporphyrin immobilized on a PG electrode can reduce CO 2 to CO and even to the 6- and 8-electron products methanol and methane, in a purely aqueous electrolyte phase, with a moderate overpotential of ca. 0.5 V. The efficiency of our catalyst (that is, the effective rate at a given overpotential) compares favourably with the best porphyrin-based catalyst reported in the literature 10 . For optimal FE, that is, low concomitant H 2 production, the proton concentration needs to be suitably tuned to the CO 2 concentration. The pH-dependent activity and selectivity are explained by a mechanism in which the initial step of CO 2 reduction leads to a catalyst-bound CO 2 ·− radical anion. This intermediate has a strong Brønsted-base character and can abstract a proton from water, thereby leading to an overall reactivity of the CO 2 reduction whose pH dependence is substantially different from that of the competing H 2 evolution. Lowering the potential for the formation of this catalyst-bound CO 2 ·− radical anion is therefore the key to making a better catalyst with a lower overpotential, and a suitable adjustment of pH will contribute significantly to a high FE of such a catalyst. The further reduction of CO to methane and methanol is slow owing to the weak binding of CO to the catalyst, and owing to the fact that CO reduction prefers a more acidic environment. These new insights into the mechanism of CO 2 reduction on immobilized molecular catalysts in aqueous solution provide important design rules for future catalyst improvement. Methods Electrochemistry and chemicals The experiments were performed on home-made PG electrodes (Carbone-Lorraine; diameter, 5 mm). Before each experiment, the electrodes were polished using P500 and P1000 SiC sandpaper consecutively, ultrasonicated in ultrapure water (Milli-Q gradient A10 system, 18.2 MΩ cm) for 1 min and dried in a stream of compressed air. The electrodes were subsequently immersed in the Co protoporphyrin (Frontier Scientific) solution (0.5 mM in borate buffer) for 5 min to immobilize the protoporphyrin on the surface and rinsed with ultrapure water before the experiments. A one-compartment electrochemical cell was used, with a platinum flag as counter electrode and an RHE as reference, to which all potentials in this work are referred. The reference electrode was separated from the working electrode compartment through a Luggin capillary. An Ivium potentiostat/galvanostat (IviumStat) was used for the electrochemical measurements. Solutions were prepared from HClO 4 (Merck, 70%), NaClO 4 (Sigma-Aldrich, ≥98.0%), NaOH (Sigma-Aldrich, 99.998%), borate (Sigma-Aldrich) and ultrapure water. Argon (Hoekloos, purity grade 6.0) was purged through the solutions for 30 min before the experiment to remove dissolved oxygen. The reported current densities refer to the geometric surface area. Online electrochemical mass spectrometry The volatile products of the CO 2 electrochemical reduction were detected using online electrochemical mass spectrometry (OLEMS) with an evolution mass spectrometer system (European Spectrometry systems Ltd) 23 . A porous Teflon tip (inner diameter, 0.5 mm) with a pore size of 10–14 μm was positioned close ( ∼ 10 μm) to the centre of the electrode. 
Before the experiments, the tip was dipped into a 0.2-M K 2 Cr 2 O 7 in 2 M H 2 SO 4 solution for 15 min and rinsed thoroughly with ultrapure water. The gas products were collected through a polyether ether ketone (PEEK) capillary into the mass spectrometer. A 2,400-V secondary electron multiplier (SEM) voltage was applied for all the fragments except for hydrogen ( m/z =2), for which 1,500 V was used. The OLEMS measurement was conducted while the CV was scanned from 0 to −1.5 V and back at a scan rate of 1 mV s −1 . Gas chromatography The quantitative measurements of the gas products were carried out using GC 26 , 27 . At atmospheric pressure, CO 2 was continuously purged through a two-compartment flow cell with a volume of 12 ml for each compartment at a rate of 5 ml min −1 for 30 min to saturate the electrolyte. The flow rate was then reduced to 2 ml min −1 while a constant potential was applied for 1 h. The reference electrode used here was a Ag/AgCl electrode. The experiments at high CO 2 pressure ( P =10 atm) were conducted in a stainless-steel autoclave using a Pt mesh as a counter electrode, and a home-made Ag/AgCl in 3 M KCl as a reference electrode. All potentials were scaled to RHE after the experiments for both atmospheric and high pressure, with E (versus RHE)= E (versus Ag/AgCl)+0.197 V+(0.059 V) × pH. CO 2 was continuously purged through the autoclave before and during the electrolysis with a flow rate of 50 ml min −1 . The reactor effluent was sampled via GC once every 6 min. CO, CO 2 , H 2 and hydrocarbons were simultaneously separated using two columns in series (a ShinCarbon 2 m micropacked column and a Rtx-1 column). The quantitative analysis of the gas products was performed using a thermal conductivity detector (H 2 and CO) and a flame ionization detector (hydrocarbons). Online HPLC HPLC (Prominence HPLC, Shimadzu) was used to detect liquid products produced during the electrochemical reduction of CO 2 using a method described in previous work 37 . Samples were collected using a Teflon tip (inner diameter: 0.38 mm) positioned ∼ 10 μm from the centre of the electrode surface (diameter: 1 cm). The sample volume collected was 60 μl, stored in a 96-well microtitre plate (270 μl per well, Screening Device b.v.) using an automatic fraction collector (FRC-10A, Shimadzu). The flow rate of the sample collection was adjusted to 60 μl min −1 with a Shimadzu pump (LC-20AT). A linear sweep voltammogram was recorded while the samples were being collected, at a scan rate of 1 mV s −1 from 0 to −1.5 V versus RHE. The microtitre plate with the collected samples was then placed in an auto-sampler (SIL-20A) holder and 30 μl of sample was injected into an Aminex HPX 87-H (Bio-Rad) column. The eluent was diluted sulfuric acid (5 mM) with a flow rate of 0.6 ml min −1 . The temperature of the column was maintained at 85 °C using a column oven (CTO-20A) and the separated compounds were detected with a refractive index detector (RID-10A). Additional information How to cite this article: Shen, J. et al . Electrocatalytic reduction of carbon dioxide to carbon monoxide and methane at an immobilized cobalt protoporphyrin. Nat. Commun. 6:8177 doi: 10.1038/ncomms9177 (2015).
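As a small footnote to the Methods, the Ag/AgCl-to-RHE rescaling used above can be packaged as a one-line helper; the example numbers are hypothetical.

```python
# E(RHE) = E(Ag/AgCl) + 0.197 V + 0.059 V * pH, as quoted in the Methods.
def ag_agcl_to_rhe(e_ag_agcl_v: float, ph: float) -> float:
    """Convert a potential vs Ag/AgCl (3 M KCl) to the RHE scale."""
    return e_ag_agcl_v + 0.197 + 0.059 * ph

# Example: -1.0 V vs Ag/AgCl at pH 3 is about -0.63 V vs RHE.
print(f"{ag_agcl_to_rhe(-1.0, 3):.3f} V vs RHE")
```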
A discovery made in Leiden helps not only to make natural gas from CO2 but also to store renewable energy. Research by Professor Marc Koper and PhD student Jing Shen shows how this process can be implemented in a cost-effective and controllable way. The conversion of the greenhouse gas CO2 into natural gas is achieved using a chemical process in which CO2 is bubbled through an acid solution. The solution contains a graphite electrode – to which a small negative voltage is applied – with a cobalt-porphyrin catalyst attached to it. It was already known that this catalyst can convert CO2 into carbon monoxide and methane, but the reaction always released unwanted hydrogen. In their investigation, Koper and Shen show for the first time how the process works. They therefore know exactly which acidity (pH) minimises the amount of hydrogen and converts as much CO2 as possible into natural gas. Common materials An added benefit is that the catalyst is made up entirely of common materials. Cobalt porphyrin is closely related to a building block of vitamin B12, while the graphite for the electrode is similar to a pencil lead. The catalyst therefore costs only a few euros. Comparable methods of converting CO2 into methane often use rare and expensive metals, such as platinum. Realising a dream Koper hopes that this discovery will bring his dream a little closer to realisation: to convert CO2 and water, the by-products of burning fuels, into new energy or building blocks for the chemical industry. If this can be achieved using solar energy, this process will also offer a method of storing renewable energy. Using renewable energy efficiently 'We're generating more and more electricity using solar panels and wind turbines, but that energy is by no means always used straight away,' Koper explains. 'In Germany, for example, too much renewable electricity is sometimes generated, so you want to store it. That is the most important potential application of our research: to use renewable energy efficiently by converting water and CO2 into valuable products.' A fundamentally different way Still, Koper thinks that it will take a while to get to that point. 'This is something for the long term and it could be another fifty years before we have a method that makes valuable products and is also robust, scalable and cost-effective. But I'm nevertheless convinced that this is the way to go. It will not be easy, but this discovery is helpful. We have to find a fundamentally different way to manage energy, and our discovery can contribute to that.'
10.1038/ncomms9177
Nano
Smart sensor detects single molecule in chemical compounds
Yuanhui Zheng et al. Reversible gating of smart plasmonic molecular traps using thermoresponsive polymers for single-molecule detection, Nature Communications (2015). DOI: 10.1038/ncomms9797 Journal information: Nature Communications
http://dx.doi.org/10.1038/ncomms9797
https://phys.org/news/2015-11-smart-sensor-molecule-chemical-compounds.html
Abstract Single-molecule surface-enhanced Raman spectroscopy (SERS) has attracted increasing interest for chemical and biochemical sensing. Many conventional substrates have a broad distribution of SERS enhancements, which compromises reproducibility and results in slow response times for single-molecule detection. Here we report a smart plasmonic sensor that can reversibly trap a single molecule at hotspots for rapid single-molecule detection. The sensor was fabricated through electrostatic self-assembly of gold nanoparticles onto a gold/silica-coated silicon substrate, producing a high yield of uniformly distributed hotspots on the surface. The hotspots were isolated with a monolayer of a thermoresponsive polymer (poly( N -isopropylacrylamide)), which acts as a gate for molecular trapping at the hotspots. The sensor shows not only good SERS reproducibility but also the capability to repetitively trap and release molecules for single-molecule sensing. The single-molecule sensitivity is experimentally verified using SERS spectral blinking and bianalyte methods. Introduction Surface-enhanced Raman spectroscopy (SERS) is one of the few techniques that are capable of detecting and identifying chemical and biological compounds with single-molecule sensitivity 1 , 2 , 3 , 4 , 5 , 6 . This technique takes advantage of plasmonic (metal) nanostructures to amplify Raman signals. A unique feature of these metal nanostructures is that they show a resonant oscillation of their conduction electrons on light irradiation. This light-matter interaction leads to an enormous electromagnetic field enhancement in the close vicinity of the metal surfaces. The field enhancement is particularly strong at sharp corners or tips 1 , 7 , interparticle gaps 8 , 9 , 10 , 11 , 12 , 13 and nanoscale pores 4 , 14 , typically referred to as ‘hotspots’. Although the importance of hotspots has been both experimentally and theoretically demonstrated for SERS sensing 1 , 7 , 8 , 9 , 10 , 11 , 12 , 13 , 14 , the fraction of analytes adsorbed to the hotspots of a conventional SERS substrate is extremely small due to the low spatial occupation of hotspots per unit area 14 , 15 . For example, a silver film-over-nanosphere SERS substrate showed a wide distribution of SERS enhancement factors (EFs) ranging from 2.8 × 10 4 to > 1 × 10 10 (ref. 15 ). Yet the hottest spots, with SERS EFs larger than 10 9 , accounted for only 63 out of a million total Raman-active sites 15 . There is therefore a pressing need for the development of innovative SERS substrates that have a large number of uniformly distributed hotspots and in which the analyte molecules can be confined at the hotspots. Several concepts have been developed with the aim of adsorbing target analytes only at the hotspots 16 . The most straightforward one is the isolation of hotspots with a chemically inert material. Diebold et al. 17 developed a near-field optical lithography method to isolate hotspots on a macroscopic SERS substrate composed of an array of nanocones covered by a thin layer of a photoresist. The excitation of the nanocones with a laser scanning across the substrate results in a strong near field at the tips of the cones (that is, hotspots), which causes preferential exposure of the photoresist at the hotspots. The removal of the exposed photoresist yields a substrate for which only the hotspots are available as binding sites. A requirement for detection with such a sensor, however, is that the analytes have a strong affinity for the metal. 
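The scarcity argument above can be made quantitative with a toy calculation. The 63-per-million figure is the one quoted from ref. 15; the "typical" enhancement assigned to the remaining sites is a hypothetical placeholder, chosen only to show how a rare hot tail can dominate the ensemble signal.

```python
# Toy estimate of how much of the total SERS signal the rare hottest
# sites can carry (illustrative numbers except the 63-per-million figure).
n_sites = 1_000_000
hot_sites, hot_ef = 63, 1e9                      # quoted: 63 per million with EF > 1e9
cold_sites, cold_ef = n_sites - hot_sites, 1e5   # assumed typical EF, hypothetical

signal_hot = hot_sites * hot_ef
signal_cold = cold_sites * cold_ef
share = signal_hot / (signal_hot + signal_cold)
print(f"hot sites are {hot_sites/n_sites:.4%} of sites "
      f"but carry ~{share:.0%} of the signal")
```

If an analyte lands at random, it almost never lands on a hot site, which is exactly why substrates that steer molecules into hotspots are attractive.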
A promising alternative approach is analyte trapping at hotspots. Hu et al. 18 demonstrated a molecular trap based on gold-coated flexible polymer fingers for SERS sensing. The tips of these gold nanofingers were brought together by the capillary force of solvent evaporation, leaving molecules trapped between the tips 18 . This drying process, however, also inevitably deposits analytes outside the hotspots. Álvarez-Puebla et al. 19 developed a more controllable trapping system made of microgels. These microgels are composed of stimuli-responsive polymer-coated gold nanoparticles (AuNPs). The polymer shell either swelled or collapsed in response to the external temperature. This change in volume was utilized as a means to trap the analytes and bring them close to the metal surface, where the electromagnetic field is significantly enhanced. However, the overall SERS enhancement from these individual colloidal nanoparticles (NPs) is usually insufficient for single-molecule detection. To date, many complex plasmonic nanostructures, such as film-coupled metallic NPs (also referred to as NPs-on-mirror) 20 , 21 , 22 , 23 , 24 , 25 , 26 , 27 , 28 , metal NP assemblies 2 , 3 , 6 , 7 , 8 , 9 , 10 , 11 , 12 and porous metal films 4 , 14 have been fabricated for SERS applications. Among all of these, film-coupled metallic NPs are of special interest for two reasons. First, the simplicity of this system makes it an ideal model for theoretical simulation studies 24 , 25 , 26 , 27 . Second, it has been shown that such a system enables SERS-based single-molecule detection 23 . In this work, we develop a smart plasmonic molecular trap based on a well-established film-coupled AuNP system on a silica-coated silicon optical interference substrate and demonstrate a gating mechanism to control the trapping and release of analytes at the particle–substrate gaps (that is, hotspots) for SERS-based single-molecule detection. Silica-coated silicon substrates are chosen as the silica layer can generate an additional SERS enhancement of up to 50 times due to an interference effect 29 . The hotspots of the molecular trap developed here are isolated with a self-assembled monolayer of thermoresponsive polymer, which acts as a gate for reversible molecular trapping at the hotspots. The trapped molecules can be subsequently released after SERS sensing. This reversible trapping process makes it possible not only to detect an abundance of analytes in one measurement but also to reuse the SERS substrate multiple times. Results Sensor fabrication The fabrication of the smart plasmonic molecular traps and their SERS sensing mechanism are schematically illustrated in Fig. 1 . Gold/silica-coated silicon substrates were fabricated by evaporation of a 15-nm gold film on a 110-nm silica-coated silicon wafer using 3-mercaptopropyltrimethoxysilane as an adhesion layer. A freshly prepared gold/silica-coated silicon optical interference substrate was exposed to an ethanolic solution of 6-amino-1-hexanethiol (AHT) to form a self-assembled monolayer on the gold film (step 1), which confers a net positive surface charge on the substrate at neutral pH 12 , 30 . Commercially available spherical AuNPs (average diameter: 80 nm) functionalized with monothiolated DNA (referred to as DNA-AuNPs) were used as building blocks to produce an array of well-spaced NPs on the AHT-modified substrate. 
Exposing the negatively charged DNA-AuNPs to the AHT-modified substrate resulted in NP adsorption on the substrate driven by electrostatic attractions between the particles and the substrate (step 2). The strong repulsive electrostatic forces between DNA-AuNPs predetermine their separation during the assembly. These forces ultimately depend on parameters such as the AuNP concentration, surface charge density and the ionic strength of the medium 11 . All of these can be experimentally controlled. This allows us to achieve high levels of surface coverage of AuNPs on the substrate while keeping neighbouring particles far enough apart to avoid surface plasmon coupling. Once the AuNP array was formed, the substrate was exposed to a dithiothreitol (DTT) aqueous solution. This results in the binding of the AuNPs to the underlying gold film and displacement of the DNA and AHT with DTT (step 3). The DTT molecules outside the particle–substrate gaps are selectively removed by oxygen plasma etching (step 4). Figure 1: Fabrication and sensing mechanism. An optical interference substrate, composed of 15 nm gold/110 nm silica on a silicon wafer, is modified with a monolayer of AHT (step 1). The AHT functionalized substrate is exposed to a solution of DNA-AuNPs to allow for their electrostatic adsorption (step 2). The DNA and AHT are then displaced with DTT (step 3). The DTT molecules outside the particle–substrate gap (that is, hotspot region) are selectively removed by oxygen plasma etching (step 4). Following the oxygen plasma treatment, the substrate is exposed to an ethanolic solution of HS-PNIPAM to allow for the formation of a self-assembled monolayer on the AuNP and the gold film, isolating the hotspots (step 5). For SERS sensing, the substrate is exposed to an analyte (for example, rhodamine 6G) solution at a temperature (50 °C) higher than the LCST ( ∼ 34.5 °C) of the polymer. This temperature triggers the shrinkage of the polymers to allow the analyte molecules to flow into the molecular traps (step 6). Subsequently, the substrate is cooled down to a temperature (4 °C) well below the LCST of the polymer. In this case, the polymer shell expands and the analyte molecules are trapped at the hotspots (step 7). Excess analyte molecules are removed by washing with a cold (4 °C) aqueous solution before the SERS measurements. After the SERS measurements, the substrate is exposed to a hot (50 °C) aqueous solution to release the analyte molecules (step 8) and then separated by disposing of the solution (step 9). Full size image Following the oxygen plasma treatment, the substrate was exposed to an ethanolic solution of thiolated poly( N -isopropyl acrylamide) (HS-PNIPAM, M w =4.7 × 10 4 g mol −1 ) to allow for the adsorption of PNIPAM on the AuNP and the gold film surfaces via thiol-gold bonds (step 5). The thiolated PNIPAM used here was synthesized according to the method described by Wong et al. 31 ( Supplementary Methods ) and its lower critical solution temperature (LCST) was determined to be ∼ 34.5 °C ( Supplementary Fig. 1 ). The PNIPAM is in an extended conformation in the ethanolic solution 32 and cannot adsorb to the particle–substrate gap due to steric hindrance, yielding molecular traps with isolated SERS hotspots. For SERS sensing, the molecular traps were exposed to an analyte, for example, rhodamine 6G (R6G), solution at a temperature (50 °C) higher than the LCST. 
The high temperature induces the shrinkage of the polymers, allowing the analyte solution to flow into the molecular traps (step 6). Subsequently, the molecular traps were cooled down to a temperature (4 °C) that is lower than the LCST. At this temperature, the polymers expand to their original conformation and the analyte molecules are captured in the molecular traps (step 7). During the analyte trapping process, an oxalic acid solution is used to adjust the pH of the analyte solution to 2, where very few of the carboxyl groups on the polymer are deprotonated 16 . This minimizes the nonspecific adsorption of the analyte on the polymer shell through electrostatic interaction. The non-adsorbed analyte molecules were removed by washing with the cold oxalic acid solution. Thereafter, the sample dried spontaneously in air at room temperature ( ∼ 25 °C) upon removal from the cold oxalic acid solution. This process happens within a few seconds. During this drying process, the thermoresponsive polymers remain in the extended conformation, as the polymer’s LCST is much higher than room temperature. When drying in air, the trapped analyte molecules are drawn to the centre (the hottest region) of the molecular traps by the capillary force of the solvent evaporation 33 . The molecular traps are then ready for the SERS measurements. Following the SERS measurements, the molecular traps were exposed to a hot (50 °C) oxalic acid solution to release the analytes (step 8) and then separated from the solution (step 9). After the analyte molecules were released, the molecular traps are ready for the next cycle of Raman spectroscopic analysis. Sensor characterization and near-field simulation Figure 2a shows a typical scanning electron microscopy image of the produced AuNP array on a gold/silica-coated silicon substrate. As shown in Fig. 2a , all particles are coated with PNIPAM as indicated by the darker ring (that is, polymer shell) around the particles (see inset). The observation of carbon, nitrogen and sulphur signals from the sample by X-ray photoelectron spectroscopy further confirms the presence of PNIPAM on the gold surfaces ( Supplementary Fig. 2 ). The atomic percentages of these three elements are listed in Supplementary Table 1 . The C–C/C–N/O=C–N ratio derived from the X-ray photoelectron spectroscopy measurement is 4.3:1:1.1, which is in good agreement with the theoretical value (4:1:1). The particle density and the polymer shell thickness are estimated to be ∼ 14 particles per μm 2 and ∼ 50 nm, respectively. The distance between each AuNP and its nearest neighbour was determined via image analysis. The statistical analysis ( Fig. 2b ) shows an average nearest-neighbour distance of 138 nm with a s.d. of 38 nm. At such separation distances there is no coupling between particles. The optical properties of the AuNP arrays on gold-coated glass substrates were recorded using ultraviolet–visible absorption spectroscopy ( Fig. 2c , black line). The sample shows two distinct surface plasmon resonance peaks at 520 and 710 nm, which are ascribed to the dipole surface plasmon resonances parallel and perpendicular to the gold film, respectively 24 , 25 , 26 , 27 , 28 . To describe the film-coupled spheres semi-quantitatively, we simulated their absorption spectrum and electric field enhancement using the three-dimensional finite-difference time-domain method. By carefully adjusting the particle–substrate distance in our modelling ( Supplementary Fig. 
3 ), we were able to reproduce the qualitative features of the measured absorption spectrum ( Fig. 2c , red line). From the simulations, the particle–substrate distance is estimated to be ∼ 0.7 nm, which is slightly smaller than the length of DTT molecules ( ∼ 1.0 nm). This suggests some molecular rearrangement within the gap between the nanoparticle and the underlying gold substrate 34 . Figure 2d shows the maximum local electromagnetic field intensity enhancement in the nanogap region (gap size: 0.7 nm) with respect to source intensity. It can be clearly seen that the enhancement mainly occurs in the range of 530–800 nm. To achieve maximum SERS enhancement, we chose a 633-nm laser as the excitation source (dashed line); a typical Raman spectrum (0–2,000 cm −1 , grey shading) falls within the maximum enhancement region. Figure 2e,f show the spatial SERS enhancement factor (| E | 4 /| E 0 | 4 ) distributions of a single film-coupled sphere at the excitation wavelength of 633 nm. It is clear that the field enhancement is localized in the gap between the particle and the gold film. The average SERS EF originating from near-field coupling is estimated to be ∼ 10 9 at the hotspot using the SERS EF boundary criterion of 10 7 . The corresponding hotspot volume is calculated to be 48 nm 3 ( Supplementary Fig. 3 and Supplementary Table 2 ). Figure 2: Characterization and simulations. ( a ) A typical scanning electron microscopy micrograph of an array of 80 nm AuNPs on a gold film covered with a monolayer of HS-PNIPAM (scale bar, 100 nm). The insets are of an AuNP on gold film before (upper) and after (lower) surface modification with the HS-PNIPAM. ( b ) Distance analysis of the self-assembled AuNPs: the distance between each AuNP and its nearest neighbour was measured (edge to edge) using image analysis. ( c ) Absorption spectra of the AuNPs on gold film (experimental: black line; calculated: red line). ( d ) Calculated maximum field intensity enhancement at the hotspot as a function of wavelength in the range of 400–900 nm (particle–substrate gap: 0.7 nm). The dashed line and grey shaded area show the laser wavelength and Raman shift region of interest, respectively. ( e , f ) Simulated spatial SERS enhancement factor (| E | 4 /| E 0 | 4 ) distributions at an 80-nm AuNP–15-nm Au film junction sampled along the planes vertical ( xz ) and horizontal ( xy ) to the sample plane, respectively. Full size image Reversible molecular trapping and high SERS reproducibility One application of the smart plasmonic molecular traps is molecular sensing based on surface-enhanced Raman signals at the hotspots. R6G, one of the most widely used Raman-active dyes, has an absorption maximum at 545 nm, which drops almost to zero at wavelengths above 600 nm ( Supplementary Fig. 4 ). It can therefore be considered a non-resonant dye at 633 nm excitation 6 . Previous studies have shown that a SERS EF of ∼ 10 7 is sufficient to detect single R6G molecules adsorbed on AgNP aggregates at 633 nm laser excitation 6 . As discussed earlier, the smart molecular traps developed here exhibit a high average SERS EF of ∼ 10 9 , which allows them to detect single molecules. To demonstrate this potential, we investigate their SERS activity using R6G as a model analyte. Figure 3a shows the SERS activity of the smart molecular traps at the different stages of the sensing scheme illustrated in Fig. 1 . All of the spectra were obtained at 633 nm laser excitation. 
Prominent Raman modes at 621, 1,200, 1,280, 1,360, 1,510 and 1,642 cm −1 , originating from R6G 5 , 6 , 35 , are observed (red line) when the molecular trap was exposed to a 100-μM R6G solution at 50 °C for 3 min and then cooled down to 4 °C (see Approach 1 in Methods for the experimental details). The exposure of the molecular trap to the high temperature causes the polymer shells to collapse, allowing the analyte solution to flow into the molecular traps. Subsequent cooling of the solution brings the polymer chains back to their original extended conformation, trapping the analyte molecules. The trapped molecules are brought to the hotspot region, where SERS EFs exceed 10 7 , driven by the capillary force of solvent evaporation. Further exposure to a hot oxalic acid solution leads to the shrinkage of the polymer shells and the release of the trapped molecules from the hotspots. This results in a marked decrease of the Raman signals (purple line). These weak residual Raman signals are ascribed to R6G molecules nonspecifically adsorbed on the polymer shells through electrostatic or hydrophobic interactions between the analyte and the polymer, as they were also observed when the molecular trap was exposed to the analyte solution at 4 °C (black line). This nonspecific adsorption can be completely removed by washing with a mixture of water and methanol (blue line). After the complete removal of the nonspecific adsorption, the molecular trap is ready for the next sensing cycle. Figure 3b shows the cyclic sensing capability and reusability of the molecular trap. Similar SERS intensities at the 621 cm −1 Raman peak are observed in each of five consecutive sensing cycles. Figure 3: SERS performance using rhodamine 6G as a model analyte. ( a ) SERS activity at the different stages of the sensing scheme illustrated in Fig. 1 : analyte trapped (red curve); analyte released (purple curve); and nonspecific adsorption removed (blue curve). For comparison, a control experiment with analyte loading at 4 °C (black curve) is provided in a . ( b ) Cycling SERS activity, ( c ) substrate-to-substrate SERS intensity variation at 621 cm −1 measured for five different substrates (each intensity value represents the average of 10 measurements at different spots and the s.d.'s for each sample are shown as error bars) and ( d ) analyte concentration-dependent SERS activity. The analyte-loading concentration for the samples shown in a – c is 100 μM. For all samples, λ ex = 633 nm, P ex ≈ 3.5 mW, acquisition time = 10 s and laser spot size ≈ 2 μm 2 . Full size image Good reproducibility and high sensitivity are two key requirements for an ideal SERS sensor. We therefore undertook a statistical analysis to quantify the variation in the SERS signal intensity between different locations on one substrate (spot-to-spot variation) and between different substrates (substrate-to-substrate variation). Figure 3c shows the spot-to-spot Raman intensity variation of five sensors fabricated independently via the process shown in Fig. 1 . For each sensor, the 621 cm −1 Raman peak height was measured at 10 different spots. The highest spot-to-spot coefficient of variation among the five samples is 15.7% and the substrate-to-substrate coefficient of variation is about 5.6%. Such good reproducibility indicates that the self-assembly process presented here provides excellent control over the particle density and the particle–substrate distance.
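The two variation figures just quoted are coefficients of variation (s.d. divided by mean). A minimal sketch of the bookkeeping, with made-up peak heights standing in for the measured 621 cm −1 intensities (the values and array shape are assumptions):

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical 621 cm^-1 peak heights: 5 substrates x 10 spots each
# (arbitrary units; the real values come from the measured Raman spectra).
intensities = rng.normal(loc=1.0, scale=0.12, size=(5, 10))

# Spot-to-spot CV: per substrate, s.d. across its 10 spots over their mean.
spot_cv = intensities.std(axis=1, ddof=1) / intensities.mean(axis=1)
print(f"worst spot-to-spot CV: {100 * spot_cv.max():.1f}%")

# Substrate-to-substrate CV: computed on the five per-substrate means.
means = intensities.mean(axis=1)
print(f"substrate-to-substrate CV: {100 * means.std(ddof=1) / means.mean():.1f}%")
```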
We also evaluated the detection limit of the sensor. Figure 3d shows the Raman intensity of R6G at 621 cm −1 as a function of its concentration. As expected, the Raman intensity decreases with decreasing R6G loading concentration. At elevated R6G concentrations (for example, 10 and 100 μM), many molecules are trapped at hotspots. At the same concentrations, R6G can also nonspecifically adsorb on the areas outside the hotspots. The observed SERS signals therefore come from both the trapped and the nonspecifically adsorbed molecules. When the R6G concentration is decreased to the point where single-molecule trapping is reached, statistically there are areas with no analytes trapped in hotspots, although minor nonspecific adsorption may still occur. This inevitably results in two distribution peaks of Raman intensity. As shown in Fig. 3d , two distribution peaks of Raman intensity at 621 cm −1 are observed at an R6G concentration of 1 μM. The one at lower intensity is attributed to nonspecific adsorption, while the one at higher intensity predominantly stems from the molecules trapped at hotspots. On the basis of the particle density, the volume of a single molecular trap and the size of the laser spot, the number of analyte molecules trapped in hotspots within the observation area is estimated to be ∼ 1.1 molecules when the analyte concentration is 1 μM ( Supplementary Fig. 5 and Supplementary Note 1 ). Decreasing the analyte concentration further creates a situation in which some areas contain either trapped molecules in hotspots or nonspecifically adsorbed molecules, while other areas contain neither. This explains why we also observed two distribution peaks of SERS intensity at an R6G concentration of 0.5 μM. Interestingly, the SERS intensity of the second distribution peak at 0.5 μM is close to the intensity difference between the two distribution peaks at 1 μM. This indicates that the first SERS intensity distribution peak at 0.5 μM is related to nonspecific adsorption, while the second one corresponds to single-molecule SERS. Single-molecule SERS blinking and detection of bianalytes Several single-molecule SERS verification experiments, including Poisson distributions of intensities 36 , 37 , Raman spectral blinking 2 , 38 , 39 , 40 and the bianalyte approach 37 , 41 , 42 , 43 , have been developed. To further confirm single-molecule sensitivity, we conducted a time-dependent SERS experiment and a bianalyte experiment (see Approach 2 in Methods for the experimental details). The time-dependent SERS experiment was carried out by repetitively measuring the SERS spectra of R6G from the same spot of the molecular trap (R6G loading concentration: 1 μM). Figure 4a shows that the Raman peaks of R6G randomly appear and then disappear during the SERS measurements at the same location. This spectral blinking phenomenon was not observed at higher R6G concentrations and is considered characteristic of the behaviour of single, or a few, molecules 2 , 38 , 39 , 40 . For the bianalyte experiment, we used R6G and crystal violet (CV) as the model analytes. The concentration of each analyte in the mixture was controlled to be 0.5 μM. This concentration was chosen to ensure that approximately one molecule is trapped in each probe region (laser spot) on the substrate, based on our estimate of the number of molecular traps ( Supplementary Fig.
5 and Supplementary Note 1 ). Typical SERS spectra from four different spots are shown in Fig. 4b . Two of the spectra (blue and purple curves) show the typical fingerprint peaks of CV 44 and R6G, respectively (see Supplementary Fig. 6 for peak assignments). For a series of SERS measurements at 65 different spots, we observed that the SERS spectra were dominated by one analyte (CV: 44.6%), by the other (R6G: 15.4%), or by no molecules at all (36.9%) ( Fig. 4c ). Only 3.1% of measurements showed a mixed spectrum ( Fig. 4b , red curve). Since neither R6G nor CV has a specific affinity for the gold surface, they should have similar probabilities of being captured at the molecular traps. However, the statistical analysis of single-molecule events shows that CV has a ∼ 3 times higher probability of being present in hotspots than R6G ( Fig. 4c ). This indicates that CV may have a stronger physicochemical affinity for the gold surface than R6G. The bianalyte results shown in Fig. 4b,c are in good agreement with previous reports providing evidence for single-molecule SERS 37 , 41 , 42 , 43 . The ability to trap and detect single molecules in the micromolar range, where the majority of biomolecular interactions and enzymatic activity take place, opens potential applications in diagnostics and biosensing 45 . This requires the further development of new smart polymers that can respond to various environmental changes (for example, temperature, pH and light) 46 and show excellent biocompatibility and antifouling properties 47 . This work is under way. Figure 4: Single-molecule behaviours from the smart molecular trap. ( a ) Single-molecule blinking SERS spectra of R6G captured at a concentration of 1 μM, ( b ) single-molecule SERS detection of bianalytes: four representative SERS spectra showing no analytes (black curve); a pure CV event (blue curve); a pure R6G event (purple curve); and a mixed event (red curve), and ( c ) histogram of occurrences of none, pure R6G, pure CV and mixed molecules from 65 different spots of a molecular trap. The concentration of each analyte is 0.5 μM. SERS measurement conditions: λ ex = 633 nm; P ex ≈ 3.5 mW; acquisition time = 10 s; and laser spot size ≈ 2 μm 2 . Full size image Discussion In conclusion, we have developed a smart plasmonic sensor that consists of spherical AuNPs on a gold/silica-coated silicon optical interference substrate. The sensor is fabricated through electrostatic self-assembly of AuNPs onto the optical interference substrate. The electrostatic self-assembly strategy developed here is particularly advantageous in terms of achieving a high AuNP density while maintaining a minimum interparticle distance to avoid surface plasmon coupling between neighbouring particles. The formed particle–substrate gaps are isolated with a self-assembled monolayer of a thiolated PNIPAM, which exhibits reversible conformational changes in response to temperature. The polymer shell acts as a gate for molecular trapping at the hotspots, which show an exceptionally high average SERS EF of ∼ 10 9 calculated using the SERS EF boundary criterion of 10 7 . The reversible conformational change of the polymer shell makes it possible to reuse the sensor multiple times. The produced sensor also shows excellent SERS reproducibility as well as an ability to repetitively trap and release molecules for single-molecule sensing. Finally, this work represents a simple proof-of-concept experiment for single-molecule trapping and detection.
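The ∼1.1-molecule estimate cited above (Supplementary Note 1) follows from simple occupancy arithmetic: analyte number density times the capture volume per trap times the number of traps per laser spot. We do not have the supplementary values, so in the sketch below the capture volume per trap is a hypothetical number back-solved to be consistent with the quoted result; only the particle density and laser spot size are taken from the paper.

```python
# Back-of-the-envelope occupancy estimate (a sketch, not the paper's
# Supplementary Note 1 calculation).
N_AVOGADRO = 6.022e23            # molecules per mol

conc_uM = 1.0                    # analyte concentration (uM), from the paper
density_per_um2 = 14             # AuNP traps per um^2, from the paper
spot_um2 = 2.0                   # laser spot size (um^2), from the paper
v_trap_nm3 = 6.5e4               # ASSUMED capture volume per trap (nm^3),
                                 # back-solved to match ~1.1 molecules/spot

# 1 L = 1e24 nm^3, so convert molarity to molecules per nm^3.
molecules_per_nm3 = conc_uM * 1e-6 * N_AVOGADRO / 1e24
traps_per_spot = density_per_um2 * spot_um2
n_molecules = molecules_per_nm3 * v_trap_nm3 * traps_per_spot
print(f"~{n_molecules:.1f} molecules per laser spot")   # ~1.1
```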
The polymer used in this work can be easily extended to other stimuli-responsive polymer systems that are sensitive to humidity, pH and light. Methods Fabrication of smart plasmonic molecular traps A freshly prepared 15 nm gold/110 nm silica-coated silicon substrate (size: 4 × 6 mm) was immersed into 200 μl of 2 mM AHT ethanolic solution overnight and then washed with Milli-Q water five times. Subsequently, the substrate was placed into a humidity chamber and a few droplets of water were placed into the chamber to control the humidity. After that, 20 μl of DNA-AuNPs (particle concentration: 7.2 × 10 −11 M) was placed on the substrate, after which the substrate was incubated at room temperature for 2 h. The substrate was washed with water five times and dried with a stream of N 2 . Following the AuNP self-assembly, the substrate was subjected to the following sequence of treatments: DTT treatment (200 μl of 0.5 M DTT aqueous solution, overnight), oxygen plasma etching (200 mTorr air, 30 s; Harrick Plasma Cleaning Instrument) and HS-PNIPAM treatment (200 μl of 0.25 M HS-PNIPAM ethanolic solution, overnight). The PNIPAM-coated AuNPs on gold film were then washed with water five times to remove excess HS-PNIPAM molecules. SERS activity measurements Approach 1: A freshly prepared molecular trap was immersed into 200 μl of R6G solution containing 8 mM oxalic acid (R6G concentration: 0.5, 1, 10 or 100 μM; pH ≈ 2) and then heated to 50 °C. The sample was kept at this temperature for 3 min and then cooled down to 4 °C. Subsequently, the substrate was washed with a cold oxalic acid solution (4 °C) five times to remove excess R6G molecules. This washing process takes only 2–5 min. Thereafter, the sample was removed from the cold oxalic acid solution. The sample dried instantaneously during this process. The subsequent SERS measurements were performed in air. To release the analyte, the sample was exposed to a hot oxalic acid solution (50 °C) for 10 min and then washed with the hot oxalic acid five times. Following the final washing step, the sample was taken out of the hot oxalic acid solution and dried immediately. The SERS measurements were conducted in air. To remove the nonspecifically adsorbed R6G, the substrate was subsequently exposed to a mixture of methanol and water (volume ratio = 1:1) for 1 h. This process was repeated three times to completely remove nonspecifically adsorbed R6G. After that, the sample was taken out of the mixture and dried, which takes just a few seconds in air. Then the SERS measurements were performed in air. For comparison, a freshly prepared molecular trap was exposed to a cold oxalic acid solution (4 °C, 8 mM) containing 100 μM R6G for 3 min and washed with cold water (4 °C) five times. The substrate was dried immediately on removal from the cold oxalic acid solution and then SERS measurements were performed in air. Approach 2 (blinking and bianalyte experiments): A freshly prepared molecular trap was immersed into an 8-mM oxalic acid solution (pH ≈ 2) with a temperature of 50 °C. The polymers collapsed at this temperature, forming a denser polymer shell. This minimizes the nonspecific adsorption of target analytes in the polymer shell. Subsequently, a given volume of a mixture of 10 μM R6G and CV (ratio: 1:1; 50 °C) was added to the oxalic acid solution under vortex mixing to adjust the final concentration of each of R6G and CV to 0.5 μM. The sample was kept at 50 °C for 3 min and then cooled down to 4 °C.
Subsequently, the substrate was washed with cold oxalic acid solution (4 °C) five times to remove excess R6G and CV molecules. The substrate was dried in air before the SERS activity measurements. SERS spectra of the smart plasmonic molecular trap were recorded using a Renishaw RM 2000 Confocal micro-Raman System equipped with a laser at a wavelength of 633 nm (laser power: ∼ 10 mW; excitation power: ∼ 3.5 mW; laser spot size: ∼ 2 μm 2 ). All of the Raman spectra were collected by finely focusing through a ×50 microscope objective, and the data acquisition time was 10 s. Simulation Three-dimensional finite-difference time-domain simulations were performed on a single gold sphere on the gold-coated substrate, enclosed in a domain with a size of 200 × 200 × 400 nm 3 lined with perfectly matched layers to suppress spurious reflections. The particle–substrate gap and the refractive index of the polymer were adjusted from 0.3 to 1 nm and from 1.2 to 1.5, respectively. The square mesh size was 0.1 nm, which proved to give acceptable spatial resolution down to the nanogap size. The sphere was excited by a plane-wave total-field scattered-field source ranging from 400 to 900 nm, and the total and scattered fields were collected by sets of monitors surrounding the particle and substrate. A three-dimensional monitor was employed to measure the local normalized electric field intensities in the nanogap and integrate them inside the hotspot. The average SERS EF (EF avg ) is defined by $${\text{EF}}_{\text{avg}} = \frac{1}{V}\int_{V} \frac{|E|^{4}}{|E_{0}|^{4}}\,dV,$$ where E and E 0 are the local and incident fields, respectively, and V is the collective volume of all the hotspots, evaluated as the integral over all volume elements with SERS EFs >10 7 . Additional information How to cite this article: Zheng, Y. et al. Reversible gating of smart plasmonic molecular traps using thermoresponsive polymers for single-molecule detection. Nat. Commun. 6:8797 doi: 10.1038/ncomms9797 (2015).
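Returning to the Simulation section above: numerically, the hotspot-averaged enhancement factor amounts to masking the FDTD grid by the 10 7 criterion and integrating |E| 4 /|E 0 | 4 over the surviving cells. A sketch with synthetic stand-in field data follows; the array shape, the field distribution and the uniform-mesh handling are our assumptions, and real input would come from the FDTD field monitors.

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic stand-in for the normalized field |E|/|E0| sampled on a
# uniform 0.1 nm FDTD mesh around the nanogap (real values would come
# from the three-dimensional field monitor).
field = rng.lognormal(mean=2.0, sigma=1.5, size=(60, 60, 40))
cell_volume = 0.1 ** 3                      # nm^3 per mesh cell

ef = field ** 4                             # pointwise SERS EF, |E|^4/|E0|^4
hotspot = ef > 1e7                          # boundary criterion from the paper

# Collective hotspot volume V, then the volume-weighted average EF over it
# (for a uniform mesh this reduces to the mean over the masked cells).
v_hotspot = hotspot.sum() * cell_volume
ef_avg = (ef[hotspot] * cell_volume).sum() / v_hotspot

print(f"hotspot volume: {v_hotspot:.1f} nm^3, average EF: {ef_avg:.2e}")
```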
Australian and Italian researchers have developed a smart sensor that can detect single molecules in chemical and biological compounds – a highly valued function in medicine, security and defence. The researchers from the University of New South Wales, Swinburne University of Technology, Monash University and the University of Parma in Italy used a chemical and biochemical sensing technique called surface-enhanced Raman spectroscopy (SERS), which is used to understand more about the make-up of materials. They were able to greatly amplify the technique's performance by taking advantage of metal nanostructures, which help generate 'hotspots' in close proximity to the metal surfaces. The sensor was created using gold nanoparticles which self-assemble onto a gold- and silica-coated silicon base. This approach means the nanoparticles find the perfect spacing to achieve lots of uniformly distributed hotspots on the surface. The hotspots were also fitted with a heat-responsive polymer, which acted as a gate to trap molecules but, importantly, also allowed them to be released down the track. "The sensor shows not only a good SERS reproducibility but also the ability to repetitively catch and release molecules for single-molecular sensing," postdoctoral fellow at Swinburne's Centre for Micro-Photonics, Dr Lorenzo Rosa, said. "This reversible trapping process makes it possible to detect an abundance of analytes in one measurement, but also to reuse the SERS substrate multiple times." The technique used in this work has various applications for other measurement and detection systems sensitive to humidity, pH and light.
10.1038/ncomms9797
Computer
Football helmet smartfoam signals potential concussions in real time
A. Jake Merrell et al., Nano-Composite Foam Sensor System in Football Helmets, Annals of Biomedical Engineering (2017). DOI: 10.1007/s10439-017-1910-9 Journal information: Annals of Biomedical Engineering
http://dx.doi.org/10.1007/s10439-017-1910-9
https://techxplore.com/news/2017-09-football-helmet-smartfoam-potential-concussions.html
Abstract American football has both the highest rate and the highest number of concussions of all contact sports, due to both the number of athletes and the nature of the sport. Recent research has linked concussions with long-term health complications such as chronic traumatic encephalopathy and early-onset Alzheimer's. Understanding the mechanical characteristics of concussive impacts is critical to help protect athletes from these debilitating diseases and is now possible using helmet-based sensor systems. To date, real-time on-field measurement of head impacts has been almost exclusively performed by devices that rely on accelerometers or gyroscopes attached to the player's helmet, or embedded in a mouth guard. These systems monitor motion of the head or helmet, but do not directly measure impact energy. This paper evaluates the accuracy of a novel, multifunctional foam-based sensor that replaces a portion of the helmet foam to measure impact. All modified helmets were tested using a National Operating Committee on Standards for Athletic Equipment (NOCSAE)-style drop tower with a total of 24 drop tests (4 locations with 6 impact energies). The impacts were evaluated using a headform, instrumented with a tri-axial accelerometer, mounted to a Hybrid III neck assembly. The resultant accelerations were evaluated for both the peak acceleration and the severity indices. These data were then compared to the voltage response from multiple Nano Composite Foam sensors located throughout the helmet. The foam sensor system proved to be accurate in measuring both the head injury criterion (HIC) and the Gadd severity index, as well as peak acceleration, while also providing additional details that were previously difficult to obtain, such as impact energy. Introduction Concussions due to contact sports have received a great deal of attention in recent years. For decades, the dangers were ignored or misunderstood, but with scientific data showing they are more dangerous than originally assumed, they can no longer be overlooked. 11 , 13 , 20 , 21 , 22 Younger athletes are believed to be more susceptible to concussion than older athletes and can have severe, acute, and long-term complications that are not found in their older counterparts. 2 , 35 , 36 Furthermore, it has been found that young athletes do not consistently self-report concussion, or concussion-related symptoms, with some studies showing only 21% self-reporting. 43 A recent study found that out of 20 high school sports, football had the highest incidence of concussion, with an injury rate of 22.9 concussions per 10,000 athletic exposures, defined as one athlete participating in one athletic practice or competition. 30 Many scholars and medical professionals are looking for ways to more effectively quantify both the frequency and severity of impacts the players are experiencing. 6 , 7 , 8 , 31 , 39 , 45 With an increased understanding of athlete exposure throughout a game and even over a player's career, medical professionals and helmet designers can better identify and protect against injury. Real-time impact detection has become a reality with the introduction of consumer-based accelerometer systems. 46 Wearable devices have been developed to measure and/or calculate the head's linear and angular acceleration during impact. These devices vary in their design and function, but generally depend on several different accelerometers and gyroscopes.
These sensors have been directly implemented into helmets, patches (adhered to the skin), earplugs, skullcaps, mouthpieces, or chinstraps. 3 , 9 , 25 , 28 , 34 , 38 , 44 The accuracy of each implementation in determining the location and severity of impacts has become a focus of research. 40 One system that is often included in studies is Riddell's Head Impact Telemetry System, or HITS. 9 , 23 Riddell's HIT system has been used in studies to determine the severity and frequency of head impacts during a full season of play. 27 , 37 These data are then used to determine the effectiveness of efforts to reduce athlete concussion risk. These systems are experiencing a low adoption rate due to several factors: the expense (HITS costs $1,200 per helmet), difficulty of operation, and the limited number of helmets that are compatible with the system. Many of the current football impact measurement systems are mounted directly into or on the helmet and almost exclusively use accelerometers. These systems have been shown to overestimate head motion and head exposure. 4 , 29 Some systems, including the HITS, have attempted to reduce this disparity by using accelerometers that are pressed to the head with springs to maintain constant contact. 10 Furthermore, it has been shown that helmet fit can affect the accuracy of the HIT system. 27 Some systems attempt to directly measure head acceleration through closer contact with the head in the form of mouth guards, patches, or skull caps. 5 , 24 , 40 Other systems, such as Riddell's Insight and Shockbox's impact detection system, make no attempt to overcome this disparity through design; presumably it is addressed in post-processing. These issues are not easy to overcome and have been widely overlooked in previous work. The most widely accepted mobile gold-standard helmet sensor is Riddell's HITS. 9 , 10 Duma et al. demonstrated that Riddell's HITS was capable of real-time measurement of impacts during football practice and games. HITS correlated well with a helmet-equipped Hybrid III dummy instrumented with an accelerometer array ( R 2 = 0.97). 10 Other systems have shown similar results with different implementations. 1 , 3 , 25 , 40 However, HITS only works with two different Riddell helmets. Additionally, the other systems mentioned must be calibrated for each helmet based on where the sensor is placed on the helmet. This paper seeks to evaluate the accuracy of a new nano composite foam (NCF)-based sensor that could be adapted into existing helmet designs. An ideal system would be compatible with any helmet type and provide measurement of the impact directly experienced by the head. Materials and Methods Nano Composite Foam (NCF) Sensors This paper demonstrates the use of a new type of foam sensor that can measure impacts through a triboelectric response to compression and subsequent relaxation. The triboelectric charge is generated by an interaction between the nickel-based additives and the polyurethane foam matrix. The NCF is created by adding nickel nanoparticles and nickel-coated carbon fiber to the liquid components of polyurethane foam prior to casting. The foam is cast around a conductive electrode, which is used to measure the generated charge and transmit it to the measurement device. The NCF sensors used in this experiment used stranded copper wires; however, other NCF sensors use conductive films to measure the response. With further development, the NCF sensors can be implemented into foams currently used in helmets.
The NCF response is dependent upon several characteristics of the impact (strain rate, total strain, impact area, impact duration, etc.) which may prove helpful in head impact measures. When the foam is impacted, it creates both a positive and a negative voltage response, as shown in Fig. 1 . The NCF response scales with the magnitude of the impact and is strongly dependent upon both the impact force and the initial velocity. Impact force correlates to the maximum strain, whereas the initial impact velocity significantly affects the strain rate. Higher rates of strain will result in larger NCF charge generation if the foam does not bottom out or enter the densification region of the stress–strain curve. The NCF response maintains a linear correlation throughout the plateau region of the stress–strain curve, but that correlation breaks down as it passes into the densification region. Figure 2 demonstrates a general correlation between impact energy and NCF voltage response for strains within the plateau region. The NCF sensors were designed to keep the strain of the sensors in the lower half of the stress–strain curve, with no strains exceeding 50%. As the NCF response is dependent upon both the impact velocity and force, it allows the foam to measure the standard helmet impact metrics of interest to researchers. Figure 1 Typical NCF voltage response to dynamic deformation or impact. Full size image Figure 2 NCF peak response to varying levels of impact energy. Full size image The nature of the NCF material lends itself particularly well to the football helmet environment, where the sensor acts multifunctionally. It directly replaces the existing traditional foam padding and provides equivalent energy absorption, while also measuring impact data. Football helmets are designed with the goal of reducing the amount of energy that is transferred to the head. A portion of the energy is absorbed and dispersed in the helmet's shell, while the rest is either absorbed or transferred to the head via the foam. Because the foam sensors sit in the direct line of action between the helmet shell and the player's head, their deformation can directly measure how much of the impact energy is passed to the head. Impact Severity Measures Kinematic measures of the head are most commonly used to assess brain injury, as they are thought to be indicative of the mechanical response of the brain. The development of criteria that estimate head injury dates back to the early 1950s. Two head injury indexes have been adopted as the standards for determination of head injury: the Head Injury Criterion and the Gadd severity index. Both indexes are functions of acceleration and require the use of highly accurate accelerometers placed within the head of anthropomorphic test devices (ATDs). Head Injury Criterion The Head Injury Criterion (HIC) was initially developed for the auto industry to quantify brain injury and was based on the linear acceleration of the head. 16 , 17 , 18 , 26 , 42 The HIC is calculated as $${\text{HIC}} = \left( t_{2} - t_{1} \right)\left[ \frac{1}{t_{2} - t_{1}} \int_{t_{1}}^{t_{2}} a(t)\, dt \right]^{2.5},$$ where \(a(t)\) is the linear acceleration at the head's center of mass (measured in g ) and the interval \([t_{1}, t_{2}]\) is chosen to maximize the expression. The criterion was developed to measure the rate of kinetic energy change while determining the average value which results in injury. 26
Federal automotive regulations require that the HIC not exceed 1000; however, the threshold for concussion is even lower, with some research suggesting that a HIC of 615 ± 309 results in a concussion. 14 Gadd Severity Index The Gadd severity index 15 was developed after the HIC as a generic head injury index. Its derivation is similar to the HIC but has been simplified for easier calculation. The Gadd severity index is calculated as $$I_{\text{Gadd}} = \int a^{2.5}\, dt,$$ where a and t represent the acceleration and time, respectively. In 1973, NOCSAE adapted the Gadd index to create standards for football helmet performance. NOCSAE adjusted the index by limiting the time integration interval to periods when the acceleration exceeds 10 g. The current NOCSAE standard for newly manufactured football helmets states that the peak Gadd severity index of any impact shall not exceed 1200 SI. 32 The Gadd severity index was not intended to determine whether a blow was concussive, but rather whether it would cause loss of life. Linear and Angular Acceleration To accurately measure the effects of acceleration on the human body in car impacts, General Motors developed an anthropomorphic test dummy called the Hybrid III. The Hybrid III headform mimics human geometry, weight, inertia, and biomechanical response to impact, while measuring triaxial acceleration at the head's center of gravity. 12 The head acceleration traces recorded by the Hybrid III ATD are used to calculate the HIC, Gadd severity index, and the peak accelerations for all impacts (Fig. 3 ). Figure 3 Nano composite foam helmet sensors used in this study. The sensors replaced a portion of the foam in the helmet to create a sensing helmet. Full size image Equipment In this study, a standard Riddell 360 football helmet was modified to accommodate eight NCF sensors throughout the inner surface of the helmet (Fig. 4 ). Each Riddell 360 helmet is composed of an outer shell and three inner foam liners: the front, the top, and one piece that surrounds the rest of the head. The foam liner has inner "head side" and outer "helmet side" foam pads. Both the inner and outer foam pads are contained in a plastic liner with an additional plastic film that separates the inner foam from the outer foam. Eight separate pieces of the inner foam on the front, sides, and top were removed by cutting the plastic liner and replaced with NCF sensors of the same size and shape with similar energy-absorption characteristics (Fig. 3 ). The inner foam was selected as it is in direct contact with the head and would provide the most direct measure of the head during impact. All NCF sensors were individually connected to one central data acquisition device. Figure 4 Football helmet instrumented with eight NCF sensors which replaced existing helmet padding. Full size image All NCF sensors were attached to a National Instruments NI 9234 high-accuracy data acquisition module, sampled at 1650 Hz. Previous frequency-response testing of the NCF demonstrated that the highest frequency of interest in the NCF response is 800 Hz, so this sampling rate exceeds the corresponding Nyquist rate of 1600 Hz. The NCF sensors were connected to the NI 9234 module using 14 AWG shielded wire and connected with BNC connectors directly to the DAQ to reduce signal noise during acquisition. All data were recorded through a custom LabVIEW script, with each recording representing an individual impact event.
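Returning to the two severity indices defined above: both are straightforward to compute from a sampled acceleration trace. The following is a minimal numerical sketch (our reconstruction, not the authors' code), with HIC maximized over windows up to 36 ms and the NOCSAE-adapted Gadd index integrating a 2.5 only where the acceleration exceeds 10 g; the half-sine impact pulse is synthetic.

```python
import numpy as np

def hic(accel_g, dt, max_window=0.036):
    """Head Injury Criterion, maximized over all windows [t1, t2].
    Assumes accel_g is a resultant (non-negative) trace in g."""
    n = len(accel_g)
    w = int(max_window / dt)
    # Cumulative integral so any window average is O(1) to evaluate.
    cum = np.concatenate(([0.0], np.cumsum(accel_g) * dt))
    best = 0.0
    for i in range(n):
        for j in range(i + 1, min(i + w, n) + 1):
            T = (j - i) * dt
            avg = (cum[j] - cum[i]) / T
            best = max(best, T * avg ** 2.5)
    return best

def gadd_si(accel_g, dt, threshold_g=10.0):
    """NOCSAE-adapted Gadd severity index: integrate a^2.5 above 10 g."""
    a = np.where(accel_g > threshold_g, accel_g, 0.0)
    return np.trapz(a ** 2.5, dx=dt)

# Synthetic half-sine impact pulse: 100 g peak, 10 ms duration, 20 kHz.
dt = 1 / 20000
t = np.arange(0, 0.03, dt)
accel = 100 * np.sin(np.pi * t / 0.010) * (t < 0.010)

print(f"HIC:  {hic(accel, dt):.0f}")
print(f"Gadd: {gadd_si(accel, dt):.0f} SI")
```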
All drop tests were performed on a NOCSAE-approved twin-wire-guided carriage assembly with a NOCSAE-approved headform instrumented with a 3-2-2-2 head accelerometer array. 33 All acceleration data were collected from the tri-axial accelerometer at the headform's center of mass at a sampling rate of 20 kHz. Additionally, the drop tower and accelerometer array were properly calibrated per NOCSAE standards prior to testing. STAR Testing The Virginia Tech STAR testing procedure 37 attempts to recreate impacts that represent the hits that an average player experiences during a season of play. The test is conducted by dropping the helmet on four locations (front, rear, right side, and top) from five different heights (12, 24, 36, 48, and 60 inches). All tests were performed with increasing heights at each location, starting with the front, followed by the rear, side, and top, for a total of 20 tests. 37 The voltage response from the NCF sensors, as well as the acceleration data, was recorded for each drop test. Both data sets were recorded with separate acquisition systems, which required synchronization afterwards. Data Analysis All data were collected and stored on an individual-impact basis, with both the acceleration and NCF data maintaining the same naming convention for later correlation. The impact velocity, impact energy, and severity index for each impact were calculated from the drop height, weight, and resultant acceleration traces, respectively. As all tests were performed on a drop tower, the helmet experienced an initial impact with subsequent bounces. All NCF data were trimmed to 120 ms to account for the entire compression and recovery of the foam during the initial impact. The acceleration was limited to 30 ms, as the response only accounts for the impact and does not have a recovery time. The NCF signal was recorded for a longer duration to measure the entire response: initial impact and recoil. An example response from all NCF sensors during a typical 60-inch drop test is shown in Fig. 5 . The voltage response can be separated into different portions of interest. The initial spike occurs when the headform compresses or releases the padding inside the helmet upon impact, and the subsequent spikes occur when the headform recoils. The remaining positive and negative spikes occur as the headform continues to recoil in the helmet before coming to rest. Figure 5 shows the rear sensors, which are initially compressed, exhibiting a positive voltage response while the front sensors, which are initially decompressed, exhibit a negative voltage response. All 3D acceleration data were post-processed by Virginia Tech to filter out noise, remove subsequent bounces, and convert to a resultant acceleration. Figure 6 shows a typical resultant acceleration trace from a 60-inch impact. Figure 5 Sample voltage response from all NCF sensors to a 60-inch rear helmet drop test. This signal shows a positive response from all the rear sensors while the front sensors show an opposite response. Full size image Figure 6 Sample resultant acceleration trace from the tri-axial accelerometer in the testing headform. Full size image The NCF sensor data, sampled at 1652 Hz, were filtered with a 5th-order Butterworth low-pass filter with a cutoff frequency of 200 Hz. The cutoff frequency was selected by inspecting the FFT of the signal surrounding the peaks.
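The NCF filtering step just described can be reproduced in outline with scipy. This is our reconstruction, not the authors' LabVIEW/post-processing code; whether the original pipeline applied the filter forward-backward (zero-phase) is an assumption, and the impact waveform below is synthetic.

```python
import numpy as np
from scipy.signal import butter, filtfilt

fs = 1652.0                       # NCF sampling rate (Hz), from the paper

# 5th-order low-pass Butterworth with a 200 Hz cutoff. filtfilt applies it
# forward and backward (zero-phase) so peak timing is not shifted; the
# original pipeline may have differed on this point.
b, a = butter(N=5, Wn=200.0, btype="low", fs=fs)

# Synthetic NCF impact response: a decaying pulse plus wire-borne noise.
t = np.arange(0, 0.12, 1 / fs)
raw = np.exp(-((t - 0.02) / 0.005) ** 2) + 0.05 * np.random.randn(t.size)
clean = filtfilt(b, a, raw)

print(f"peak before/after filtering: {raw.max():.3f} / {clean.max():.3f}")
```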
Additionally, the long wires used in the test setup introduced some higher-frequency noise during impacts, which was filtered out by the selected cutoff frequency. The headform acceleration data, sampled at 20 kHz, were filtered with a 2nd-order phaseless (zero-phase) Butterworth low-pass filter with a cutoff frequency of 1650 Hz, per the SAE J211 specification. The 3D acceleration data were then converted to a resultant acceleration, which was used for all calculations. Results This study evaluated the accuracy of a helmet instrumented with eight separate NCF sensors at measuring the magnitude of impact. The helmet was tested with 20 separate drops following the STAR testing procedure. The helmet was dropped on four locations from five different heights. The NCF signal was correlated to the standard measures of impact, and each is evaluated below. Statistical Analysis Multiple regression was performed, showing that NCF sensors can be used to predict the impact severity measures of interest, including the severity index (SI), head injury criterion (HIC), maximum acceleration (MA), and impact energy (IE). Other studies that have evaluated impact severity have measured impact forces; 41 however, due to the interdependency of acceleration and force, this paper focuses on the acceleration-based measures. After examining many characteristics of the NCF signal (voltage integral, FFT frequencies, distance between peaks, etc.), it was found that the peak NCF response was both the most significant and the easiest to extract. We considered models using the measured peaks for the NCF sensors located in the front (F), left (L), back left (BL), back (B), back right (BR), right (R), top front (TF), and top (T) of the helmet. For each of the impact severity measures, the squared multiple correlation coefficient R 2 based on all 8 predictors is between 0.91 and 0.94. However, to minimize the potential of overfitting the data, we consider the predictive ability for subsets of the predictors. Subsets of predictors and the R 2 for predicting each impact severity measure are given in Table 1 . Note that because our training data include drops on the right side of the helmet, we tend to include more of the NCF sensor peaks from the right side of the helmet. The model with the best fit, as determined by R 2 and Root Mean Squared Error (RMSE), is the model with five sensors (F, B, BR, R and TF). The best subset of three sensors includes the BR, R, and TF sensors, with R 2 values between 0.84 and 0.87. However, using a more geographically balanced set of three sensors (L, R, and TF) still yields R 2 values between 0.79 and 0.88, with the severity index being the only measure with substantially diminished predictability. Note that even using a model with only two predictors yields R 2 values between 0.76 and 0.87. Thus, we have compelling evidence for the relationship between the NCF sensors and several measures of impact severity. Table 1 R2 values and RMSE, in parentheses, for predicting each of the different impact severity measures. Full size table Many of the previous or existing impact systems referenced in this paper determine the direction of the impact in addition to the impact severity measures. Some models use impact location as an input to their models for added accuracy. During testing, the NCF-equipped helmet was dropped on four locations: the front, right, back, and top. Discriminant analyses were performed using the peak voltage values from all eight NCF sensors.
Two different methods were used: k -nearest neighbors (KNN) with k = 5 and linear discriminant analysis (LDA). Using leave-one-out cross-validation, misclassification rates of 0% and 5% were obtained with KNN and LDA, respectively. A confusion matrix demonstrates the fit of a prediction by showing all the predicted locations versus the actual locations. A perfect model contains values only along the diagonal of the table; entries off the diagonal represent incorrect predictions. The confusion matrix from the k -nearest neighbors model contains values only on the diagonal. The discriminant analysis confusion matrix is shown in Table 2 , with 1 out of 20 locations incorrectly predicted (shown in red). The model predicted a front impact once when it was a back impact. Predicting the opposite side of impact can be explained by the headform compressing the NCF sensors on the side of impact and then recoiling to the opposite side, resulting in a measure on both sides of the helmet. This analysis only evaluated peak NCF response, independent of time. It is expected that future analyses or algorithms would account for time differences between peaks, further increasing the accuracy. Table 2 Confusion matrix demonstrating predicted impact location vs true impact location. Full size table Discussion The purpose of this study was to evaluate the use of NCF sensors in a football helmet for measuring the location and quantifying the severity of impacts. A study by Guskiewicz et al. highlights the discrepancies between many of the acceleration-based determinations of concussions. 19 Furthermore, many different acceleration thresholds have been proposed that do not necessarily correlate with actual head injuries. This paper proposes a new method of quantifying the severity of impacts while also reporting the standard measures used in the field. The NCF sensors were effective in determining both the location and severity of impacts, correlating well with the measurements taken by the accelerometer inside the testing headform. A total of 20 drop tests were performed using the STAR testing method, impacting the four sides of the helmet. Predictions of impact severity, maximum acceleration, impact energy, impact velocity, and impact location all achieved an R 2 of 0.90 or better. This overall accuracy is considerably higher than several existing consumer products and provides evidence that NCF sensors are a viable solution for real-time impact measurement in helmets. Helmet manufacturers would simply place several NCF sensors in lieu of standard foam and measure their response with a microcontroller. As the NCF is self-powered, the microcontroller system would require little power to monitor helmet activity. The standard measurement systems on the market today directly measure the acceleration of the helmet through accelerometers and then use that to calculate the severity indexes and the maximum acceleration of the player's head. The measure of acceleration can be erroneous when the helmet, mouth guard, etc., are dropped or otherwise removed from the player during play. Furthermore, helmet-based accelerometer systems have been shown to measure different accelerations from what the head actually experiences. Some studies have shown that improper helmet fit can reduce accuracy by more than 15%. 27 The NCF sensors measure impact when they are compressed, which could result in fewer false impact measures and higher accuracy than competing acceleration-based systems.
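As a concrete illustration of the location-classification analysis described above: with the eight peak voltages per drop as features and the drop location as the label, the KNN and LDA evaluation with leave-one-out cross-validation looks roughly as follows. The feature layout and synthetic data are assumptions; only the sample count, k = 5, and the validation scheme come from the paper.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import LeaveOneOut, cross_val_predict
from sklearn.metrics import confusion_matrix

rng = np.random.default_rng(3)

# Stand-in data: 20 drops x 8 NCF peak voltages; class = impact location.
# Each location is assumed to excite a pair of "its" sensors more strongly
# (a hypothetical sensor layout, plus noise).
locations = np.repeat([0, 1, 2, 3], 5)          # front, right, back, top
X = rng.normal(0.0, 0.3, size=(20, 8))
for i, loc in enumerate(locations):
    X[i, 2 * loc: 2 * loc + 2] += 1.0

for name, clf in [("KNN (k=5)", KNeighborsClassifier(n_neighbors=5)),
                  ("LDA", LinearDiscriminantAnalysis())]:
    pred = cross_val_predict(clf, X, locations, cv=LeaveOneOut())
    miss = np.mean(pred != locations)
    print(name, f"misclassification: {100 * miss:.0f}%")
    print(confusion_matrix(locations, pred))
```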
Ultimately, accelerometer and gyroscope systems could be combined into the electronics that measure the NCF sensors to create redundancy and add new measurements to the helmet-based impact system. In addition to correlating well with acceleration-based metrics of head impacts, the NCF sensor response relates directly to the interactions between helmet shell and head, potentially providing a truer indication of the impact experienced by the head. Some of the first models created to predict concussions were based on linear acceleration alone. Subsequent models combined multiple measures of acceleration, thereby increasing the accuracy of the concussion model. Future concussion models could include measures of impact energy and velocity to further increase accuracy. This NCF-based sensor system proved to be accurate in measuring standard impact metrics (e.g. peak acceleration, Gadd severity index and HIC) while also providing additional details (e.g. impact velocity and impact energy) that were previously difficult to obtain. The NCF sensors can measure maximum acceleration, impact velocity, impact energy, severity index, and impact location with 90% or better accuracy, with a foam product similar to that which is already designed into all football helmets. New manufacturing methods have been developed since this study, which reduce the difficulty of manufacturing the NCF. These newer methods increase the consistency between sensors while also providing a sheet-foam product of the kind commonly used in helmets. Future work will include the use of these newer NCF sensors and live testing. It is expected that with different head shapes, helmet sizes and impact scenarios, more than just the NCF peak will be used to create more complex and accurate models.
Most football fans have seen players get hit so hard they can barely walk back to the sideline. All too often, those players are back on the field just a few plays later, despite suffering what appears to be a head injury. While football-related concussions have been top of mind in recent years, people have struggled to create technology to accurately measure them in real time. Enter BYU mechanical engineering Ph.D. student, Jake Merrell, and a team of researchers across three BYU departments. Merrell and others have developed and tested a nano composite smartfoam that can be placed inside a football helmet (and pads) to more accurately test the impact and power of hits. The foam measures the impact of a hit via electrical signals. The data is collected in real time and sent wirelessly to the tablet or device of a coach or trainer on the sidelines. A coach can know within seconds how hard a player has been hit and whether or not they should be concerned about a concussion. "The standard measurement systems on the market today directly measure the acceleration, but just measuring the acceleration is not enough and can even be erroneous," Merrell said. "Our XOnano smartfoam sensors measure much more than just acceleration, which we see as a vital key to better diagnose head injuries." The foam, which replaces the standard helmet foam, measures a composite of acceleration, impact energy and impact velocity to determine impact severity and location of impact, all with 90 percent accuracy, according to research published by Merrell in the Annals of Biomedical Engineering. To date, no one—not even the NFL—has been able to successfully measure the impact energy and velocity of a collision, which are two data points necessary to accurately measure whether a player is at risk of a concussion or not. Football shoulder pads with smartfoam from BYU. Credit: BYU Here's how the BYU smartfoam works: When the foam is compressed, nickel nano-particles rub against the foam, creating static electric charge, similar to when you rub a balloon against your hair. That charge is then collected through a conductive electrode in the foam, measured by a microcomputer, and transmitted to a computer or smart device. A hard hit spikes the voltage, while small impacts result in a reduced spike in voltage. Merrell is excited for the future of his smart foam technology as companies incorporate it into their products. Merrell and Xenith created shoulder pads with the impact sensing technology, and a company producing taekwondo vests has also started using the smartfoam to score fights and train athletes. Merrell worked with researchers in the mechanical engineering, exercise science and statistics departments at BYU on the nano composite foam. Mechanical engineering professors David Fullwood and Anton Bowden, exercise science professor Matthew Seeley, and statistics professor William Christensen were all coauthors on the study.
10.1007/s10439-017-1910-9
Medicine
Intelligent brains take longer to solve difficult problems, shows simulation study
Michael Schirner et al., Learning how network structure shapes decision-making for bio-inspired computing, Nature Communications (2023). DOI: 10.1038/s41467-023-38626-y Journal information: Nature Communications
https://dx.doi.org/10.1038/s41467-023-38626-y
https://medicalxpress.com/news/2023-06-intelligent-brains-longer-difficult-problems.html
Abstract To better understand how network structure shapes intelligent behavior, we developed a learning algorithm that we used to build personalized brain network models for 650 Human Connectome Project participants. We found that participants with higher intelligence scores took more time to solve difficult problems, and that slower solvers had higher average functional connectivity. With simulations we identified a mechanistic link between functional connectivity, intelligence, processing speed and brain synchrony, by which accuracy is traded for speed as a function of excitation-inhibition balance. Reduced synchrony led decision-making circuits to quickly jump to conclusions, while higher synchrony allowed for better integration of evidence and more robust working memory. Strict tests were applied to ensure reproducibility and generality of the obtained results. Here, we identify links between brain structure and function that make it possible to learn connectome topology from noninvasive recordings and map it to inter-individual differences in behavior, suggesting broad utility for research and clinical applications. Introduction Do intelligent people think faster? Strong correlations between reaction times and intellectual performance support this idea, providing a cornerstone for intelligence research for over a century 1 , 2 , 3 , 4 , 5 , 6 . Here, we show an important exception in empirical data and provide an explanation based on brain simulation (Supplementary Movie 1 ). Participants with higher intelligence were only faster when the test was simple. Conversely, in hard tests that required problem solving over several seconds or minutes without a time limit, participants with higher intelligence used more, not less, time to arrive at correct solutions. We reproduced this link between reaction time and performance in personalized multi-scale brain network models 7 , 8 (BNMs) that couple each participant's structural white-matter connectivity (SC) with a generic neural circuit for decision-making (DM) and working memory (WM). Simulation results indicate that decision-making speed is traded with accuracy, resembling influential theories from the fields of economics and psychology on fast and slow thinking 9 . Intelligence is here defined as performance in psychometric tests in cognitive domains like verbal comprehension, perceptual reasoning or working memory. A consistent finding is that individuals who perform well in one domain tend to perform well in the others, which led to the derivation of a general factor of intelligence called the g -factor 10 . While the g -factor also targets learned skills like verbal fluency, the term fluid intelligence (FI) refers to abilities related to solving new problems independently of acquired knowledge 11 . Reaction time (RT), as a measure of cognitive processing speed (PS), provides strong evidence in support of the idea that people are more intelligent because they have faster brains 2 . A meta-analysis of 172 studies and 53,542 participants reported strong negative correlations between general intelligence and diverse measures of RT 6 . RT and intelligence are also linked over the lifespan: RT increases with age and is strongly correlated with decline in other domains 5 , 12 . Intriguingly, RT is a more powerful predictor of death than well-known risk factors like hypertension, obesity, or resting heart rate: RT is the second most important predictor of death after smoking 13 and explains two-thirds of the relationship between general intelligence and death 14 .
After adjusting for smoking, education, and social class, RT was an even stronger predictor of death than intelligence. However, these results do not imply that PS is the causal factor underlying intelligence: an important counterargument is that training and improving PS does not transfer to untrained measures 15 . We found that participants with higher intelligence were only quicker when responding to simple questions, while they took more time to solve hard questions. This became apparent in the Penn Matrix Reasoning Test (PMAT), which consists of a series of increasingly difficult pattern-matching tasks for quantifying FI 11 . While PS tests are typically so simple that people would not make any errors if given enough time, FI tests like PMAT can be unsolvable even without a time limit. PMAT requires inferring the hidden rules that govern the figure, which involves a recursive decomposition of complex problems into easier subproblems, forming a hierarchy of DM processes 11 . Solving the problem requires making decisions about tentative solution paths while storing previous progress in WM. Sub-problems higher up in the hierarchy need to be held longer in WM, as evidence from lower in the hierarchy needs to be integrated later in time 11 . Therefore, decisions on higher-level problems must be deferred until evidence from sub-problems has been integrated, to avoid prematurely jumping to a conclusion. This form of cognition can be contrasted with the flexibility required by PS tests, where it is actually advantageous if decisions do not rely on extensive accumulation of evidence and memories can be flexibly overwritten. Here, by closely fitting brain models to each subject's functional connectivity (FC), we identify a fast mode of cognition for rapid decision-making and flexible working memory and contrast it with a slow mode of cognition that supports prolonged integration of information and more stable working memory. Importantly, by identifying a smooth and monotonic relationship between structural and functional neural network architecture, it was possible to devise a network-fitting algorithm that simultaneously and precisely controls the state of synchronization between every pair of network nodes, tuning each connection from full antisynchronization to full synchronization and enabling a close reproduction of whole-brain, subject-specific FC. In the following, we first provide behavioral findings that link intelligence test results with processing speed and FC (Fig. 1 and Table 1 ). Then we demonstrate a computational framework for closely fitting BNMs to personal FC (Figs. 2 and 3 ), and subsequently explain the empirical data based on the biological candidate mechanisms identified in silico (Figs. 4 – 6 and Supplementary Figures). For the fitting, we created a parameter learning algorithm that makes use of our observation that the FC and synchronization between two simulated brain areas can be smoothly and monotonically tuned via their long-range excitation-inhibition balance (E/I-ratio). We then show that the internal dynamics of the fitted models correlated with the empirical cognitive performance of the subjects (Fig. 4a, b ). In addition, E/I-balance modulated the amplitude and synchrony of large-scale synaptic currents in a way that modulated DM winner-take-all races and WM persistent activity in accordance with the empirical observations (Figs. 5 and 6 and Supplementary Fig. 4 ).
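Before continuing, the key fitting ingredient just described, a smooth and monotonic mapping from a connection's E/I-ratio to its FC, can be illustrated with a deliberately simplified toy: a linear two-node noise-driven (Ornstein-Uhlenbeck-like) system in which each node receives the other's activity through an excitatory and an inhibitory weight. This stands in for, and is much simpler than, the paper's mean-field equations; all parameter names, values and dynamics here are our assumptions.

```python
import numpy as np

def simulate_fc(ei_ratio, steps=200_000, dt=1e-3, sigma=0.3, seed=0):
    """Toy 2-node rate model: each node receives the other's activity
    through an excitatory weight w_e and an inhibitory weight w_i whose
    ratio (the toy 'E/I') is varied; returns the Pearson correlation of
    the two traces as a toy 'FC'."""
    rng = np.random.default_rng(seed)
    w_i = 0.5                      # fixed inhibitory cross-coupling
    w_e = ei_ratio * w_i           # excitatory cross-coupling
    x = np.zeros(2)
    trace = np.empty((steps, 2))
    noise = sigma * np.sqrt(dt) * rng.standard_normal((steps, 2))
    for t in range(steps):
        cross = x[::-1]                              # input from other node
        x = x + dt * ((w_e - w_i) * cross - x) + noise[t]  # coupling + leak
        trace[t] = x
    return np.corrcoef(trace.T)[0, 1]

for r in [0.2, 0.6, 1.0, 1.4, 1.8]:
    print(f"E/I ratio {r:.1f} -> toy FC {simulate_fc(r):+.2f}")
```

In this linear toy the stationary correlation equals the net cross-coupling 0.5(ratio − 1), so sweeping the ratio moves the toy "FC" smoothly from negative through zero to positive values, loosely mirroring the kind of tuning-curve behavior the paper reports in Fig. 3a.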
Phase-space analysis of the resulting model dynamics allowed the trade-off between speed and accuracy to be framed in terms of generic dynamical-systems behavior as a function of the E/I-balance of long-range brain network topology, which may jointly explain individual variability in FC, intelligence, and processing speed (Supplementary Figs. 5 and 6 and Supplementary Movie 1 ). Fig. 1: Correlations between intelligence, RTs and FC. a , b Group-average g -factor (30 groups, based on g -factor, N = 650 subjects) versus RT for correct responses in PMAT questions #1 (very easy, \(p=4.0\times {10}^{-6}\) ) and #24 (very hard, \(p=3.0\times {10}^{-6}\) ). c , d Group-average and subject-level correlations between g /PMAT24_A_CR and the RT for correct responses in each individual PMAT question. Subjects with higher g /PMAT24_A_CR were quicker to correctly answer easy questions, but they took more time to correctly answer hard questions (questions sorted according to increasing difficulty; sign of correlation flips at question #9). e Group-average g -factor versus mean FC (20 groups, based on g -factor, N = 650 subjects, \(p=0.13\) ). f Group-average PMAT24_A_RTCR versus mean FC (20 groups, based on PMAT24_A_RTCR, N = 650 subjects, \(p=6.9\times {10}^{-7}\) ). g , h Group-average (20 groups, based on PMAT24_A_RTCR) and subject-level correlations between mean FC and RT for correct responses in each PMAT question. Subjects that took more time to correctly answer test questions had a higher FC, independent of whether the question was easy or hard. P values of two-sided Pearson's correlation test: * p < 0.05, ** p < 0.01, *** p < 0.001; including only p values that remained significant after controlling for multiple comparisons using the Benjamini–Hochberg procedure with a False Discovery Rate of 0.1. Full size image Table 1 Correlation coefficients between intelligence, RT, and PS on an individual-subject level ( N = 1176) Full size table Fig. 2: Modeling outline. a 379-nodes large-scale BNMs were constructed from person-specific white matter connectomes estimated with dwMRI tractography. In addition, a simplified network with only two nodes (but identical node dynamics) was used to create E/I-ratio tuning curves (Fig. 4 ). b In previous BNM studies, long-range white matter coupling from excitatory to inhibitory populations was often absent. Adding these connections made it possible to tune the relative strength of long-range excitatory-to-excitatory versus long-range excitatory-to-inhibitory connections, enabling the E/I-ratio of synaptic inputs between each pair of BNM nodes to be tuned precisely. Importantly, setting the E/I-ratio allowed the FC between all nodes to be controlled monotonically and smoothly (Fig. 3a ). Underlying the predicted fMRI time series, the E/I-ratio smoothly tuned the synchronization and amplitude of synaptic currents (Fig. 4 ). c By systematically tuning E/I-ratios, the fit between simulated and empirical FC can be increased until full similarity (Fig. 3b, c ). d Upon fitting each participant's BNM with their empirical FC, each BNM was coupled with a smaller-scale frontoparietal circuit for simulating DM and WM. Subpopulations in prefrontal cortex (PFC) and posterior parietal cortex (PPC) are mutually and recurrently coupled to encode two decision options A and B. For example, evidence for option A recurrently excited the populations A PPC and A PFC (red connections) while inhibiting the populations B PPC and B PFC (blue connections).
Importantly, instead of independent noise, we used the activity of the PFC and PPC regions of the 379-node large-scale network to drive the DM circuit, which made it possible to analyze how local decision-making and working memory performance can be modulated by large-scale brain network topology. Panel a is adapted from ref. 77 and used under a CC BY 4.0 license. Fig. 3: Identification of a smooth, monotonic relationship between E/I-ratio and FC to fit brain network models. a Tuning curves for a reduced model with only two nodes, but otherwise identical to the 379-node BNM. FC (that is, correlation) between the two nodes increased smoothly and monotonically as a function of their E/I-ratio \(\frac{{w}_{{{{{\mathrm{1,2}}}}}}^{{LRE}}}{{w}_{{{{{\mathrm{1,2}}}}}}^{{FFI}}}\). The relationship between E/I-ratio and FC persisted when the strength of noise \(\sigma\) (upper panel; Eqs. 5 and 6) and the strength of structural coupling \({C}_{{ij}}\) (lower panel; Eqs. 1 and 2) were modulated for test purposes (both are fixed parameters during the fitting of the full 379-node model). b Fitting results for the full 379-node model for one exemplary FC. Empirical (upper triangular portion of the matrix) versus simulated (lower triangular portion of the matrix) FC and joint distributions without E/I-tuning (upper panel) and with E/I-tuning (lower panel). c Pearson correlations and root-mean-square errors between all N = 650 empirical and simulated FCs for three different model variants: EI-tuning (the tuning algorithm applied on both \({w}_{{ij}}^{{LRE}}\) and \({w}_{{ij}}^{{FFI}}\)), E-tuning (the tuning algorithm applied only on \({w}_{{ij}}^{{LRE}}\)), original (tuning of a scalar global coupling scaling factor to rescale \({C}_{{ij}}\)). Fig. 4: Model dynamics correlate with empirical cognitive performance. FC, synchrony, amplitude and variance of neural population activity depend on E/I-ratios. a PMAT24_A_RTCR versus strength of correlation of input currents in the full 379-node large-scale BNMs. The models of slower subjects had a higher synchrony between the time series of synaptic currents \({I}_{i}^{E}\) (\(p=9.8\times {10}^{-5}\)). b PMAT24_A_RTCR versus input amplitude. The models of slower subjects had a lower average synaptic current amplitude \({I}_{i}^{E}\) (\(p=8.6\times {10}^{-4}\)). c E/I-ratio versus parameter settings in the simplified two-node large-scale model. The E/I-ratio of a connection is defined by the quotient of long-range excitation \({w}_{{ij}}^{{LRE}}\) (black) and feedforward inhibition \({w}_{{ij}}^{{FFI}}\) (black). \({J}_{i}\) values (green) were obtained by FIC. d E/I-ratio versus FC for active (black) and inactive FIC (blue). A monotonic relationship between E/I-ratio and FC only emerged when FIC was active. e E/I-ratio versus correlation of input currents. f E/I-ratio versus input amplitude. With FIC, input amplitudes peaked at relatively low E/I-ratios and then continued to monotonically decrease for increasing E/I-ratios. g E/I-ratio versus input variance showed an inverse pattern compared to f. h Amplitude versus variance of inputs. FIC coupled the variance of synaptic inputs with the amplitude of synaptic inputs: the higher the variance (resulting from stronger coupling), the lower the amplitude. i, j Firing rate (Eq. 3) and input current (Eq. 1) time series after injecting 10-Hz sinusoidal waves with increasing variance for active (black) and inactive FIC (blue).
FIC compensated higher input variances (which were modulated by the fitting algorithm via the multiplicative coupling parameters w LRE and w FFI) with a lower mean (h). This was necessary as the upper half-wave of the input continued to grow in amplitude for increasing E/I-ratios, while the lower half-wave was bounded by 0 Hz firing (gray to black lines), which required FIC to increase \({J}_{i}\) to arrive at the same target average firing rate of 4 Hz. Data in panels c–h are presented as mean values +/− SD derived from N = 100 simulations with different random number generator seeds. Obtained p values of two-sided Pearson's correlation test: * p < 0.001; including only p values that remained significant after controlling for multiple comparisons using the Benjamini–Hochberg procedure with a False Discovery Rate of 0.1. Fig. 5: DM performance depends on amplitude and synchrony of input currents to the isolated frontoparietal DM circuit. Decreased amplitude of PFC and PPC noise and increased synchrony of PPC noise led to more correct decisions and longer integration time in the DM circuit. a Percent correct decisions for varying the mean amplitudes of the input noise time series to the PFC and PPC modules of the DM circuit \({I}_{{noise},i}^{{PFC}}\) and \({I}_{{noise},i}^{{PPC}}\). b Evidence integration times for varying mean amplitudes of the input noise time series \({I}_{{noise},i}^{{PFC}}\) and \({I}_{{noise},i}^{{PPC}}\). c Percent correct decisions for varying correlation coefficients between input noise time series \({I}_{{noise},i}^{{PFC}}\) and \({I}_{{noise},i}^{{PPC}}\). d Evidence integration times for varying correlation coefficients between input time series \({I}_{{noise},i}^{{PFC}}\) and \({I}_{{noise},i}^{{PPC}}\). Fig. 6: Multiscale modeling: coupling PFC and PPC nodes of the person-specific BNMs with the corresponding modules of the generic DM circuit. The models of subjects with higher PMAT24_A_CR (fluid intelligence) made fewer mistakes, but were slower, echoing the empirically observed trade-off. a Distribution of significant correlations between mean input of all BNM nodes and PMAT24_A_CR (p < 0.05 for 35 of 379 nodes), respectively PMAT24_A_RTCR (p < 0.05 for 26 of 379 nodes), over all N = 650 models. b, c Group-average PMAT24_A_CR versus DM performance (r = 0.77, \(p=7.2\times {10}^{-5}\)), respectively DM time (r = 0.69, \(p=7.2\times {10}^{-4}\)), for an exemplary combination of PFC and PPC nodes. Data are presented as mean values +/− SD over all N = 650 models, each simulated 100 times with different random number generator seeds. d Distribution of significant correlations between group-average PMAT24_A_CR and DM time (p < 0.05 for 57 of 90 possible combinations), respectively DM performance (p < 0.05 for 19 of 90 possible combinations), over all N = 650 models. Including only correlations that remained significant after controlling for multiple comparisons using the Benjamini–Hochberg procedure with a False Discovery Rate of 0.1. Results Higher intelligence: taking complex decisions slowly We analyzed correlations between g-factor, FI (PMAT24_A_CR), RT for correct responses in the FI test (PMAT24_A_RTCR), and processing speed for 1176 participants of the Human Connectome Project (HCP) Young Adult study (Table 1) 16. FI was measured by the number of correct responses in PMAT (PMAT24_A_CR) 11, 17.
Processing speed was measured by the NIH Toolbox tests Dimensional Change Card Sort 18 and Pattern Completion Processing Speed 19 (CardSort_Unadj and ProcSpeed_Unadj). For ease of reference, we use the same abbreviations for the cognitive tests as the HCP (Table 2). Table 2 Abbreviations of the used cognitive tests Reproducing established results 6, individuals with higher g and FI (PMAT24_A_CR) were faster in the simple processing speed tests. However, they needed more, not less, time (PMAT24_A_RTCR) to form correct decisions in the harder FI test (PMAT24_A_CR, Table 1). This observation is remarkable as it challenges the notion that higher intelligence is the result of a faster brain. The observation may however have a trivial explanation: PMAT questions are arranged in order of increasing difficulty and the test is discontinued if the participant makes five incorrect responses in a row. People with higher intelligence could have a higher RT simply because they advanced further into the more difficult questions. To exclude this explanation, we correlated intelligence with the RTs for each individual PMAT question, which reveals the impact of problem difficulty on RT: for the first eight questions participants with higher g and PMAT24_A_CR were faster to give correct answers, but slower for the remaining sixteen questions (Fig. 1a–d). Slow solvers have higher resting-state functional connectivity Next, we compared cognitive performance with mean FC (average correlation between all region-wise fMRI time series) in a subset of N = 650 participants with complete data and for whom no quality control issues were identified by the HCP consortium (see Methods). We selected mean FC for the subsequent analyses because it is a compact representation of whole-brain FC and, per our analysis, related to E/I-balance (Figs. 3 and 4). Mean FC had no significant correlation with g on the single-subject level (r = 0.02, p = 0.69) or the group level (Fig. 1e and Supplementary Fig. 1a). On the single-subject level there was a significant correlation between mean FC and PMAT24_A_RTCR (r = 0.13, p = 0.0012). Multiple regression to compute the coefficient of multiple correlation between all reported behavioral variables (g, PMAT24_A_CR, PMAT24_A_RTCR, ProcSpeed, CardSort) and mean FC yielded r = 0.16 (p < 0.001), which was only slightly higher than the univariate correlation between mean FC and PMAT24_A_RTCR. Importantly, independent of the complexity of the question, there were strong positive correlations between mean FC and the times to correctly answer each individual PMAT question (Fig. 1g, h): slower participants tended to have higher mean FC, regardless of whether the question was easy or hard, indicating that FC (or properties of the brain network underlying FC) could be related to the modulation of processing speed, which we studied with computational models below. Excitation-inhibition balance controls functional connectivity Which neurophysiological processes underlie the observed correlations between intelligence, RT, and FC? To study neuronal processing in silico we created BNMs for the 650 subjects using a tuning algorithm that fits each participant's simulated FC with their empirical FC (Figs. 2 and 3). The BNMs use coupled neural mass models to simulate the electric, synaptic, firing, and hemodynamic (fMRI) activity of a 379-node whole-brain network. Each node consists of one excitatory and one inhibitory population that mutually and recurrently interact.
To simulate long-range white matter coupling, the neural masses were connected according to each participant's SC, which was estimated by dwMRI tractography. Importantly, we added feedforward inhibition to increase biological realism 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30: while in previous BNM studies there was typically only long-range coupling between excitatory populations, here, excitatory masses additionally targeted inhibitory populations (Fig. 2b and "Methods"). In addition, the strength of local inhibitory feedback from the inhibitory to the excitatory population of each node was controlled by inhibitory synaptic plasticity 31, which was set to tune each excitatory population's long-term average firing rate to 4 Hz in a process called Feedback Inhibition Control (FIC) 32. By tuning the ratio of long-range excitation (LRE; strength of long-range excitatory-to-excitatory coupling \({w}_{{ij}}^{{LRE}}\), Eq. 1) to feedforward inhibition (FFI; strength of long-range excitatory-to-inhibitory coupling \({w}_{{ij}}^{{FFI}}\), Eq. 2) between each pair of brain regions it was possible to precisely control the synchrony, and thereby the functional connectivity, of the entire brain network (Fig. 3 and Supplementary Movie 1). Although many parameters are simultaneously tuned, which may raise concerns about overfitting, we show below that the fitting procedure robustly predicts the same model dynamics over different re-initializations and that the fitted models produce generalizable mechanistic insights and meaningfully comparable predictions over the subject cohort. While in previous models the values of \({w}_{{ij}}^{L{RE}}\) and \({w}_{{ij}}^{{FFI}}\) were implicitly set to the same scalar constant for every pair of brain regions, with this approach E/I-ratios can be set in a principled and data-driven way, in agreement with the direct relationship that we identified between E/I-ratios and FC: increasing E/I-ratios led to increasingly positive FC up to full synchronization; vice versa, decreasing E/I-ratios decreased the correlation between the simulated fMRI time series until full anti-synchronization (Fig. 3a). By simultaneously tuning the E/I-ratios of every connection to minimize the error between empirical and simulated FC, it was possible to considerably improve FC fits to a point where the simulated FCs of all 650 individual BNMs became almost indistinguishable from their empirical counterparts, explicitly reproducing even intricate and subtle patterns (Fig. 3b). In comparison to the original model (Fig. 3c, green curves), where E/I-ratios were left untuned at their default settings (\({w}_{{ij}}^{{LRE}}=1\) and \({w}_{{ij}}^{{FFI}}=0,\forall i,j\in \{1,\ldots,N\}\)) from Deco et al. 32, and compared to a variant where only \({w}_{{ij}}^{{LRE}}\) values were tuned (Fig. 3c, red curves), tuning both \({w}_{{ij}}^{{LRE}}\) and \({w}_{{ij}}^{{FFI}}\) at the same time allows the state of synchronization between each pair of brain regions to be set smoothly (Figs. 3b and 4d, e), which can be used to considerably reduce the root-mean-square error between simulated and empirical FC (Fig. 3c, blue curves). It is important to point out that E/I-ratio here refers only to the ratio of the long-range coupling strength parameters \(\tfrac{{w}_{{ij}}^{{LRE}}}{{w}_{{ij}}^{{FFI}}}\) without considering the effect of local inhibitory connectivity \({J}_{i}\).
Due to FIC, the E/I-ratio of the total sums of long-range and local currents that arrive at excitatory populations (\(\tfrac{{W}_{E}{I}_{0}+{w}_{+}{J}_{{NMDA}}{S}_{i}^{E}+{J}_{{NMDA}}\mathop{\sum}\nolimits_{j}{w}_{{ij}}^{{LRE}}{C}_{{ij}}{S}_{j}^{E}}{{J}_{i}{S}_{i}^{I}}\), Eq. 1) is always in a balanced state, which ensures an average firing rate of 4 Hz of the excitatory population even when long-range connections are unbalanced. In summary, the long-range E/I-ratios between network nodes control the direction (positive versus negative) and strength of their synchronization and FC; tuning these E/I-ratios enables simulation of person-specific empirical FCs with average correlations \(r \, > \, 0.97\). Simulated brain activity correlates with cognitive performance To identify processes relevant for intelligence, we correlated the dynamics of each subject's BNM with their PMAT24_A_RTCR. On the single-subject level we found only a weak negative correlation between PMAT24_A_RTCR and the mean amplitude of synaptic currents (r = −0.11, p = 0.0068) and a weak positive correlation with the mean correlation between synaptic currents (r = 0.13, p < 0.001). For the two processing speed measures CardSort_Unadj and ProcSpeed_Unadj no significant correlations were obtained on the single-subject level. On the group-average level, however, correlations with PMAT24_A_RTCR were large, showing that the models of slower subjects had on average a lower amplitude of synaptic currents, but a higher synchrony between synaptic currents (Fig. 4a, b and Supplementary Fig. 2). Importantly, synaptic currents had an almost linear relationship with FC on an individual-subject level (Supplementary Fig. 3), indicating that E/I-ratios also control amplitude and synchrony of synaptic currents, which possibly points towards brain network mechanisms for explaining the observed differences in cognition. To better isolate the involved mechanisms, we again studied a reduced version of the 379-node BNM with only two nodes. How E/I-ratios control FC To study how E/I-ratios modulate FC in isolation we tuned E/I-ratios from 0.01 to 100 in the two-node model. The two-node model is a simplified version of the 379-node large-scale brain model used to study the effect of large-scale E/I-balance with a simpler network structure (Fig. 4c–j). The two-node model (Eqs. 1–6) differed from the functional frontoparietal decision-making circuit 33 (DM circuit, Eqs. 7–10) introduced further below. The two-node model simulated mutual and recurrent interaction between one excitatory and one inhibitory population as in the 379-node large-scale model, but with a simpler network of only two nodes to produce tuning curves (Fig. 4c–h). In contrast, the DM circuit is an existing frontoparietal circuit model to simulate winner-take-all competition resulting from cross-inhibition of two excitatory populations via one inhibitory population, which we studied in isolation (Fig. 5), and after coupling with the 379-node large-scale model to form the multiscale model (Fig. 6). Dynamics of the two-node model were identical to the full 379-region model but with only two nodes \(i,j\) that had a mutual coupling strength of \({C}_{{ij}}={C}_{{ji}}=1\). To increase E/I-ratios we increased \({w}^{{LRE}}\) and decreased \({w}^{{FFI}}\) under the constraint \({w}^{{LRE}}+{w}^{{FFI}}=1\) to keep the total sum of inputs constant (Fig. 4c); a minimal simulation sketch of this sweep is given below.
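As an illustration of the sweep just described, the following is a minimal Python sketch (our illustration, not the authors' released simulation code) of a two-node dynamical mean field model with feedforward inhibition and FIC. The parameter values follow the conventions of the model by Deco et al. 32 but should be treated as illustrative assumptions, and for brevity FC is computed on the synaptic input currents rather than on Balloon-Windkessel BOLD signals.

# Two-node sketch: sweep the long-range E/I-ratio under w_LRE + w_FFI = 1
# and measure the resulting FC (correlation) between the two nodes.
import numpy as np

def phi(I, a, b, d):
    # F-I curve (cf. Eqs. 3 and 4): input current (nA) -> firing rate (Hz)
    x = a * I - b
    return np.where(np.abs(x) < 1e-9, 1.0 / d, x / (1.0 - np.exp(-d * x)))

def simulate_fc(w_lre, w_ffi, T=120.0, dt=1e-3, seed=0):
    rng = np.random.default_rng(seed)
    W_E, W_I, I0, w_plus, J_nmda = 1.0, 0.7, 0.382, 1.4, 0.15   # illustrative
    tau_E, tau_I, gam_E, gam_I, sigma = 0.1, 0.01, 0.641, 1.0, 0.01
    S_E, S_I, J = np.full(2, 0.16), np.full(2, 0.05), np.full(2, 1.0)
    n = int(T / dt)
    rec = np.empty((n, 2))
    for t in range(n):
        S_E_other = S_E[::-1]   # each node receives the other node's activity (C = 1)
        I_E = W_E*I0 + w_plus*J_nmda*S_E + J_nmda*w_lre*S_E_other - J*S_I
        I_I = W_I*I0 + J_nmda*S_E + J_nmda*w_ffi*S_E_other - S_I
        r_E = phi(I_E, 310.0, 125.0, 0.16)
        r_I = phi(I_I, 615.0, 177.0, 0.087)
        S_E += dt*(-S_E/tau_E + (1 - S_E)*gam_E*r_E) + sigma*np.sqrt(dt)*rng.standard_normal(2)
        S_I += dt*(-S_I/tau_I + gam_I*r_I) + sigma*np.sqrt(dt)*rng.standard_normal(2)
        np.clip(S_E, 0.0, 1.0, out=S_E)
        np.clip(S_I, 0.0, 1.0, out=S_I)
        if t % 720 == 0:        # FIC (cf. Eq. 13): nudge J towards the 4 Hz target
            J = np.maximum(J + 0.001 * r_I * (r_E - 4.0), 0.0)
        rec[t] = I_E            # record excitatory input currents
    half = n // 2               # discard the transient before correlating
    return np.corrcoef(rec[half:, 0], rec[half:, 1])[0, 1]

for ratio in [0.1, 0.5, 1.0, 2.0, 10.0]:
    w_lre = ratio / (1.0 + ratio)          # enforce w_LRE + w_FFI = 1
    print(f"E/I-ratio {ratio:5.1f}: FC = {simulate_fc(w_lre, 1.0 - w_lre):+.3f}")

With these assumed settings, low E/I-ratios push the two nodes towards anticorrelation and high E/I-ratios towards synchronization, mirroring the tuning curves of Figs. 3a and 4d.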
As before, FIC was used to tune average firing rates of the excitatory populations to a biologically plausible rate of 4 Hz 32. As before (Fig. 3a), increasing the E/I-ratio increased FC from a strong negative to a strong positive correlation (Fig. 4d). Underlying the simulated fMRI, the correlation between simulated synaptic inputs likewise increased monotonically from negative to positive (Fig. 4e). This monotonic relationship enabled fitting the models to empirical FC using a simple learning rule that increased or decreased E/I-ratios based on the strength of FC of each connection (Methods). Interestingly, the monotonic relationship only emerged when FIC was active (Fig. 4d, e, black curves). When FIC was disabled (Fig. 4d, e, blue curves), a complex nonlinear relationship between E/I-ratios and FC appeared, which would prevent fitting to empirical FC. That is, without FIC, increasing the E/I-ratio could either increase or decrease the FC, and vice versa, while with FIC, FC can be smoothly increased by increasing the E/I-ratio, and vice versa. These observations underline the importance of FIC: only when FIC was active did synaptic correlations increase and synaptic amplitudes decrease with increasing E/I-ratios and FC (Fig. 4d–f). Therefore, only with FIC was a concordant effect of amplitude and correlation on decision times and decision accuracy obtained that is in line with the empirical data. Supplementary section How E/I-ratios control synchrony and amplitude of synaptic currents describes the involved mechanisms in more detail. E/I-ratios switch between fast and accurate DM To better understand how E/I-ratios modulate DM and WM we used an existing 33 frontoparietal circuit model for winner-take-all DM and persistent-activity WM, referred to as the DM circuit in the following (see Supplementary section Studying DM and WM with a frontoparietal circuit model). In the DM circuit, NMDAergic and GABAergic synaptic dynamics of prefrontal cortex (PFC) and posterior parietal cortex (PPC) decision populations are explicitly modeled, while uncorrelated and independent noise from an Ornstein-Uhlenbeck process is used to simulate AMPA synapses 33. However, a more realistic assumption is that synaptic inputs are not uncorrelated, but that populations receive correlated inputs from shared presynaptic groups 34, 35, 36, 37, 38, 39, 40. Furthermore, inputs might not necessarily be fully balanced and centered at zero. Rather, our BNM simulations suggest that input amplitudes and correlations vary heterogeneously across brain areas and subjects and are strongly related to FC (Supplementary Fig. 3). Consequently, we systematically varied amplitude and correlation of AMPA noise inputs (a sketch of how such inputs can be generated follows this paragraph) and found that they switch the DM circuit between fast-but-faulty and precise-but-slow modes of DM (Fig. 5). Decreasing the mean amplitude of inputs increased decision accuracy as well as integration time (Fig. 5a, b). Similarly, increasing the correlation of input noise to the two PPC populations also led to increased performance and integration time (Fig. 5c, d). Integration times followed an inverted U-shape and were at their maxima for intermediate levels of noise correlation (\(r \sim 0.5\), Fig. 5d). In contrast, input correlation to the two PFC populations had no relevant effects (Fig. 5c, d).
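To make the manipulated input statistics concrete, here is a minimal Python sketch of correlated Ornstein-Uhlenbeck noise with a nonzero mean for the two populations of one DM-circuit module (cf. Eq. 10). The mixing of a shared and a private Gaussian source to impose a target correlation rho, as well as the specific values of sigma, tau and the mean amplitude, are illustrative assumptions rather than the authors' exact settings.

# Correlated OU noise: tau * dI/dt = -(I - mean_amp) + eta(t) * sqrt(tau * sigma^2)
import numpy as np

def ou_noise_pair(rho, mean_amp, sigma=0.009, tau=0.002, dt=1e-4, T=2.0, seed=0):
    rng = np.random.default_rng(seed)
    n = int(T / dt)
    I = np.full(2, mean_amp)                 # nA; state of the two noise processes
    out = np.empty((n, 2))
    for t in range(n):
        shared = rng.standard_normal()       # common "presynaptic" source
        private = rng.standard_normal(2)     # independent sources
        eta = np.sqrt(rho) * shared + np.sqrt(1.0 - rho) * private
        I += (dt / tau) * (mean_amp - I) + sigma * np.sqrt(dt / tau) * eta
        out[t] = I
    return out

x = ou_noise_pair(rho=0.5, mean_amp=0.32)
print("measured correlation:", np.corrcoef(x[:, 0], x[:, 1])[0, 1])

Sweeping mean_amp and rho in this way corresponds to the amplitude and correlation manipulations underlying Fig. 5.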
These results indicate that DM performance depends on synaptic inputs in line with our empirical data: participants with higher FC (corresponding to lower amplitudes, but higher input correlations in the model) were slower (Fig. 1g, h) but made fewer errors (Fig. 1c, d). They also corroborate the identified link between empirical PMAT24_A_RTCR and synaptic inputs in the BNM simulations, where higher input synchrony and lower input amplitudes correlated with longer PMAT24_A_RTCR (Fig. 4a, b and Supplementary Fig. 2). The underlying dynamic mechanisms are described in supplementary sections How input amplitude modulates DM performance and How input correlation modulates DM performance (see also Supplementary Figs. 4 and 5 and Supplementary Movie 1). E/I-ratios switch between stable and flexible WM We also tested the effect of input amplitude on WM in the DM circuit and created bifurcation diagrams that visualize dynamical regimes of the system as a function of net recurrent synaptic currents \({J}_{S}\) (recurrent excitation minus cross-inhibition) and stimulus strength \({I}_{{app}}\) (Supplementary Fig. 6). Memories were induced by a brief stimulus to one of the PPC populations, which created persistent activity in the memory-encoding population. At t = 1.5 s after the target stimulus, a distracting stimulus was applied to the other population to test the robustness of the memory-encoding persistent activity. The WM state was robust if the memory-encoding population maintained its persistent high firing activity, and it was fragile if the persistent firing was disrupted. Varying \({J}_{S}\) and \({I}_{{app}}\) parameters gave rise to three dynamical regimes in the bifurcation diagram: robust WM, disrupted WM, or no induction of WM at all (Supplementary Fig. 6). We found that the thresholds for WM induction and robustness shifted in dependence of input amplitude. Decreasing the input amplitude increased the thresholds for WM induction and disruption, which in turn required larger stimuli to induce or overwrite WM content (Supplementary Fig. 6). A decreased input amplitude therefore makes WM less flexible, which is again in line with our empirical observations: slower subjects had a higher FC (Fig. 1f–h and Supplementary Fig. 1), which was related to decreased input amplitude via BNM simulations (Fig. 4b and Supplementary Figs. 2a and 3a) and two-node model simulations (Fig. 4d, f). Conversely, higher input amplitude was related to lower thresholds for the induction and overwriting of working memories, which made WM more flexible to support simple but time-sensitive tasks. Coupling the DM circuit with the large-scale BNMs To predict DM performance of each individual, we coupled the DM circuit with each of the 650 BNMs, so that the PFC and PPC modules of the DM circuit were driven by large-scale PFC and PPC inputs instead of the independent noise that was used in the isolated circuit (replacing Eq. 7a from the original DM circuit model 33 by Eq. 7b). Correlations between PMAT24_A_RTCR, respectively PMAT24_A_CR, and the input amplitudes of the 379 BNM regions indicate that the amplitudes encode information about individual cognitive performance (Fig. 6a). For coupling we identified 9 PFC and 10 PPC atlas regions 41 that were activated during n-back task performance, which combines aspects of WM and DM (PFC: a9-46v, 9-46d, p9-46v, 8C, i6-8, s6-8, 8Av, SFL, and 8BM. PPC: AIP, LIPd, IP1, IP2, 7PL, 7Pm, 7Am, POS2, PFm, and PGs).
Simulation results predicted empirical performance for several of the 90 possible PPC-PFC combinations (Fig. 6b–d). Multiscale models of participants with higher FI (PMAT24_A_CR) also had a higher DM accuracy and needed more time to take the decisions, reproducing the empirical data. Model validation To test the robustness of the fitting procedure we ran it 1000 times with random initial conditions and noise generator seeds using the average SC and FC from all subjects. The minimum correlation between all 1000 simulated FCs was r = 0.9946 and their average correlation with the empirical FC was r = 0.9973, which shows that the procedure consistently led to a high fit. Next, we simulated one hour of fMRI with the 1000 fitted models, this time using the same noise. The average correlation between all resulting fMRI time series over all 379 brain regions was r = 0.9962, showing that the fitting led to consistent fMRI predictions although the obtained model parameters varied (average coefficients of variation CV LRE = 0.5 and CV FFI = 0.72). Although the repeated fitting runs did not converge to a unique parameter set, the simulated time series were nevertheless robustly reproduced as a general result of the fitting procedure. To test whether DM performance predictions can be robustly reproduced we divided all subjects into six groups according to PMAT24_A_RTCR and fitted each group 100 times, randomizing seeds and initial conditions as above. In all 100 tests mean amplitudes decreased and correlations increased from low to high PMAT24_A_RTCR (Supplementary Fig. 8; the Friedman test rejected the null hypothesis that distributions are equal with p = 0; post-hoc multiple comparison analysis using Nemenyi's test showed that the six groups were significantly different with p < 0.001 for all pairs), confirming that the identified link to empirical performance is a general result of the fitting procedure. To test whether there is a robust relationship and comparability between inferred synaptic inputs across the subject population, we trained regression models on one half of the cohort and applied them to the second half to estimate their generalizability, repeating this process 1000 times to obtain statistics over different random train and test groups. Predicting subject-wise mean FC from mean synaptic inputs yielded correlations of r = 0.67 ± 0.025 for the training sets and r = 0.66 ± 0.025 for the test sets. Next, using the mean inputs from the ten areas with highest correlation with mean FC as independent variables yielded a fit of r = 0.79 ± 0.018 with the training sets and r = 0.73 ± 0.055 with the test sets. Lastly, we computed regression models for every single FC connection (N = 71,631) using the mean input currents from the ten areas with highest correlation with the respective FC connection as independent variables. Over all connections, this yielded an average fit of r = 0.61 ± 0.1 for the training and r = 0.52 ± 0.13 for the test sets. The stability of prediction qualities in test versus train sets in the above tests indicates that the inferred properties are meaningfully comparable across the subject population. Discussion We propose that FC and synchrony between brain areas directly depend on the ratio of their mutual excitation and inhibition.
This theoretical observation yielded a parameter optimization algorithm that enabled fitting whole-brain simulated FCs to their empirical counterparts based on a Hebbian learning rule that implements homeostatic plasticity of excitation-inhibition balance in brain network models 42. The dynamics of the resulting N = 650 models were then linked with the subjects' empirical intelligence test scores and used to explain individual differences in cognitive performance. The research yields an implementation of multiscale brain network models that are able to perform decision-making tasks, both of which have recently been identified as crucial steps toward explaining the relationship between microscopic phenomena, large-scale brain function, and behavior, as well as toward generating brain digital twins for personalized medical interventions 43. The obtained insights held true independent of any parameter fitting in subsequent tests with isolated circuits. In addition, strict tests were employed to ensure the generality of the fitting procedure. Although we here focus on individual variability in DM and intelligence of healthy individuals, the insight that E/I-balance can be used to precisely set FC in brain models indicates a general-purpose method for inferring healthy and pathological neural mechanisms underlying functional brain networks. This is particularly relevant for clinical applications, as impaired E/I-balance has become a refined framework for understanding neurological diseases including autism spectrum disorders, schizophrenia, neurodegenerative diseases, and neuropsychiatric disorders 42, 44. It must be mentioned as a limitation that BNMs are high-dimensional models with thousands of parameters, and the identified mechanism may be one out of a potentially infinite number of mechanisms that could explain the observed data. As with any scientific hypothesis, it is therefore crucial to validate or falsify the theory with dedicated experiments. Since the used brain network model simulates detailed properties of neural systems like input currents, firing rates, synaptic activity, and fMRI, it is directly amenable to further validation or falsification with empirical data from different modalities. By integrating diverse empirical findings into a unifying computational framework that can be iteratively refined (or refuted), dynamic models provide an avenue out of the 'reproducibility crisis' 7. BNMs are limited in their resolution, as they are typically based on connectivity data obtained from non-invasive imaging techniques like MRI, and by the computational power available to simulate large networks. These problems are addressed with multiscale models where only some parts of the brain are simulated at a finer scale (for example, at the level of spiking neurons 45) while the remaining parts are simulated by a coarser network to save computational resources. In addition, by integrating connectivity and other microstructural information from finer-scale studies, for example, from invasive rodent studies 46 or post-mortem human atlases 47, it becomes possible to further constrain parameters and test the plausibility of simulation results.
In this regard, we note that the described relationship between E/I-balance and FC (respectively population synchronization) appears independent of the spatial and temporal scales of the network, and may also be used to tune finer-scale or coarser-scale networks, as it is based on generic dynamical primitives of neural mass action applicable to dynamics across spatial and temporal scales 48. Although BNMs employ abstractions, like all models, further advances may emerge precisely where the assumptions break down. For example, the used ensemble models capture neural population dynamics primarily when coherence is sufficiently weak that individual spikes can be ignored or when coherence is sufficiently strong that variance can be considered small, while scale-free dynamics with unbounded variance resist mean-field reductions and may require alternative ensemble methods 7, 8. Despite these limitations, BNMs, in contrast with artificial neural networks, are specifically designed to explain the underlying biology, using typically observed features of the empirical system as targets for validation and falsification (Supplementary Fig. 10) to achieve an incrementally improved computer model of the empirical system. In this work, we found that DM accuracy can be traded against DM speed depending on brain network configuration. Faster is therefore not necessarily better; what matters is rather the ability to switch between fast and deep modes of information processing, depending on the nature of the problem and the involved brain areas. The idea that decision-making speed is traded against accuracy is supported by numerous empirical findings in the fields of economics, ecology, psychology, and neuroscience 9, 49, 50, 51. Our modeling results now cast this idea in terms of neural network interaction: FC depends on E/I-ratios, E/I-ratios modulate synaptic inputs, which in turn modulate evidence integration in winner-take-all circuits. Decreased synaptic inputs prolong the time window for integration and make DM more dependent on the buildup of slowly reverberating activity between PFC and PPC regions, pointing to a general mechanism that gives higher-order populations top-down control. Slowing down the timescale may bring DM under conscious control, enabling DM to be modulated by attentional processes, which is supported by empirical results that associate top-down attention with amplification of PPC activity and increased correlation between PFC and PPC 52. This idea was formulated as the 'ignition' theory of conscious processing, stating that while most of the brain's early computations can be performed in a non-conscious mode, conscious perception is associated with long-distance integration of activity in frontoparietal circuits 52, 53. Importantly, the specific markers that contrast conscious from nonconscious processing overlap with those needed for DM slowing in our model. In the experimental literature conscious perception is systematically associated with surges of prefrontal activity followed by top-down parietal amplification: conscious access crucially depends on a sudden, late, all-or-none ignition of prefronto-parietal networks and subsequent amplification of sensory activity 54.
The most consistent correlate of conscious perception was a late (~300–500 ms) positive waveform in prefrontal regions that reactivated parietal regions along with increased long-distance synchronization in the frontoparietal network 54, which strongly resembles our model's behavior: in the slow DM mode, ramping of PFC was necessary to amplify activity in PPC, while in the fast DM mode, PPC ramping preceded the ramping of prefrontal cortex (Supplementary Figs. 4 and 5 and Supplementary Movie 1). Similarly, monkey recordings showed that WM content in PFC neurons was multiplexed with signals that reflected the subject's covert attention 55. Together with the observation that subjects' performance and WM load correlated with the degree of prefronto-parietal synchronization 56, 57, the conclusion can be drawn that these processes may reflect top-down prefrontal attentional mechanisms that modulate processing in posterior cortex. Likewise, these results also integrate with the parieto-frontal integration theory of intelligence, which roughly states that after basic processing in temporal and occipital lobes, sensory information is collected in parietal cortex, which then interacts with frontal regions to perform hypothesis tests on attempted solutions and select an optimal solution 58. Another relevant perspective is provided by the distinction between effortful and automatic cognition: while effortful tasks require synchronization of parietal regions with PFC, the synchronization suddenly drops as soon as subjects move into a routine mode of task execution 59, 60. Similarly, harder decisions required slow integration in the model's PFC-PPC network, while simpler decisions were quickly taken by the PPC module. A number of FC studies come to similar conclusions: FC became more integrated during challenging tasks and remained more segregated during simple tasks 61, 62, 63, 64, 65. Likewise, work on short-term synaptic plasticity suggests that FC is changed to form temporary task-relevant circuits, which comes with energetic and computational advantages 66, similar to the influential Communication through Coherence theory 67, which proposes phase synchronization as an essential and generic mechanism for controlling selective information flow in multiplexed brain networks. More generally, our study indicates that areas with higher FC may interact on a slower time scale than areas with lower FC. These different time scales could give rise to a hierarchical information processing cascade where intermediate results from faster processes are integrated by slower processes, which is reflected in the emerging view that cortex exhibits a timescale-based topography with integration windows increasing from sensory to association areas 68. As receptive windows are progressively enlarged along the hierarchy, DM integration is extended from local to long-range circuits integrating increasingly widespread information, which is supported by studies that show how slow (<0.1 Hz) power fluctuations reliably track the accumulation of complex sensory inputs in higher-order regions 69. In summary, in the present work we identified a monotonic and smooth relationship between the structural and the functional architecture of neural networks: by tuning the E/I-ratio it became possible to precisely and simultaneously tune the FC between any pair of network nodes to the desired target configuration, from full antisynchronization to full synchronization.
We believe this is important, as the link between FC and structural brain architecture is often described as unclear and many research streams aim at inferring structural network topology 70, 71. We therefore expect that the described smooth and monotonic link between network architecture and FC, and the derived learning rule, will be useful to better understand and infer structural network mechanisms underlying healthy and pathological cognition 72, 73. Methods Large-scale brain network model The large-scale BNM used here simulates brain activity based on the network interaction of population models that represent brain areas. Each brain area is simulated by coupled excitatory and inhibitory population models based on the dynamical mean field model, which was derived from a detailed spiking neuronal network model 32, 74, 75. Populations are connected by structural connectomes estimated from dwMRI data via fiber tractography. Here, we extended the model using two additional parameters \({w}_{{ij}}^{{LRE}}\) and \({w}_{{ij}}^{{FFI}}\) that allow the balancing of long-range excitatory and feedforward inhibitory synaptic currents. The model equations read as follows. $${I}_{i}^{E}={W}_{E}{I}_{0}+{w}_{+}{J}_{{NMDA}}{S}_{i}^{E}+{J}_{{NMDA}}\mathop{\sum}\limits_{j}{w}_{{ij}}^{{LRE}}{C}_{{ij}}{S}_{j}^{E}-{J}_{i}{S}_{i}^{I}$$ (1) $${I}_{i}^{I}={W}_{I}{I}_{0}+{J}_{{NMDA}}{S}_{i}^{E}+{J}_{{NMDA}}\mathop{\sum}\limits_{j}{w}_{{ij}}^{{FFI}}{C}_{{ij}}{S}_{j}^{E}-{S}_{i}^{I}$$ (2) $${r}_{i}^{E}=\frac{{a}_{E}{I}_{i}^{E}-{b}_{E}}{1-\exp (-{d}_{E}({a}_{E}{I}_{i}^{E}-{b}_{E}))}$$ (3) $${r}_{i}^{I}=\frac{{a}_{I}{I}_{i}^{I}-{b}_{I}}{1-\exp (-{d}_{I}({a}_{I}{I}_{i}^{I}-{b}_{I}))}$$ (4) $$\frac{d{S}_{i}^{E}(t)}{{dt}}=-\frac{{S}_{i}^{E}}{{\tau }_{E}}+(1-{S}_{i}^{E}){\gamma }_{E}{r}_{i}^{E}+\sigma {\upsilon }_{i}(t)$$ (5) $$\frac{d{S}_{i}^{I}(t)}{{dt}}=-\frac{{S}_{i}^{I}}{{\tau }_{I}}+{\gamma }_{I}{r}_{i}^{I}+\sigma {\upsilon }_{i}(t)$$ (6) \({r}_{i}^{(E,I)}\) denotes the population firing rate of the excitatory (\(E\)) and inhibitory (\(I\)) population of brain area \(i\). \({S}_{i}^{(E,I)}\) identifies the average excitatory, respectively inhibitory, synaptic gating activity of each brain area. The sum of all input currents to each area is denoted by \({I}_{i}^{(E,I)}\) (units nA). \({W}_{(E,I)}{I}_{0}\) are the overall effective external input currents to the excitatory, respectively inhibitory, populations, and \({w}_{+}\) is the local excitatory recurrence. \({J}_{{NMDA}}\) and \({J}_{i}\) are parameters that quantify the strengths of excitatory synaptic coupling and local feedback inhibitory synaptic coupling, respectively. Feedback inhibition control using inhibitory synaptic plasticity modulates \({J}_{i}\) of each region such that the long-term average firing rate \({r}_{i}^{E}\) of the corresponding excitatory population is \(\sim 4\) Hz (see section Feedback Inhibition Control). We extended the original model by Deco et al. 32 by introducing the parameters \({w}_{{ij}}^{{LRE}}\) and \({w}_{{ij}}^{{FFI}}\), which are matrices with the same dimensions as the structural connectome \({C}_{{ij}}\) (regions × regions) that describe the strengths of long-range excitation and feedforward inhibition, respectively. Equations 3 and 4 are sigmoidal functions that convert input currents into firing rates. \({\tau }_{(E,I)}\) and \({\gamma }_{(E,I)}\) specify the time scales and rate of saturation of excitatory and inhibitory synaptic activity, respectively.
\({\upsilon }_{i}(t)\) is noise drawn from the standard normal distribution. Table 1 lists all state variables as well as parameters and their settings. BOLD activity was simulated by inputting excitatory synaptic activity \({S}_{i}^{E}\) into the Balloon-Windkessel hemodynamic model 76, which is a dynamical system that describes the transduction of neuronal activity into perfusion changes and the coupling of perfusion to the BOLD signal. The model is based on the assumption that the BOLD signal is a static non-linear function of the normalized total deoxyhemoglobin voxel content, normalized venous volume, resting net oxygen extraction fraction by the capillary bed, and resting blood volume fraction. Please refer to Deco et al. 74 for the specific set of Balloon–Windkessel model equations that we used in this study. Multi-scale brain network model To form the multiscale model, we connected the two-module functional WM/DM circuit 33 (the DM circuit) to the large-scale regions that simulate PPC and PFC. To connect the large-scale network with the mesoscopic network, we augmented the noise terms of the DM circuit network by large-scale BNM inputs to PPC and PFC. The equations of the DM circuit read as follows. $${I}_{i}^{n}=\mathop{\sum}\limits_{m,j}{S}_{j}^{m}{J}_{i,j}^{m\to n}+{I}_{0}+{I}_{{app},i}^{n}+{I}_{{noise},i}^{n}$$ (7a) $${I}_{i}^{n}=\mathop{\sum}\limits_{m,j}{S}_{j}^{m}{J}_{i,j}^{m\to n}+{I}_{0}+{I}_{{app},i}^{n}+{I}_{{BNM},i}^{n}$$ (7b) $$r(I)=\frac{{aI}-b}{1-\exp (-c({aI}-b))}$$ (8) $$\frac{d{S}_{i}^{n}}{{dt}}=-\frac{{S}_{i}^{n}}{\tau }+\gamma \left(1-{S}_{i}^{n}\right)r\left({I}_{i}^{n}\right)$$ (9) $${\tau }_{{AMPA}}\frac{d{I}_{{noise},i}^{n}(t)}{{dt}}=-{I}_{{noise},i}^{n}(t)+{\eta }_{i}(t)\sqrt{{\tau }_{{AMPA}}{\sigma }_{{noise}}^{2}}$$ (10) Parameter values and state variables have meanings corresponding to those in the equations for the large-scale models (see also Supplementary Table 2 for an overview of quantities and values). Equation 7a shows the original DM circuit model input equation with noise term \({I}_{{noise},i}^{n}\). To couple the DM circuit to the large-scale network, the noise term \({I}_{{noise},i}^{n}\) was replaced by the term \({I}_{{BNM},i}^{n}\) in Eq. 7b. The term adds the noise process from the DM circuit model (Eq. 10) to the large-scale BNM input to drive the DM circuit: $${I}_{{BNM},i}^{n}= ({b}_{{MJW}}-{a}_{{MJW}})[({w}_{+}{J}_{{NMDA}}{S}_{i}^{E}+{J}_{{NMDA}}\mathop{\sum}\limits_{j}{w}_{{ij}}^{{LRE}}{C}_{{ij}}{S}_{j}^{E}-{J}_{i}{S}_{i}^{I})\\ -{a}_{{BNM},i}]/({b}_{{BNM},i}-{a}_{{BNM},i})+{a}_{{MJW}}+{I}_{{noise},i}^{n}(t)$$ (11) Similar to Eq. 1, the input from the BNM to the DM circuit populations consists of the sum of local recurrent excitation, global network input, and local recurrent inhibition. For each region this input is range-normalized to bring the range of amplitudes from the 650 individual models into a range suitable for the DM circuit as identified in Fig. 4, with \({b}_{{MJW}}=0.001({nA})\) and \({a}_{{MJW}}=-0.006({nA})\). For each region, the amplitude range from the 10th percentile to the 90th percentile over the 650 BNMs was mapped into the range \([{a}_{{MJW}},{b}_{{MJW}}]\). Individual Ornstein–Uhlenbeck noise processes were then added to the resulting amplitude to create one variant of this input for each of the two nodes of one DM circuit module. Decision-making performance was computed as in the original publication of the DM circuit by Murray et al.
by modeling the strength of evidence as an external current to the two parietal populations A PPC and B PPC as follows: $${I}_{{app},i}^{n}={I}_{e}\left(1\pm \frac{c{\prime} }{100\%}\right)$$ (12) where \({I}_{e}=0.0118\) nA scales the overall strength of the input and \({c}^{{\prime} }=6.4\%\), referred to as the strength of evidence or contrast, determines which of the two populations A PPC or B PPC receives higher evidence (A PPC received the higher evidence), reflecting the saliency of the target with respect to that of distractors. As in Murray et al. 33, when one of the two action populations A PFC or B PFC reaches a firing-rate threshold of 40 Hz, the decision for option A or B is taken and a reaction time is registered. We repeated the decision-making task 1000 times in order to compute the percentage of times for which the decision was made correctly (number of times A PFC crossed the firing-rate threshold divided by the total number of trials) and the average time until the threshold was reached. Fitting algorithm The fitting algorithm is based on the observation (Fig. 2) that the correlation between the fMRI time series from two different model brain regions can be modulated by the relative strengths of long-range excitation versus feedforward inhibition. This ratio, as outlined in the section Large-scale brain network model, can be adjusted in our model by the parameters \({w}_{{ij}}^{{LRE}}\) and \({w}_{{ij}}^{{FFI}}\), which are multiplicative factors that re-scale the structural connectivity weights \({C}_{{ij}}\) between each pair of connected regions \(i\) and \(j\). \({w}_{{ij}}^{{LRE}}\) modulates the amount of excitation conveyed via long-range connections to distant excitatory populations (long-range excitation), while \({w}_{{ij}}^{{FFI}}\) modulates the amount of excitation provided via long-range connections to distant inhibitory populations, and their resulting feedforward inhibitory effect on the accompanying excitatory population (feedforward inhibition). The goal of the fitting algorithm is to adjust the weights \({w}_{{ij}}^{{LRE}}\) and \({w}_{{ij}}^{{FFI}}\) such that FC computed from simulated fMRI time series matches a target FC as closely as possible, that is, such that the difference between each entry \({\rho }_{{ij}}^{{trg}}\) of the target FC matrix \({{{{{{\boldsymbol{\rho }}}}}}}^{{{{{{\bf{trg}}}}}}}\) and \({\rho }_{{ij}}^{{sim}}\) of the simulated FC matrix \({{{{{{\boldsymbol{\rho }}}}}}}^{{{{{{\bf{s}}}}}}{{{{{\bf{im}}}}}}}\) is as small as possible. The basic idea of the fitting algorithm is to increase \({w}_{{ij}}^{{LRE}}\) and to decrease \({w}_{{ij}}^{{FFI}}\) if \({\rho }_{{ij}}^{{trg}} \, > \, {\rho }_{{ij}}^{{sim}}\) and, vice versa, to decrease \({w}_{{ij}}^{{LRE}}\) and to increase \({w}_{{ij}}^{{FFI}}\) if \({\rho }_{{ij}}^{{trg}} \, < \, {\rho }_{{ij}}^{{sim}}\). While the overall parameter optimization approach followed a standard gradient descent scheme, importantly, the gradients are based on the direct monotonic and smooth relationship that we identified between E/I-ratios and FC, respectively population synchronization (Fig. 4), creating a direct, biologically interpretable link between brain network structure (specifically the E/I-ratios between network nodes) and the emerging brain network dynamics when simulating the model. In pseudocode the algorithm can be written as follows.
Algorithm EI_tuning(\({{{{{{\boldsymbol{\rho }}}}}}}^{{{{{{\bf{trg}}}}}}}\), \({\eta }_{{EI}}\), M)
  Input:
    \({{{{{{\boldsymbol{\rho }}}}}}}^{{{{{{\bf{trg}}}}}}}\): (n × n) target FC matrix
    \({\eta }_{{EI}}\): scalar learning rate
    M: brain network model
  Returns:
    \({w}^{{LRE}}\), \({w}^{{FFI}}\): (n × n) matrices of long-range excitation and feedforward inhibition
  for fmri_time_step = 1 to simulation_length do
    simulate one fMRI time step using M
    compute simulated FC \({{{{{{\boldsymbol{\rho }}}}}}}^{{{{{{\bf{sim}}}}}}}\)
    for i = 1 to n do
      rmse_i = root-mean-square deviation between matrix rows i of \({{{{{{\boldsymbol{\rho }}}}}}}^{{{{{{\bf{trg}}}}}}}\) and \({{{{{{\boldsymbol{\rho }}}}}}}^{{{{{{\bf{sim}}}}}}}\)
      for j = 1 to number of connections of node i do
        diff_FC = \({\rho }_{{ij}}^{{trg}}\) − \({\rho }_{{ij}}^{{sim}}\)
        \({w}_{{ij}}^{{LRE}}\) = \({w}_{{ij}}^{{LRE}}\) + \({\eta }_{{EI}}\) × diff_FC × rmse_i
        \({w}_{{ij}}^{{FFI}}\) = \({w}_{{ij}}^{{FFI}}\) − \({\eta }_{{EI}}\) × diff_FC × rmse_i
        if \({w}_{{ij}}^{{LRE}}\le 0\) then \({w}_{{ij}}^{{LRE}}=0\)
        if \({w}_{{ij}}^{{FFI}}\le 0\) then \({w}_{{ij}}^{{FFI}}=0\)
  return \({w}^{{LRE}}\), \({w}^{{FFI}}\)
The algorithm iterates over all connections \((i,j)\) of the BNM and computes the difference between target and simulated FC for each connection. This difference is rescaled by the learning rate \({\eta }_{{EI}}\), which is gradually decreased over the course of the tuning. Furthermore, the difference is re-scaled by the root-mean-square deviation (RMSE) between the correlation coefficient values of region \(i\) with all remaining regions (i.e., the RMSE between rows \(i\) of matrices \({{{{{{\boldsymbol{\rho }}}}}}}^{{{{{{\bf{trg}}}}}}}\) and \({{{{{{\boldsymbol{\rho }}}}}}}^{{{{{{\bf{sim}}}}}}}\)), which can be compared to the temperature parameter in a simulated annealing heuristic. This factor serves to decrease the change in \({w}_{{ij}}^{{LRE}}\) and \({w}_{{ij}}^{{FFI}}\) as the fit of the row-wise FC increases and an optimum is approached. Furthermore, the factor differentially weights the changes in \({w}_{{ij}}^{{LRE}}\) (\({w}_{{ij}}^{{FFI}}\)) and \({w}_{{ji}}^{{LRE}}\) (\({w}_{{ji}}^{{FFI}}\)) so that the region (either \(i\) or \(j\)) that has a better fit at the current tuning iteration is changed less than the other one, since the change of connection strengths between one region pair affects the FC between all other region pairs. By decreasing the step size for the better-fitting region, we ensure that the respective parameters stay closer to the local optimum. In the present paper this heuristic is used as an online parameter tuning rule, which means that parameters are updated after each new BOLD fMRI time step is computed. We tested different values for the learning rate parameter \({\eta }_{{EI}}\) and devised a tuning workflow in which the parameter space is initially sampled with large steps (large learning rate) using FC based on a short time window. The tuning comprised six stages, each simulating 10 hours of biological time. The learning rate \({\eta }_{{EI}}\) was halved and the window size of simulated FC \({{{{{{\boldsymbol{\rho }}}}}}}^{{{{{{\bf{sim}}}}}}}\) was doubled in each stage, starting with a learning rate of \({\eta }_{{EI}}\) = 0.1 and a window size of 150 TRs.
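For concreteness, the following is a minimal Python transcription of this heuristic (a sketch, not the published code). The callable simulate_fmri_step, which is assumed here to advance model M by one TR and return one fMRI value per region, and the fixed FC window stand in for the staged workflow described above; FIC is assumed to run concurrently inside the model.

# Online E/I tuning: nudge w_LRE up and w_FFI down where simulated FC
# is too low relative to the target, and vice versa (see pseudocode above).
import numpy as np

def ei_tuning(rho_trg, eta_ei, simulate_fmri_step, n_steps, window=150):
    n = rho_trg.shape[0]
    w_lre = np.ones((n, n))    # default long-range excitation
    w_ffi = np.zeros((n, n))   # default feedforward inhibition
    frames = []                # sliding window of simulated fMRI frames
    for _ in range(n_steps):
        frames.append(simulate_fmri_step(w_lre, w_ffi))  # one TR of fMRI
        frames = frames[-window:]
        if len(frames) < window:
            continue           # wait until the FC window is filled
        rho_sim = np.corrcoef(np.asarray(frames).T)      # simulated FC
        diff = rho_trg - rho_sim
        # row-wise RMSE acts like an annealing temperature: regions that
        # already fit well are changed less than poorly fitting ones
        rmse = np.sqrt(np.mean(diff**2, axis=1))
        step = eta_ei * diff * rmse[:, None]
        w_lre = np.maximum(w_lre + step, 0.0)            # clip at zero
        w_ffi = np.maximum(w_ffi - step, 0.0)
    return w_lre, w_ffi

In the staged workflow described above, eta_ei would additionally be halved and window doubled at each of the six stages.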
The wall time for simulating one hour of biological activity on one standard CPU was roughly two hours, which led to a computational cost of 6 stages × 20 CPU hours per stage = 120 CPU hours to tune a single model and a total cost of 78,000 CPU hours to tune all 650 models. Fitting runs were executed in parallel on high-performance computers. The costs for running subsequent DM and WM experiments with the fitted and coupled multiscale models were negligible; these experiments were performed on a standard laptop, as only a few seconds of activity were needed to simulate one DM or WM experiment. Feedback inhibition control The firing rate of the large-scale neural masses (Eqs. 3 and 4) depends on synaptic input currents (Eqs. 1 and 2), which are, to a large degree, determined by the structural connectome \(C\), that is, large-scale inputs, and associated parameters (\({w}_{{ij}}^{{LRE}}\) and \({w}_{{ij}}^{{FFI}}\)). To compensate for excess or lack of excitation, which would result in implausible firing rates, a local regulation mechanism called feedback inhibition control (FIC) was used. The approach was previously successfully used to significantly improve FC prediction, to increase the dynamical repertoire of evoked activity, and to increase the accuracy of external stimulus encoding 32, 77. To implement FIC we used a learning rule for inhibitory synaptic plasticity that was shown to balance excitation and inhibition in sensory pathways and memory networks 31. The learning rule modulated all connection strengths from inhibitory to local excitatory populations once every 720 ms (corresponding to 1 fMRI repetition time) to achieve a target average firing rate of 4 Hz in excitatory populations. The learning rule can be summarized as $$\triangle w={\eta }_{{FIC}}({{{{{\rm{pre}}}}}}\times {{{{{\rm{post}}}}}}-{\rho }_{0}\times {{{{{\rm{pre}}}}}})$$ (13) where \(\triangle w\) denotes the change in synaptic strength, \({{{{{\rm{pre}}}}}}\) and \({{{{{\rm{post}}}}}}\) are the pre- and postsynaptic firing rates, \({\eta }_{{FIC}}=0.001\) is the learning rate and \({\rho }_{0}=4.0[{Hz}]\) is the target firing rate for the postsynaptic excitatory population. If the postsynaptic firing rate post is larger than the target firing rate \({\rho }_{0}\), the learning rule increases the inhibitory weight \(w\) to decrease the postsynaptic firing rate. Conversely, if the postsynaptic firing rate is lower than the target firing rate, the learning rule decreases the inhibitory weight. The change of the inhibitory weight is modulated by the presynaptic firing rate \({{{{{\rm{pre}}}}}}\): if presynaptic firing is large, then a higher weight change is needed to achieve the desired effect than when presynaptic firing is low. The learning rate \({\eta }_{{FIC}}\) was found by trial and error. Data and preprocessing We used the publicly available HCP Young Adult data release 16, which includes behavioral and 3 T MR imaging data from healthy adult participants (age range 22–35 years). Informed consent forms, including consent to share deidentified data, were collected for all subjects within the HCP and approved. Data collection was approved by a consortium of institutional review boards in the United States and Europe, led by Washington University (St Louis) and the University of Minnesota (WU-Minn HCP Consortium).
The experiments were performed in compliance with the relevant laws and institutional guidelines and were approved by the medical ethical committee of the Charité Medical Center in Berlin (EA4/184/20). All data were collected on a 3 T Siemens Skyra scanner with gradients customized for the HCP. We restricted our analysis to 650 subjects (360 female, 290 male, based on self-report during data collection by the HCP; no analyses regarding sex or gender were performed as the goal of this study was to elucidate mechanisms that are independent of sex or gender) for whom complete MRI data, including all four sessions of resting-state fMRI, structural MRI (T1w and T2w), and diffusion-weighted MRI (dwMRI), as well as the behavioral measures PMAT24_A, CardSort, ProcSpeed and Flanker were available, and who were not identified with quality issues by the HCP. The HCP publishes lists of subjects for whom quality control issues were identified, which involved 151 subjects at the time of writing. Furthermore, we identified one additional subject whose number of absent connections was more than four standard deviations away from the mean over all subjects. Resting-state fMRI data were acquired in four separate 15-min runs on two different days (two per day) with a 2-mm isotropic spatial resolution (FOV: 208 mm × 180 mm, matrix: 104 × 90 with 72 slices covering the entire brain) and a 0.73-s temporal resolution. For correction of EPI distortions, two additional spin echo EPI images with reversed phase encoding directions were acquired. dwMRI had a resolution of 1.25 mm isotropic, 128 diffusion gradient directions, and multiple q-space shells with diffusion weightings of b = 1000 s/mm2, b = 2000 s/mm2 and b = 3000 s/mm2. For correction of EPI and eddy-current-induced distortions, two phase-encoding-direction-reversed images were acquired for each diffusion direction. From HCP, we downloaded preprocessed fMRI, structural MRI and dwMRI data that underwent HCP's preprocessing pipelines, which combine tools from FSL, FreeSurfer and the HCP Connectome Workbench to perform distortion correction and alignment across modalities 78. For high-resolution (0.7-mm isotropic) T1-weighted and T2-weighted MR scans, HCP pipelines corrected for distortions using a B0 field map and then linearly registered the anatomy with a common MNI template. Individual surface registration was achieved by combining cortical surface features and a multimodal surface matching algorithm 79. fMRI pipelines include distortion correction, motion correction, registration of fMRI data to structural data, reduction of the bias field, normalization to a global mean, brain masking, re-sampling of fMRI time series from the volume into the grayordinates standard space, and denoising using FSL's ICA-FIX method. Corrected time series were then sampled into HCP's 91,282 standard grayordinates (CIFTI) space, which is a combined representation of a cortical surface triangulation (32k vertices per hemisphere) and a standard 2 mm subcortical segmentation 78. We parcellated grayordinate fMRI time series using HCP's multimodal parcellation 41 and computed region-wise average time series and FC matrices. For dwMRI data, HCP pipelines normalize the b0 image intensity across runs; remove EPI distortions, eddy-current-induced distortions, and subject motion; correct for gradient nonlinearities; perform registration with structural data; resample into 1.25 mm structural space; and mask the data with a brain mask.
For dwMRI tractography we employed our own pipelines 80 based on the tractography toolbox MRtrix3 81 . Structural MRI images were segmented into five tissue types to aid Anatomically-Constrained Tractography, an MRtrix3 function that removes anatomically implausible tracks. Multi-shell, multi-tissue response functions were estimated using the MRtrix3 tool dwi2response, followed by multi-shell, multi-tissue constrained spherical deconvolution using dwi2fod. For each subject, full-brain tractograms with 25 million tracks were generated using tckgen, subsequently filtered with tcksift2, and mapped to the HCP MMP parcellation used for computing fMRI FC to produce matching structural connectomes. The g-factor was computed using the code of Dubois et al., who performed a factor analysis of the scores on 10 cognitive tasks from the HCP data set to derive a bi-factor model of intelligence, which is the standard in the field of intelligence research 82 . Statistical tests To test whether simulated data samples that we obtained for the different RT groups have the same or different distributions we used the nonparametric Friedman test (implemented by the function friedmanchisquare() in the Python package SciPy stats) followed by a post hoc multiple comparison analysis using Nemenyi's test (using the function posthoc_nemenyi_friedman() implemented in the Python package scikit-posthocs). Data samples were not normally distributed (tested with Lilliefors test) and contained repeated measurements (each group model was fitted 500 times with different initial conditions and then simulated). To test whether medians are equal for data with unequal sample sizes and without repeated measurements we used the Kruskal–Wallis test followed by post hoc Conover's test for pairwise multiple comparisons (implemented as the SciPy function kruskal() and the scikit-posthocs function posthoc_conover()). Reporting summary Further information on research design is available in the Nature Portfolio Reporting Summary linked to this article. Data availability All data used in this study were derived from the Human Connectome Project Young Adult study available in the repository . The derived data generated in this study are available under restricted access due to data privacy laws; access can be obtained within a timeframe of one month from the corresponding authors M.S. and P.R., as processing and sharing are subject to the European Union General Data Protection Regulation (GDPR), requiring a written data processing agreement, involving the relevant local data protection authorities, for compliance with the standard contractual clauses by the European Commission for the processing of personal data under GDPR ( ). The data processing agreement and dataset metadata are available in EBRAINS ( ). Code availability All custom codes used in this study are freely available at GitHub ( ) 83 , licensed under the EUPL-1.2-or-later. Custom codes were implemented using Python version 3.9.7 and multiple Python packages (scipy 1.7.1; numpy 1.20.3; matplotlib 3.4.3; scikit-learn 1.1.3; statsmodels 0.12.2; scikit-posthocs 0.7.0) and MATLAB version R2020a; GCC 9.4 was used for C code compilation; FreeSurfer v7.1.0, MRtrix3 3.0 and FSL 6.0 were used for MRI processing.
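The test sequence described above maps directly onto standard library calls. The following sketch assumes the simulated reaction-time samples are arranged as repeated measurements (rows are repeated fits, columns are RT groups); the data shapes and values are invented for the example, not taken from the study.

import numpy as np
from scipy import stats
import scikit_posthocs as sp

rng = np.random.default_rng(0)

# Hypothetical data: 500 repeated model fits (rows) x 3 RT groups (columns).
samples = rng.normal(loc=[0.9, 1.0, 1.1], scale=0.1, size=(500, 3))

# Friedman test for repeated measurements across groups.
chi2, p = stats.friedmanchisquare(samples[:, 0], samples[:, 1], samples[:, 2])
print(f"Friedman: chi2 = {chi2:.2f}, p = {p:.3g}")

# Nemenyi post hoc test on the same block design (returns pairwise p values).
print(sp.posthoc_nemenyi_friedman(samples))

# For independent samples with unequal sizes: Kruskal-Wallis plus Conover.
a = rng.normal(0.9, 0.1, 40)
b = rng.normal(1.0, 0.1, 55)
c = rng.normal(1.1, 0.1, 30)
h, p_kw = stats.kruskal(a, b, c)
print(f"Kruskal-Wallis: H = {h:.2f}, p = {p_kw:.3g}")
print(sp.posthoc_conover([a, b, c]))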
Do intelligent people think faster? Researchers at the BIH and Charité—Universitätsmedizin Berlin, together with a colleague from Barcelona, made the surprising finding that participants with higher intelligence scores were only quicker when tackling simple tasks, while they took longer to solve difficult problems than subjects with lower IQ scores. In personalized brain simulations of the 650 participants, the researchers could determine that brains with reduced synchrony between brain areas literally "jump to conclusions" when making decisions, rather than waiting until upstream brain regions could complete the processing steps needed to solve the problem. In fact, the brain models for higher-scoring participants also needed more time to solve challenging tasks but made fewer errors. The scientists have now published their findings in the journal Nature Communications. There are 100 billion or so neurons in the human brain. Each one of them is connected to an estimated 1,000 neighboring or distant neurons. This unfathomable network is the key to the brain's amazing capabilities, but it is also what makes it so difficult to understand how the brain works. Prof. Petra Ritter, head of the Brain Simulation Section at the Berlin Institute of Health at Charité (BIH) and at the Department of Neurology and Experimental Neurology of Charité—Universitätsmedizin Berlin, simulates the human brain using computers. "We want to understand how the brain's decision-making processes work and why different people make different decisions," she says, describing the current project. Personalized brain models To simulate the mechanisms of the human brain, Ritter and her team use digital data from brain scans like magnetic resonance imaging (MRI) as well as mathematical models based on theoretical knowledge about biological processes. This initially results in a "general" human brain model. The scientists then refine this model using data from individual people, thus creating "personalized brain models." For the present study, the scientists worked with data from 650 participants of the Human Connectome Project, a U.S. initiative that has been studying neural connections in the human brain since September 2010. "It's the right excitation-inhibition balance of neurons that influences decision-making and more or less enables a person to solve problems," explains Ritter. Her team knew how participants fared on extensive cognitive tests and what their IQ scores were. Correlations between intelligence, RTs and FC. a, b Group-average g-factor (30 groups, based on g-factor, N = 650 subjects) versus RT for correct responses in PMAT questions #1 (very easy, p = 4.0 × 10 −6 ) and #24 (very hard, p = 3.0 × 10 −6 ). c, d Group-average and subject-level correlations between g/PMAT24_A_CR and the RT for correct responses in each individual PMAT question. Subjects with higher g/PMAT24_A_CR were quicker to correctly answer easy questions, but they took more time to correctly answer hard questions (questions sorted according to increasing difficulty; sign of correlation flips at question #9). e Group-average g-factor versus mean FC (20 groups, based on g-factor, N = 650 subjects, p = 0.13). f Group-average PMAT24_A_RTCR versus mean FC (20 groups, based on PMAT24_A_RTCR, N = 650 subjects, p = 6.9 × 10 −7 ).
g, h Group-average (20 groups, based on PMAT24_A_RTCR) and subject-level correlations between mean FC and RT for correct responses in each PMAT question. Subjects that took more time to correctly answer test questions had a higher FC, independent of whether the question was easy or hard. P values of two-sided Pearson's correlation test: *p < 0.05, **p < 0.01, ***p < 0.001; including only p values that remained significant after controlling for multiple comparisons using the Benjamini–Hochberg procedure with a False Discovery Rate of 0.1. Credit: Nature Communications (2023). DOI: 10.1038/s41467-023-38626-y Artificial brains behave like their biological counterparts "We can reproduce the activity of individual brains very efficiently," says Ritter. "We found out in the process that these in silico brains behave differently from one another—and in the same way as their biological counterparts. Our virtual avatars match the intellectual performance and reaction times of their biological analogues." Interestingly, the "slower" brains in both the humans and the models were more synchronized, i.e., in time with one another. This greater synchrony allowed neural circuits in the frontal lobe to hold off on decisions longer than brains that were less well coordinated. The models revealed how reduced temporal coordination results in the information required for decision-making neither being available when needed nor stored in working memory. Gathering evidence takes time—and leads to correct decisions Resting-state functional MRI scans showed that slower solvers had higher average functional connectivity, or temporal synchrony, between their brain regions. In personalized brain simulations of the 650 participants, the researchers could determine that brains with reduced functional connectivity literally "jump to conclusions" when making decisions, rather than waiting until upstream brain regions could complete the processing steps needed to solve the problem. Participants were asked to identify logical rules in a series of patterns. These rules became increasingly complex with each task and thus more difficult to decipher. In everyday terms, an easy task would consist of quickly braking at a red light, while a hard task would require methodically working out the best route on a road map. In the model, a so-called winner-take-all competition occurs between different neural groups involved in a decision, with the neural groups for which there is stronger evidence prevailing. Yet in the case of complex decisions, such evidence is often not clear enough for quick decision-making, literally forcing the neural groups to jump to conclusions. "Synchronization, i.e., the formation of functional networks in the brain, alters the properties of working memory and thus the ability to 'endure' prolonged periods without a decision," explains Michael Schirner, lead author of the study and a scientist in Ritter's lab. "In more challenging tasks, you have to store previous progress in working memory while you explore other solution paths and then integrate these into each other. This gathering of evidence for a particular solution may sometimes take longer, but it also leads to better results. We were able to use the model to show how excitation-inhibition balance at the global level of the whole brain network affects decision-making and working memory at the more granular level of individual neural groups."
Findings are interesting for treatment planning Ritter is pleased that the results observed in the computer-based "brain avatars" match the results seen in "real" healthy subjects. After all, her main interest is in helping patients affected by neurodegenerative diseases like dementia and Parkinson's disease. "The simulation technology used in this study has made significant strides, and can be used to improve personalized in silico planning of surgical and drug interventions as well as therapeutic brain stimulation. For example, a physician can already use a computer simulation to assess which intervention or drug might work best for a particular patient and would have the fewest side effects."
10.1038/s41467-023-38626-y
Medicine
New genetic mutations shed light on schizophrenia
Purcell et al. "A polygenic burden of rare disruptive mutations in schizophrenia." Nature DOI: 10.1038/nature12975 . dx.doi.org/10.1038/nature12975 Fromer et al. "De novo mutations in schizophrenia implicate synaptic networks." Nature DOI: 10.1038/nature12929 . dx.doi.org/10.1038/nature12929 Journal information: Nature
http://dx.doi.org/10.1038/nature12975
https://medicalxpress.com/news/2014-01-genetic-mutations-schizophrenia.html
Abstract Schizophrenia is a common disease with a complex aetiology, probably involving multiple and heterogeneous genetic factors. Here, by analysing the exome sequences of 2,536 schizophrenia cases and 2,543 controls, we demonstrate a polygenic burden primarily arising from rare (less than 1 in 10,000), disruptive mutations distributed across many genes. Particularly enriched gene sets include the voltage-gated calcium ion channel and the signalling complex formed by the activity-regulated cytoskeleton-associated scaffold protein (ARC) of the postsynaptic density, sets previously implicated by genome-wide association and copy-number variation studies. Similar to reports in autism, targets of the fragile X mental retardation protein (FMRP, product of FMR1 ) are enriched for case mutations. No individual gene-based test achieves significance after correction for multiple testing and we do not detect any alleles of moderately low frequency (approximately 0.5 to 1 per cent) and moderately large effect. Taken together, these data suggest that population-based exome sequencing can discover risk alleles and complements established gene-mapping paradigms in neuropsychiatric disease. Main Genetic studies of schizophrenia (MIM 181500) have demonstrated a substantial heritability 1 , 2 that reflects common and rare alleles at many loci. Genome-wide association studies (GWAS) continue to uncover common single nucleotide polymorphisms (SNPs) at novel loci 3 . Rare or de novo genic deletions and duplications (copy-number variants (CNVs)) have been firmly established, including risk variants at 22q11.2, 15q13.3 and 1q21.1 (refs 4 , 5 ). One notable outcome of these large-scale, genome-wide investigations is the degree of polygenicity, consistent with thousands of genes and non-coding loci harbouring risk alleles 3 , 6 , 7 , 8 , 9 . Nonetheless, progress has been made in implicating biological systems and quantifying shared genetics among related psychiatric disorders (for example, refs 10 , 11 ), such as identifying common variants in calcium ion channel genes affecting schizophrenia and bipolar disorder 12 and de novo CNVs affecting genes encoding members of the postsynaptic density (PSD) proteome 13 , in particular members of the neuronal ARC protein and N -methyl- d -aspartate receptor (NMDAR) postsynaptic signalling complexes. Here we apply massively parallel short-read sequencing to assay a substantial portion of variation that previously was essentially invisible: rare coding point mutations (single nucleotide variants (SNVs)) and small insertions and deletions (indels). Although previous schizophrenia studies have applied sequencing, the results have been inconclusive, reflecting limited sample sizes or a focus on small numbers of candidate genes 14 , 15 , 16 , 17 . Exome-sequencing studies of de novo mutations published to date have neither demonstrated an increased rate in schizophrenia, nor conclusively implicated individual genes 18 , 19 , although some data suggest a link with particular classes of gene, such as those with higher brain expression in early fetal life 19 . De novo studies in intellectual disability 20 , 21 and autism 22 , 23 , 24 , 25 have, however, made considerable progress in identifying large-effect alleles and the underlying gene networks. 
In this study, we sought to identify the alleles, genes or gene networks that harbour rare coding variants of moderate or large effect on risk for schizophrenia by exome sequencing 5,079 individuals, selected from a Swedish sample of more than 11,000 individuals. Previous analyses of the full sample ( Supplementary Information section 1) have demonstrated an enriched burden of rare CNVs and a polygenic common variant component 3 . We generated high-coverage exome sequence to ensure sufficient sensitivity to detect and genotype alleles observed in only one heterozygous individual (singletons, implying an allele frequency of ∼ 1 in 10,000, although the true population frequency will typically be rarer). The high baseline rate of rare, neutral mutations makes it difficult to detect rare alleles that increase risk for common diseases 26 . Although power can be increased by jointly testing groups of variants in a gene 27 , association testing across all genes is likely to be under-powered at current sample sizes. Indeed, a recent application of population-based exome sequencing in autism did not identify genes 28 , despite moderately large sample size and the success of the de novo paradigm. Furthermore, many confirmed results from candidate-gene sequencing studies of nonpsychiatric disease still fall short of exome-wide significance 29 . We therefore adopted a top-down strategy in which we studied a large set of genes with a higher likelihood of having a role in schizophrenia, on the basis of existing genetic evidence ( Supplementary Information section 7). We focused on ∼ 2,500 genes implicated by unbiased, large-scale genome-wide screens, including GWAS, CNV and de novo SNV studies, testing for enrichment of rare alleles in cases. To prioritize individual genes, we characterized emerging signals with respect to the genes and frequency and type of mutations. We coordinated analysis with an independent trio exome-sequencing study (Fromer et al. 30 , this issue) and note key points of convergence below. After alignment and variant calling of all samples jointly, we removed 11 subjects with low-quality data along with likely spurious sites and genotypes ( Supplementary Information sections 2 and 3). Per individual, 93% of targeted bases were covered at ≥10-fold (81% at ≥30-fold). The final data set comprised 2,536 cases and 2,543 controls ( Extended Data Table 1a and Extended Data Fig. 1a ). Cases and controls had similar technical sequencing metrics, including total coverage, proportion of deeply covered targets, and overall proportion of non-reference alleles ( Extended Data Table 1b ). We observed 635,944 coding and splice-site passing variants of which 56% were singletons. Using Sanger sequencing and Exome Chip data on these samples, we determined high specificity and sensitivity for singletons ( Supplementary Information section 3). We annotated variants with respect to RefSeq and combined five in silico algorithms to predict missense deleteriousness ( Extended Data Table 1c and Supplementary Information section 4). As expected, allelic types more likely to affect protein function showed greater constraint: 69% of nonsense variants were singletons, compared to 58% of missense and 51% of silent variants. 
Primary analyses tested (1) disruptive variants (nonsense, essential splice site and frameshifts, n = 15,972 alleles with minor allele frequency (MAF) < 0.1%); (2) disruptive plus missense variants predicted to be damaging by all five algorithms ( n = 50,369); and (3) disruptive plus missense variants predicted to be damaging by at least one algorithm ( n = 233,575). These groups are labelled disruptive, NS strict and NS broad , in which NS indicates nonsynonymous. We also stratified most analyses by allele frequency: (1) singletons; (2) up to 0.1% (ten or fewer minor alleles); and (3) up to 0.5% (50 or fewer minor alleles). In the main gene set analyses, we empirically corrected for multiple testing over the nine combinations of these factors ( Supplementary Information section 7). The most significant SNV or indel association ( P = 5 × 10 −8 ) was for a common missense allele in CCHCR1 , in the major histocompatibility complex (MHC), a known risk locus; this top SNP was in linkage disequilibrium with many other schizophrenia-associated SNPs in the MHC. All P < 10 −5 variants were for either common alleles or a few instances of likely aberrant variants that had escaped earlier filtering ( Supplementary Information section 5). We performed two series of gene-based tests: a one-sided burden test of an increased rare allele rate in cases, and the SNP-set (sequence) kernel association test (SKAT 27 ), which allows for risk and protective effects. For both tests, the distribution of gene-based statistics broadly followed a global null ( Extended Data Fig. 1b ). Considering only disruptive variants, the genic test yielding the lowest nominal P value was for KYNU (kynureninase), showing ten variants in cases and zero in controls ( Extended Data Table 2 and Supplementary Table 1 ); one novel nonsense mutation at chr2:143713804 (g.468T>A; p.Y156*) was observed in seven cases and not present in either the Exome Variant Server ( ) or 1000 Genomes Project ( ). Although previous studies have suggested links between the kynurenine pathway and schizophrenia 31 , our P value of 1.7 × 10 −3 does not withstand correction for multiple testing, even if considering only the 246 genes with ≥10 rare disruptive mutations capable of achieving a nominally significant result. A polygenic burden of rare coding variants We evaluated a polygenic burden of rare coding variants in cases, first selecting 2,546 genes ( ∼ 10% of the exome) on the basis of previous genetic studies that we proposed to be enriched for schizophrenia-associated mutations ( Supplementary Information section 6). Sources included genome-wide CNV studies 5 , 13 , GWAS 3 , 12 , 32 and exome sequencing of de novo mutations 18 , 19 , 30 . In our sample, these genes had a significantly higher rate of rare (MAF < 0.1%), disruptive mutations in cases compared to controls ( P = 10 −4 for 1,547 versus 1,383 mutations). The enrichment was unlikely to represent technical or ancestry-related artefact because the P values controlled for potential differences in exome-wide burden in cases and controls, and because we observed no differences exome wide ( P = 0.24). Furthermore, enrichment P values were empirically derived by permuting phenotypes within subgroups of cases and controls, matched on exome-wide identity-by-state, experimental batch and sex; the above result withstood correction for multiple testing ( Table 1 ). We observed similar results for rarer (singletons, P = 8 × 10 −4 ) and more frequent (MAF < 0.5%, P = 2 × 10 −4 ) alleles.
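As a toy illustration of the one-sided burden logic (not the covariate-matched permutation framework the authors actually used), a two-by-two Fisher's exact test on carrier counts can be run as below. The counts mirror the KYNU example (ten case variants, zero control variants) and the cohort sizes, but collapsing alleles into a contingency table ignores the matching and covariate control described above, so the resulting p value need not equal the reported 1.7 × 10 −3 .

from scipy.stats import fisher_exact

# Toy one-sided burden comparison for a single gene, assuming one variant
# allele per carrier: 10 carriers among 2,536 cases, 0 among 2,543 controls.
cases_with, cases_without = 10, 2536 - 10
controls_with, controls_without = 0, 2543

table = [[cases_with, cases_without],
         [controls_with, controls_without]]

# alternative="greater" tests for an increased rare-allele rate in cases.
odds_ratio, p_value = fisher_exact(table, alternative="greater")
print(f"one-sided Fisher p = {p_value:.2e}")  # roughly 1e-3 for this toy table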
We also observed case enrichment for the strictly defined set of damaging mutations (NS strict , P = 1.5 × 10 −3 ) but not the broader set (NS broad , P = 0.13). Table 1 Gene set analysis of primary schizophrenia candidate gene sets This enrichment suggests a polygenic burden of rare variants. Although not so marked as to be detectable at the exome-wide level given the sample size, it is relatively concentrated in genes that were found to be associated with schizophrenia by other methods. The mean allelic effect was not large: in the primary comparison, the odds ratio was 1.12 (1.04–1.20, 95% confidence interval) for each MAF < 0.1% disruptive mutation; 46% of cases carried one or more allele in this primary set (0.62 per case) compared to 41% of controls (0.55 per control). At two extremes, the modest mean effect could represent either that a subset of mutations are fully penetrant or that every allele is associated but increases risk by only 12%, similar to common alleles from GWAS. To extract subsets of potentially stronger-effect alleles, we individually tested the constituent gene sources ( Table 1 and Extended Data Fig. 1c ), focusing on disruptive variants as they showed the strongest omnibus enrichment. For disruptive mutations, eight out of 12 sets were nominally significant ( P < 0.05), indicating that the initial observation was not driven by a single category. ARC, PSD-95 and calcium ion channel genes Three of the smaller significantly enriched sets (the ARC and PSD-95 (encoded by DLG4 ) complexes and calcium ion channel genes) had odds ratios >5. We observed enrichment ( P = 1.6 × 10 −3 ) of disruptive mutations among the 28 ARC complex genes: nine mutations in nine genes (all singletons) in cases and zero in controls, yielding an odds ratio of 19.2 (2.4–2,471, 95% confidence interval; Extended Data Table 2 ). Along with the NMDAR gene set (also significantly enriched), ARC genes largely accounted for the overall PSD enrichment ( P = 4 × 10 −8 ) in ref. 13 , in which four ARC genes had one or more de novo CNVs. Of note, in an independent exome-sequencing study in trios, Fromer et al. 30 found that the ARC gene set was enriched ( P = 5 × 10 −4 ) for nonsynonymous de novo SNVs and indels, with four genes harbouring six mutations ( Extended Data Table 7 ). The other PSD gene set with strong enrichment ( P = 9 × 10 −4 ; odds ratio = 5.1, 1.8–19.2, 95% confidence interval) was the PSD-95 complex, which contains 65 genes and overlaps with ARC. PSD genes are very highly conserved and have critical roles in excitatory neural signalling components, as well as dendrite and spine plasticity. Further categorization of neuronal genes on the basis of subcellular localization 13 ( Extended Data Table 3a ) or associated mouse and human phenotypes 33 did not yield further enrichment. The other subset yielding a large odds ratio of 8.4 (2.03–77, 95% confidence interval) was the 26 voltage-gated calcium ion channel genes (12 cases, one control; disruptive singletons, P = 2 × 10 −3 , although the effect is attenuated when including recurrent alleles: 15/8 cases/controls, P = 0.021, see Extended Data Table 2 ). The singleton enrichment was predominantly driven by the pore-forming α1 and auxiliary α2δ subunits; of the α1 subunits, the CaV1/L-type genes carried the most case mutations, including two in CACNA1C , a gene implicated by GWAS of bipolar disorder and schizophrenia 3 , 10 .
Calcium signalling is involved in many cell functions including the regulation of gene expression 34 and is critical for modulating synaptic plasticity 35 . In a secondary analysis of proteins found in the nano-environment of the calcium channel 36 , we observed independent enrichment for other ion channel transporters ( Supplementary Table 1 ), odds ratio 9.1 (2.2–83) for singletons ( P = 1 × 10 −3 ; 13/1 disruptive alleles). Convergence with de novo studies A line of convergence across studies was that genes carrying nonsynonymous de novo mutations 18 , 19 , 30 were enriched for rare disruptive mutations in cases ( P = 1 × 10 −3 ; Table 1 and Extended Data Table 6a, b ). We observed a similar result for the smaller class of genes carrying disruptive de novo mutations ( P = 7 × 10 −4 , from 47 genes in our study); these genes included UFL1 (5/0 disruptive mutations, P = 0.03; 7/0 NS strict , P = 0.008), SYNGAP1 (4/0 NS strict , P = 0.04) and SZT2 (18/9 NS strict , P = 0.049). SYNGAP1 (synaptic Ras GTPase activating protein 1) is a component of the NMDAR PSD complex 37 and mutations in this gene are known to cause intellectual disability and autism 38 . Genes under previously associated CNV regions did not show significant enrichment of rare disruptive mutations, although there was an enrichment of NS strict mutations ( P = 0.0044; Extended Data Table 4 ). Of the 11 CNV regions, only the 3q29 locus, which contains multiple genes including DLG1 (ref. 4 ), was significant ( P = 0.0006) and withstood correction for multiple testing. Autism/intellectual disability genes and FMRP targets We next tested, as a single set, the 2,507 genes representing autism and intellectual disability candidates ( Supplementary Information section 6), which yielded only nominal significance ( P < 0.05) for disruptive and NS strict variants and no test survived correction for multiple testing ( Table 2 ). Considering the 12 constituent sets, genes from autism de novo studies showed no enrichment ( Extended Data Fig. 1c ), despite greater sample size and number of disruptive de novo mutations. There was no evidence for autism or intellectual disability genes curated from the literature 39 or for genes in the protein–protein-interaction-derived subnetworks built around autism de novo mutations 24 . Table 2 Gene set analysis of secondary autism/intellectual disability candidate gene sets The nominal omnibus signals arose largely from the Darnell et al. list of FMRP targets 40 . FMRP is encoded by the gene FMR1 (the locus of the Mendelian fragile X syndrome repeat mutation) and is an RNA-binding protein that regulates translation and is needed at synapses for normal glutamate receptor signalling and neurogenesis 41 . Targets of FMRP are enriched for de novo mutations in autism 22 , 40 , 42 ; here we find significant enrichment of disruptive singletons ( P = 1.4 × 10 −3 ; 289/223 case/control count; odds ratio = 1.3). These FMRP targets overlap with PSD genes ( Extended Data Table 3b ), although they were still enriched independently ( Supplementary Information section 6). In addition, these genes were enriched in GWAS of this sample ( P < 10 −3 , Supplementary Information section 9). Whereas the Darnell list is derived from mouse brain, a second recently reported FMRP target list 42 was generated from cultured human embryonic kidney cells, using a different experimental approach ( Supplementary Information section 6).
This list has relatively little overlap with Darnell targets and, in contrast to the Darnell list, does not show any enrichment for rare case mutations, for GWAS loci, or comparable overlap with PSD genes ( Extended Data Table 3b ). Our results are perhaps surprising: unlike Fromer et al. 30 , we did not observe direct evidence for overlap at the individual gene level with autism and intellectual disability, despite CNV studies showing pleiotropic effects of individual loci. Nonetheless, at the broader level of gene sets, all three disorders showed enrichment for FMRP targets; autism and intellectual disability de novo mutations also showed strong enrichment in several PSD complexes enriched in our study, including NMDAR, PSD-95 and (for intellectual disability) ARC. At the least, our results suggest that any overlap is far from complete, although more refined analyses in larger samples will be needed before a clearer picture can emerge of which genes and pathways are shared and which are specific to one disease. Characterizing enrichment by variant type To further characterize the observed enrichment with respect to mutational function and frequency, we created a single ‘composite’ set of 1,796 genes comprising all members of the most prominently enriched sets ( Supplementary Table 2 ). Rare disruptive mutations in this set were present in 990 cases and 877 controls (for singletons, 645 versus 530). Cases carrying rare disruptive mutations did not appear to be phenotypically or clinically unusual in terms of sex, ancestry, history of drug abuse, general medical conditions plausibly aetiologically related to psychosis, or epilepsy, although they did have a higher rate of admissions noting comorbid intellectual disability compared to other cases ( P = 0.009; Extended Data Table 2b ). Figure 1 shows composite set enrichment across a range of conditions. As this set merges other sets showing enrichment, it necessarily shows enrichment; it was not, however, due to confounding effects of ancestry, sex or experimental wave ( Supplementary Information section 8). It was primarily driven by singleton nonsense mutations across a large number of genes, as it was removed or greatly attenuated when either singleton or nonsense mutations were excluded. Considered alone, none of the splice-site, frameshift, missense, silent or noncoding mutation categories showed enrichment at P < 0.01. Different ways of defining damaging missense mutations did not substantively affect results. Considering only nonsynonymous coding variants present on Exome Chip, we did not observe enrichment. Rather, enrichment mainly reflected novel variants ( Extended Data Table 5b ), which is expected as most rare variants in our study are novel. We also took an alternative approach, whereby instead of filtering variants on frequency, we excluded genes with any control disruptive variants before calculating the burden of case alleles; the composite set was still highly enriched (‘case-unique’ in Fig. 1 ; see Extended Data Table 5b and Supplementary Information section 7). Finally, the enrichment could not be attributed to only a small number of variants or genes ( Extended Data Fig. 2a ). Figure 1: Composite set gene set analysis, stratified by mutation type. Statistical significance ( x axis) for the composite gene set stratified by type and frequency of mutation and other variables. Numbers to the right of each bar represent the number of genes with at least one mutation in that category for the composite set.
(S) represents strictly defined damaging missenses; (B) represents the broadly defined group. Nonsyn (all) represents all nonsynonymous mutations. Numbers to the left of the bars (1, 10, 50) represent the minor allele count threshold (i.e. 1 indicates a singleton-only analysis); here the ranges 2–10 and 2–50 represent analyses that excluded singletons; N/A indicates that no allele-wise threshold was used. The source of deleteriousness prediction algorithms (LRT, MutationTaster, PPH and SIFT) is described in the Supplementary Information . For the exome array contrasts, Exome Chip sites were tested using the exome sequence calls. These findings do not preclude potentially important effects from other classes of rare variation in specific genes or other gene sets, although exploratory analyses of generic gene sets (for example, based on Gene Ontology terms) did not unambiguously identify novel signals after correction for multiple testing ( Supplementary Information section 7). We found preferential enrichment in genes with high brain expression, but not for genes with a prenatally biased developmental trajectory ( Extended Data Fig. 3 ). In fact, greater enrichment came from postnatally biased genes. Finally, although greatly attenuated compared to disruptive mutations, other categories displayed nominal (0.01 < P < 0.05) enrichment in Fig. 1 and strictly defined damaging missense mutations alone showed enrichment for ARC and NMDAR gene sets (32/15 for ARC, P = 0.007; Extended Data Tables 5a and 7 ). Although rare coding alleles other than ultra-rare nonsense mutations will undoubtedly contribute to risk, it will probably prove harder still to elucidate such effects. Rare variants, CNVs and common GWAS variants We quantified the relative impact of common SNPs (indexed by a genome-wide polygene score from independent GWAS samples 32 ), rare CNVs (the burden of genic deletions) and disruptive mutations in the composite set. Considering the same 5,079 individuals, all three classes of variation were uncorrelated and significantly, independently and additively enriched in cases compared to controls. From logistic regression, the relative effect sizes (reduction in model R 2 ) were 5.7%, 0.2% and 0.4% for GWAS, rare CNV and rare coding variants, respectively ( Supplementary Information section 8). Although not a complete assessment, it indicates that for the current sets of identifiably enriched alleles, common GWAS variants account for an order-of-magnitude more heritability than this set of rare variants does. However, these estimates will be diluted to varying degrees, owing to associated variants being included. As a consequence of this, and also the fact that true risk variants outside of composite set genes were not considered here, this estimate represents a conservative lower bound on the contribution of rare coding variation. Discussion We have demonstrated a polygenic burden that increases risk for schizophrenia, primarily comprising many ultra-rare nonsense mutations distributed across many genes. Implicating individual genes remains challenging, as genes that contributed to the highest-ranked sets typically had unremarkable P values, often around 0.5 with the gene containing only one or two rare mutations. Nonetheless, we were able to detect several small and highly enriched sets, notably of genes related to calcium channels and the postsynaptic ARC complex.
Across these ∼ 50 genes, ∼ 1% of cases carried a rare disruptive mutation likely to have a considerable impact on risk. However, reported effect sizes will have a tendency to over-estimate true population values ( Supplementary Information section 5). We add to previous work that has implicated disruption of synaptic processes in schizophrenia 13 . The PSD is comprised of supramolecular multiprotein complexes that detect and discriminate patterns of neuronal activity and regulate plasticity processes responsible for learning 43 . Members of the membrane-associated guanylate kinase (MAGUK) family of scaffold proteins, such as PSD-95, have a key role in assembling ∼ 2 MDa complexes comprising calcium channels, including the glutamate-gated NMDAR, voltage-gated calcium channels and ARC 36 , 44 . The genetic disruption of MAGUKs and their associated components results in specific cognitive impairments in mice and humans 45 . One possibility is that the genetic risk identified here reflects altered tuning in calcium-dependent signalling cascades, triggered by NMDAR 46 and L-type calcium channels 47 , mediated by postsynaptic MAGUK signalling complexes driving ARC synthesis. Although we cannot yet use rare mutations to partition patients into more homogeneous clinical subgroups, this will remain a central goal for future sequencing studies. The few population-based common-disease exome-sequencing studies published to date, in psychiatric (for example, ref. 28 ) and non-psychiatric (for example, ref. 48 ) diseases, have not been successful in finding individual genes showing significant enrichment. Our study yields similar findings for individual genes, but yields positive results when considering gene sets. These current findings are likely to foreshadow the definitive identification of individual genes in larger cohorts, following the trajectory of GWAS and other genetic studies of complex disease. Methods Summary Sample ascertainment Cases with schizophrenia were identified through the Swedish Hospital Discharge Register 3 . Case inclusion criteria: ≥2 hospitalizations with a discharge diagnosis of schizophrenia, both parents born in Scandinavia, age ≥18 years. Case exclusion criteria: hospital register diagnosis of any disorder mitigating a confident diagnosis of schizophrenia. Controls were randomly selected from Swedish population registers. Control inclusion criteria: never hospitalized for schizophrenia or bipolar disorder, both parents born in Scandinavia, age ≥18 years. All subjects provided informed consent; institutional human subject committees approved the research. Sequencing The samples (2,536 cases, 2,543 controls) were sequenced using either the Agilent SureSelect Human All Exon Kit (29 Mb, n = 132) or the Agilent SureSelect Human All Exon v.2 Kit (33 Mb). Sequencing was performed on Illumina GAII or Illumina HiSeq 2000 instruments. Sequence data were aligned and variants called by the Picard ( )/BWA 49 /GATK 50 pipeline. Validation of selected variants used Sanger sequencing. On the basis of validation and Exome Chip data, we estimated high sensitivity and specificity of singleton calls. BAM and VCF files are available in the dbGaP study phs000473.v1 ( ). Analysis We used PLINK/SEQ ( ) to annotate variants according to RefSeq gene transcripts (UCSC Genome Browser, ). Single-site association used Fisher's exact test; primary gene-based association used a burden test and the sequence kernel association test 27 . Analyses controlled for ancestry and quality control metrics.
Gene sets were evaluated on the empirical distribution of the sum of individual gene burden statistics, and incorporated an empirical correction for multiple testing. Odds ratios with 95% confidence intervals used penalized maximum likelihood (Firth’s method) for low cell counts. See Supplementary Information for further details. Summary results are posted at . Online Content Any additional Methods, Extended Data display items and Source Data are available in the online version of the paper; references unique to these sections appear only in the online paper.
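To make the permutation logic of the gene-set evaluation concrete, here is a minimal Python sketch of an empirical enrichment test of the kind described above: the set statistic is the summed case-minus-control allele-count difference, and its null distribution is built by permuting case/control labels. The matching on identity-by-state, batch and sex that the authors applied is omitted for brevity, so this illustrates the principle, not their pipeline; all data below are invented.

import numpy as np

rng = np.random.default_rng(1)

def gene_set_enrichment_p(allele_counts, is_case, gene_set, n_perm=1000):
    """Empirical one-sided enrichment p value for a candidate gene set.

    allele_counts -- (n_subjects, n_genes) matrix of rare-allele counts
    is_case       -- boolean vector, True for cases
    gene_set      -- column indices of the genes in the candidate set
    """
    sub = allele_counts[:, gene_set]
    # Observed statistic: total case alleles minus total control alleles.
    observed = sub[is_case].sum() - sub[~is_case].sum()
    null = np.empty(n_perm)
    labels = is_case.copy()
    for i in range(n_perm):
        rng.shuffle(labels)  # unmatched permutation (a simplification)
        null[i] = sub[labels].sum() - sub[~labels].sum()
    # Add-one correction keeps the empirical p value away from zero.
    return (1 + np.sum(null >= observed)) / (1 + n_perm)

# Hypothetical example: 5,079 subjects, 100 genes, a 20-gene candidate set.
counts = rng.poisson(0.01, size=(5079, 100))
cases = np.arange(5079) < 2536
counts[cases, :20] += rng.binomial(1, 0.005, size=(2536, 20))  # planted signal
print(gene_set_enrichment_p(counts, cases, np.arange(20)))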
Researchers from the Broad Institute and several partnering institutions have taken a closer look at the human genome to learn more about the genetic underpinnings of schizophrenia. In two studies published this week in Nature, scientists analyzed the exomes, or protein-coding regions, of people with schizophrenia and their healthy counterparts, pinpointing the sites of mutations and identifying patterns that reveal clues about the biology underlying the disorder. One study compared gene sequences from 2,500 people with schizophrenia to 2,500 healthy individuals from the same population. The second study looked for new mutations that might have occurred in protein coding genes by examining gene sequences from more than 600 schizophrenia trios (individuals with the disorder and their unaffected mothers and fathers). Both studies yielded further evidence that the disorder arises from the combined effects of many genes – a condition known as "polygenicity." The studies also suggest that genetic alterations tended to cluster in a few networks of functionally-related genes. Schizophrenia, a psychiatric disorder often characterized by hallucinations, paranoia, and a breakdown of thought processes, is known to be highly heritable. It affects roughly 1 percent of all adults, and individuals with immediate relatives who suffer from the disorder are at approximately ten times greater risk. While this high rate of heritability has long been recognized, previous genetic studies have struggled to identify specific genes that cause schizophrenia. The two current studies, which are the largest of their kind to date, looked for mutations that were effectively invisible in previous studies: they detected changes at the scale of single nucleotides – substitutions, insertions, or deletions of individual bases or "letters" in the genetic code. "Despite the considerable sample sizes, no individual gene could be unambiguously implicated in either study. Taken as a group, however, genes involved in neural function and development showed greater rates of disruptive mutations in patients," explained Broad senior associate member Shaun Purcell, who played key roles in both studies. "That finding is sobering but also revealing: it suggests that many genes underlie risk for schizophrenia and so any two patients are unlikely to share the same profile of risk genes." Purcell, who is also a research scientist at Massachusetts General Hospital (MGH) and an associate professor of psychiatry at Mount Sinai's Icahn School of Medicine, served as first author of one of the papers (Purcell et al.), which compared the exomes of individuals with schizophrenia with those from healthy individuals from the same population in Sweden. The researchers involved in the work hailed from nine institutions, including the Broad, Mount Sinai, and MGH. The second paper (Fromer et al.) reported similar findings. That study, which was conducted by a multi-institutional collaboration that included the Broad Institute's Stanley Center for Psychiatric Research, Mount Sinai, Cardiff University, the Wellcome Trust Sanger Institute, and six other research institutions, looked for de novo mutations – alterations in an offspring's genome that do not exist in the genomes of the parents, and therefore cannot be attributed to heredity. Such mutations account for roughly 5 percent of schizophrenia cases. 
Both studies found that mutations were distributed across many genes, and the research teams discovered similar patterns in the distribution of mutations across gene networks. Many of the genes that bore mutations shared common functions: they tended to be part of gene networks that govern synaptic function, including the voltage-gated calcium ion channel, which is involved in signaling between cells in the brain, and the cytoskeletal (ARC) protein complex, which plays a role in synaptic plasticity, a function essential to learning and memory. "From a scientific standpoint, it's reassuring to see different methods of studying the genetics of schizophrenia converge on the same sets of genes. These varied approaches are pointing toward the same underlying biology, which can be followed up in future research," said Steven McCarroll, who was an author on both papers. McCarroll is director of genetics for the Broad's Stanley Center for Psychiatric Research and a professor in genetics at Harvard Medical School. The analysis of de novo mutations also revealed significant overlap between those found in schizophrenia and de novo mutations previously linked to autism and intellectual disability, a finding that may influence the approach researchers take in follow-up studies. The authors argue that both papers demonstrate that genome sequencing will continue to be a powerful tool in the study of schizophrenia, though many more samples will need to be sequenced before the genetics of this complex disorder can be fully understood. "Few facts have been firmly established about the molecular or cellular causes of schizophrenia, and that's because many traditional scientific approaches can't be used to study the disorder: you can't grow it in a dish, and there aren't very good animal models for it," McCarroll explained. "We think that genomes are the path out of the darkness, and that these studies and others like them will ultimately provide the molecular clues we will need to map out the pathophysiology of the disorder." Stanley Center director Steven Hyman and Ed Scolnick, the Stanley Center's chief scientist, thanked the institutions that collaborated on the studies. "The genetic analysis of schizophrenia is yielding remarkably promising results because scientists around the world have worked collaboratively for years to recruit and study the large number of patients and comparison subjects needed to pick out rare genetic variants associated with schizophrenia against the staggeringly complex background genetic variation that characterizes humanity. Phrases like 'finding needles in haystacks' do not begin to do justice to this shared global effort," Hyman said. Scolnick emphasized that this collaboration is accelerating research that will ultimately benefit patients. "The exome sequencing data in these papers together with ongoing whole-genome association studies in patients with schizophrenia are helping to unravel the pathogenesis of this devastating illness," Scolnick said. "This work is building a roadmap which will inexorably lead to better treatments for patients and families."
10.1038/nature12975
Physics
Scientists boost quantum signals while reducing noise
Jack Qiu, Broadband squeezed microwaves and amplification with a Josephson travelling-wave parametric amplifier, Nature Physics (2023). DOI: 10.1038/s41567-022-01929-w. www.nature.com/articles/s41567-022-01929-w Journal information: Nature Physics
https://dx.doi.org/10.1038/s41567-022-01929-w
https://phys.org/news/2023-02-scientists-boost-quantum-noise.html
Abstract Squeezing of the electromagnetic vacuum is an essential metrological technique used to reduce quantum noise in applications spanning gravitational wave detection, biological microscopy and quantum information science. In superconducting circuits, the resonator-based Josephson-junction parametric amplifiers conventionally used to generate squeezed microwaves are constrained by a narrow bandwidth and low dynamic range. Here we develop a dual-pump, broadband Josephson travelling-wave parametric amplifier that combines a phase-sensitive extinction ratio of 56 dB with single-mode squeezing on par with the best resonator-based squeezers. We also demonstrate two-mode squeezing at microwave frequencies with bandwidth in the gigahertz range that is almost two orders of magnitude wider than that of contemporary resonator-based squeezers. Our amplifier is capable of simultaneously creating entangled microwave photon pairs with large frequency separation, with potential applications including high-fidelity qubit readout, quantum illumination and teleportation. Main Heisenberg’s uncertainty principle establishes the attainable measurement precision, the ‘standard quantum limit’ (SQL), for isotropically distributed vacuum fluctuations in the quadratures of the electromagnetic (EM) field 1 , 2 , 3 . Squeezing the EM field at a single frequency—single-mode squeezing—decreases the fluctuations of one quadrature below that of the vacuum at the expense of larger fluctuations in the other quadrature, thereby enabling a phase-sensitive means to beat the SQL. Squeezing can also generate quantum entanglement between observables at two distinct frequencies, producing two-mode squeezed states. Since its first experimental demonstration in 1985 (ref. 4 ), squeezing has become a resource for applications in quantum optics 5 , quantum information 6 and precision measurement 7 . The Josephson parametric amplifier (JPA) is a conventional approach to generate squeezed microwave photons (Fig. 1a ). JPA squeezers use a narrowband resonator and its resonant-enhanced circulating field to increase the interaction between photons and a single or few Josephson junctions. Josephson junctions are superconducting circuit elements with an inherently strong inductive nonlinearity with respect to the current traversing them. This is the nonlinearity that enables parametric amplification. However, the relatively large circulating field in JPAs strongly drives the nonlinearity of individual junctions, leading to unwanted higher-order nonlinear processes and saturation that impact squeezing performance 8 , 9 , 10 , 11 , 12 , 13 . Moreover, photon number fluctuations in the pump tone could lead to additional noise that reduces squeezing performance 14 . Fig. 1: Josephson travelling-wave parametric amplifier dispersion-engineered for a bichromatic pump. a , Circuit schematic of a conventional JPA. The resonant-enhancement of the field produces a narrowband frequency response. b , A repeating section of the dual-pump JTWPA. We can identify the L – C ladder that forms a 50-Ω transmission line from lumped elements and the two phase-matching resonators for dispersion-engineering. c , Degenerate four-wave mixing. d , Non-degenerate four-wave mixing. The picture shows the special case when the signal and idler are at the same frequency ω c at the centre between the two pumps. Pairs of two-mode squeezed photons (signal and idler) are created at frequencies symmetric about the centre frequency ω c . 
When the two photons are frequency-degenerate at ω c , this is referred to as single-mode squeezing. e , Micrograph of a 5-mm × 5-mm JTWPA chip. f , Magnified view of the structure showing the low-frequency lumped-element phase-matching resonator (blue), capacitors to ground C g (orange), high-frequency lumped-element phase-matching resonator (purple) and Josephson junctions (red). The colour-coded elements correspond to the circuit schematic in b . g , The JTWPA in the presence of a bichromatic pump transforms the vacuum field at the input into a squeezed field at the output through non-degenerate four-wave mixing. Several alternative approaches have been developed that address some of these limitations. For example, the impedance engineering of resonator-based JPAs has increased the bandwidth to the 0.5–0.8-GHz range 15 , 16 , but these devices still have a dynamic range limited to −110 to −100 dBm and sub-gigahertz bandwidth. Alternative approaches using superconducting nonlinear asymmetric inductive elements (SNAILs) for both resonant 17 , 18 , 19 and travelling-wave 20 , 21 parametric amplification feature a higher dynamic range in the −100 to −90-dBm range. However, both architectures require a magnetic field bias, making them subject to magnetic-field noise. Furthermore, the resonant version remains narrowband, and one travelling-wave approach 21 requires additional shunt resistors, which introduce dissipation and unwanted noise. So far, both approaches have been limited to 2–3-dB single-mode and two-mode squeezing. High kinetic inductance wiring has been used in place of Josephson junctions to realize the nonlinearity needed for both resonant 22 and travelling-wave parametric amplification 23 , 24 with higher dynamic range. However, the relatively weak nonlinearity of the wiring translates to a much larger requisite pump power to operate the devices, and the travelling-wave parametric amplifiers have larger gain ripple due to impedance variations on the long (up to 2 m) lines. Furthermore, although a single-mode quadrature noise (variance) reduction has been demonstrated in narrowband resonant nanowire devices, their degree of squeezing in decibels has yet to be quantified using a calibrated noise source 22 . Squeezing always involves two modes, a ‘signal’ and an ‘idler’. We note that there are finite bandwidths associated with measurement in experimental settings. To clarify the terminology used in this Article and draw comparison with other previous works, we define ‘two-mode’ as when the signal and idler are non-degenerate and their mode separation is much larger than the measurement bandwidth | ω s − ω i | ≫ B meas , and ‘single-mode’ as when the signal and idler are both nominally degenerate and within the measurement bandwidth | ω s − ω i | ≤ B meas . In this Article, we demonstrate a broadband single-mode and two-mode microwave squeezer using a dispersion-engineered, dual-pump Josephson travelling-wave parametric amplifier (JTWPA). As shown in Fig. 1b , the JTWPA contains a repeating structure called a unit cell, comprising a Josephson junction (red)—a nonlinear inductor—and a shunt capacitor (orange). Because their physical dimensions (tens of micrometres) are small compared to the operating wavelength (tens of millimetres) in the gigahertz regime, the junctions and capacitors are essentially lumped elements, constituting an effective inductance ( L ) and capacitance ( C ) per unit length.
With the proper choice of L and C , the lumped LC -ladder network forms a broadband 50-Ω transmission line, circumventing the bandwidth constraint of the JPA 25 and thereby enabling broadband operation. The use of many junctions—here we use more than 3,000—in a travelling-wave architecture accommodates larger pump currents before any individual junction becomes saturated 26 , resulting in a substantially higher dynamic-range device. Therefore, with proper phase-matching, the JTWPA has the potential to generate substantial squeezing and emit broadband entangled microwave photons through its wave-mixing processes. Like a centrosymmetric crystal, the JTWPA junction nonlinearity features a spatial-inversion symmetry (in the absence of a d.c. current) that results in χ(3)-type nonlinear electromagnetic interactions. These support both degenerate-pump four-wave mixing (DFWM) and non-degenerate-pump four-wave mixing (NDFWM). As shown in Fig. 1c , the DFWM process (2 ω p = ω s + ω i ) converts two frequency-degenerate-pump photons ( ω p ) into an entangled pair of signal ( ω s ) and idler ( ω i ) photons. When ω s ≠ ω p , energy conservation places the idler photon at a different frequency than the signal photon. This leads to two-mode squeezed photons and entanglement. However, DFWM has two drawbacks when considering single-mode squeezing, ω s = ω p . First, the signal and idler frequencies coincide with the strong pump, resulting in self-phase modulation that leads to unwanted phase mismatch, which cannot be compensated through dispersion 26 . Second, it is challenging to later separate the signal and idler photons from the ‘background’ pump photons. In contrast, we use here (Fig. 1d ) an NDFWM process ( ω 1 + ω 2 = ω s + ω i ) that generates both single-mode and two-mode squeezed states far from the pump frequencies ω 1 and ω 2 . To do this, we introduce a JTWPA that uses two pumps and dispersion-engineering to achieve the desired NDFWM interaction. The dual-pump JTWPA is fabricated in a niobium trilayer process on 200-mm silicon wafers. It exhibits a meandering geometry of its nonlinear transmission line with 3,141 Josephson junctions and shunt capacitors (Fig. 1e ). These are parallel-plate capacitors with silicon dioxide as their dielectric material. In addition, the JTWPA features two sets of interleaved phase-matching resonators, one (purple) at ω r1 = 2π × 5.2 GHz and the other (blue) at ω r2 = 2π × 8.2 GHz (Fig. 1f ). The phase-matching resonators comprise lumped-element parallel-plate capacitors with niobium pentoxide dielectric and meandering geometric inductors. As shown in Fig. 2a , the undriven JTWPA transmission S 21 is normalized with respect to the radiofrequency (RF) background of the experimental set-up, utilizing a pair of microwave switches for signal routing (inset). The transmission characterization informs us of important JTWPA parameters, including the frequency-dependent loss, and the frequencies and linewidths of the phase-matching resonators, which guide us in choosing the pump frequencies. Fig. 2: Amplification characteristics. a , Undriven JTWPA transmission S 21 normalized with respect to a through line with a SubMiniature version A (SMA) barrel that accounts for the JTWPA package connectors. Microwave switches route the signal through the two paths of approximately equal length. The JTWPA loss is approximately −0.00163 dB per unit cell, resulting primarily from TLSs 25 from the dielectric material—silicon dioxide—in the parallel plate capacitor, C g .
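To make the choice of L and C concrete: below its cutoff frequency, a lumped LC ladder presents a characteristic impedance of approximately \(\sqrt{L/C}\), and the per-cell inductance of a Josephson junction follows from the standard relation \(L_J = \Phi_0/(2\pi I_c)\). The critical current in the sketch below is an invented illustrative value, not a device parameter reported in this Article.

from scipy.constants import h, e, pi

PHI_0 = h / (2 * e)            # magnetic flux quantum

I_c = 3e-6                     # hypothetical junction critical current (3 uA)
L_J = PHI_0 / (2 * pi * I_c)   # Josephson inductance per unit cell (~110 pH)

Z_0 = 50.0                     # target characteristic impedance in ohms
C_g = L_J / Z_0**2             # shunt capacitance giving Z = sqrt(L/C) = 50 ohms
print(f"L_J = {L_J * 1e12:.0f} pH, C_g = {C_g * 1e15:.0f} fF")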
The orange dashed line is a numerical simulation of the JTWPA transmission ( Supplementary Information) . b , Phase-preserving gain measured using a microwave vector network analyser (red line) and a numerical simulation of the gain profile (black dotted line). The total bandwidth between the two pumps is ~2.5 GHz, and the total 3-dB bandwidth across the entire gain spectrum is more than 3.5 GHz. c , Experimental phase-sensitive amplification at ω c = 2π × 6.7037 GHz. The phase-sensitive extinction ratio (PSER) is ~56 dB. Pumping the JTWPA at two angular frequencies ω 1, 2 generates parametric amplification that satisfies the energy conservation relation ω s + ω i = ω 1 + ω 2 and leads to the desired single-mode and two-mode squeezing. However, NDFWM also creates unwanted photons through the frequency conversion process \(|\omega_{\mathrm{s}}-\omega_{\mathrm{i}'}|=|\omega_{1}-\omega_{2}|\) , where \(\omega_{\mathrm{i}'}\) is an extraneous idler angular frequency. This unwanted by-product does not participate in the desired two-mode squeezing, but rather, it is effectively noise that undermines squeezing performance. Fortunately, these unfavourable conversion processes are susceptible to phase mismatch and can be effectively reduced through dispersion-engineering for a wide range of pump powers. The efficiency of parametric amplification is determined by momentum conservation, that is, phase-matching 25 . To this end, we define a phase-mismatch function for the parametric amplification (PA) process associated with NDFWM: $$\Delta k_{12}^{\mathrm{PA}}=\left(1+2\beta_{1}^{2}+2\beta_{2}^{2}\right)\left(k_{1}+k_{2}-k_{\mathrm{s}}-k_{\mathrm{i}}\right)-\beta_{1}^{2}k_{1}-\beta_{2}^{2}k_{2},$$ (1) where k x = ω x / c are wavevectors at frequencies ω x , with x being the signal (s), idler (i) and pumps (1 and 2), and c is the speed of light for EM waves travelling in the JTWPA. The parameter β 1, 2 ≡ I 1, 2 /4 I c is a dimensionless pump amplitude scaled by the junction critical current I c , and I 1, 2 are the pump currents at frequencies ω 1, 2 , respectively. The linear wavevectors k x entering the phase-mismatch functions are determined by the unit-cell series impedance and parallel admittance to ground along the JTWPA ( Supplementary Information) . The pump-power-dependent terms in equation ( 1 )—those with the \(\beta_{1,2}^{2}\) factors—lead to phase mismatch that can be corrected. To achieve this, we adopt the dispersion-engineering approach of ref. 25 and extend it to two phase-matching resonators placed periodically throughout the amplifier. The resonator frequencies are chosen to be near-resonant with the desired pump frequencies. The modified admittance of the transmission line about these resonances leads to a rapid change in phase with frequency. Tuning the pump frequencies across the resonances thereby enables us to retune the pump phases periodically along the device and control the degree of phase-matching. The precise selection of pump frequencies determines the phase-matching condition and thereby enhances and suppresses different nonlinear processes. We preferentially phase-match the parametric amplification process, ω 1 + ω 2 = ω s + ω i .
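As a concrete illustration, equation (1) is easy to evaluate numerically. The sketch below uses made-up wavevector and pump-amplitude values purely to show the bookkeeping; it does not represent the device's actual dispersion:

```python
def delta_k_pa(k1, k2, ks, ki, beta1, beta2):
    """Phase mismatch of equation (1) for the NDFWM parametric-amplification
    process; k's are linear wavevectors and beta_x = I_x / (4 * I_c)."""
    prefactor = 1 + 2 * beta1**2 + 2 * beta2**2
    return prefactor * (k1 + k2 - ks - ki) - beta1**2 * k1 - beta2**2 * k2

# Hypothetical example: even with perfect linear matching (k1 + k2 = ks + ki),
# a pump-power-dependent residual remains for the resonators to compensate.
print(delta_k_pa(k1=1.00, k2=1.60, ks=1.30, ki=1.30, beta1=0.1, beta2=0.1))
# -> approximately -0.026 (rad per unit length)
```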
This is achieved if \(\Delta k_{12}^{\mathrm{PA}}\simeq 0\) (equation ( 1 )), while all other processes are highly phase-mismatched. Experimentally, we sweep the pump powers and frequencies to identify pump parameters that simultaneously maximize the dual-pump gain and minimize the single-pump gain. As shown in Fig. 2b , with both pumps on, we obtain more than 20-dB phase-preserving gain over more than 3.5-GHz total bandwidth—comparable with the single-pump JTWPA 25 and substantially broader than JPAs 9 , 27 , 28 , 29 , 30 . The 1-dB compression point at 20-dB gain is −98 dBm, capable of amplifying more than 35,000 photons per microsecond within the microwave C-band (4–8 GHz) and 20 to 30 dB higher than conventional resonator-based squeezers 12 , 29 , 31 . The large dynamic range enables the JTWPA to be a bright source of squeezed microwave photons. At the centre of the two pump frequencies, ω c = ( ω 1 + ω 2 )/2, the signal and idler interfere constructively or destructively, depending on their relative phase, leading to phase-sensitive amplification and deamplification. We characterize such interference by injecting a probe tone at frequency ω c and measuring the amplifier output as a function of the probe phase, θ probe . Figure 2c shows the JTWPA output phase-sensitive gain with pumps on (orange) normalized to the case with pumps off (grey). The phase-sensitive extinction ratio (PSER), defined as the difference between the maximum phase-sensitive amplification and deamplification, is measured to be 56 dB, a large value compared with those reported so far with superconducting Josephson-junction circuits 12 , 27 , 29 . Considering vacuum as the input to the JTWPA, the squeezing level—the amount of noise reduction relative to vacuum fluctuations in decibels, dB Sqz —can be extracted based on the measurement efficiency η meas of the output chain. Determining the efficiency requires an in situ noise power calibration at the mixing chamber of a dilution refrigerator. Here we employ two independent, calibrated sources: a qubit coupled to a waveguide 32 and a shot-noise tunnel junction 33 . Both give consistent results, and we use these calibrated sources to extract the system noise temperature T sys to calculate the measurement efficiency η meas (ref. 34 ): $$\eta_{\mathrm{meas}}=\frac{\hbar\omega}{2k_{\mathrm{B}}T_{\mathrm{sys}}},$$ (2) where k B and ℏ are the Boltzmann and reduced Planck constants, respectively. For example, T sys from the output of the JTWPA at 30 mK in the dilution refrigerator to the room temperature detectors is ~2.5 K at 6.7037 GHz, corresponding to a measurement efficiency of η meas ≈ 6%. By accounting for the gain and loss in the entire measurement chain, we determine an ‘input-referred’ noise at the JTWPA reference plane. See Supplementary Information for details on the calibration methods and results. We first characterize the single-mode squeezed vacuum of the dual-pump JTWPA. To do this, we apply vacuum to the JTWPA input using a cold 50-Ω resistive load. We measure and compare the output field of the JTWPA for two cases: (1) the output with both pumps off (that is, vacuum) and (2) the output with both pumps on (that is, squeezed vacuum). In both cases, the JTWPA output field propagates up the measurement chain to a room-temperature heterodyne detector comprising an IQ mixer that downconverts the signal into its in-phase (I) and quadrature (Q) components at 50 MHz.
These two components are then sampled using a field-programmable gate array (FPGA)-based digitizer with a sampling rate of 500 MS s −1 (S, sample). The components are then digitally demodulated to obtain an I–Q pair from which one can derive the amplitude and phase of the output field. To acquire I–Q pairs, the pumps—and thus the squeezing—are periodically switched on and off with a duration of 10 μs each. For each 10-μs acquisition, only the inner 8 μs is digitally demodulated to eliminate sensitivity to any turn-on and turn-off transients. The 8-μs signal is integrated, corresponding to a measurement bandwidth B meas ≈ 125 kHz, and yields a single I–Q pair. We interleave the squeezer-on and squeezer-off acquisitions to reduce sensitivity to experimental drift between the measurements. When the squeezer is off, we extract an isotropic Gaussian noise distribution for the vacuum state with variance \(\Delta X_{\mathrm{SQZ},\mathrm{off}}^{2}\) . When the squeezer is on, the squeezed vacuum state exhibits an elliptical Gaussian noise distribution as shown in Fig. 3a . In total, we acquire six million I–Q pairs to reconstruct each histogram. We then extract the variance along the squeezing axis \(\Delta X_{\mathrm{SQZ},\min}^{2}\) and along the anti-squeezing axis \(\Delta X_{\mathrm{SQZ},\max}^{2}\) . Comparing the values \(\Delta X_{\mathrm{SQZ},\min}^{2}\) and \(\Delta X_{\mathrm{SQZ},\max}^{2}\) to the vacuum level \(\Delta X_{\mathrm{SQZ},\mathrm{off}}^{2}\) along with the measurement gain and efficiency enables us to determine the degree of squeezing and anti-squeezing, respectively (see Supplementary Information for further details on the measurement protocol). Fig. 3: Single-mode squeezed vacuum. a , Output field histogram of an example squeezed vacuum state with different confidence ellipses. The histogram comprises 6 × 10 6 data points. b , At 6.7037 GHz, measurement of the change in squeezing variance (relative to vacuum) versus asymmetry in the pump powers P 1 and P 2 . Coloured vertical lines indicate six different values of P 1 in units of nanowatts, used in the one-dimensional (1D) measurement in d . The power is referred to the input of the squeezer. c , Experimental data for the parametric gain as a function of P 2 , with P 1 fixed at 1.57 nW (at the input of the squeezer). d , Measurement of squeezing and anti-squeezing versus P 2 with six different P 1 configurations (coloured data). The data are presented as mean values of three sets of repeated measurements (each with 6 × 10 6 sample points). Their statistical variation is almost entirely due to the uncertainty in estimating the noise temperature ( Supplementary Information) , which dominates the error bars shown in the plot as an estimation range for the squeezing/anti-squeezing levels. We confirm that there is no squeezing when the pumps are turned off. The squeezing level increases as a function of P 2 as gain increases, but eventually degrades as the pumps become too strong and gain decreases. The shaded regions and trend lines corresponding to constant-loss and loss-saturation models are detailed in the Supplementary Information . The observed squeezing levels are consistent with a saturated loss of approximately −1 dB at high gain.
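Concretely, the extraction just described amounts to a principal-axis decomposition of the on/off I–Q covariances plus an efficiency correction. The sketch below is our own minimal rendering of that pipeline, assuming the simple linear model V_meas/V_vac = η·S + (1 − η) for how loss admixes vacuum noise; it is not the authors' exact calibration code:

```python
import numpy as np

hbar = 1.054571817e-34  # J s
kB = 1.380649e-23       # J / K

def eta_meas(freq_hz, t_sys_kelvin):
    # Equation (2): measurement efficiency from the system noise temperature
    return hbar * 2 * np.pi * freq_hz / (2 * kB * t_sys_kelvin)

def squeezing_db(iq_on, iq_off, eta):
    """Input-referred squeezing/anti-squeezing (dB) from demodulated I-Q pairs,
    assuming the linear-loss model V_meas/V_vac = eta * S + (1 - eta)."""
    v_vac = np.var(iq_off, axis=0).mean()                  # isotropic vacuum variance
    v_min, v_max = np.linalg.eigvalsh(np.cov(iq_on, rowvar=False))  # principal axes
    s = lambda v: (v / v_vac - (1 - eta)) / eta            # invert the loss model
    return 10 * np.log10(s(v_min)), 10 * np.log10(s(v_max))

print(eta_meas(6.7037e9, 2.5))  # ~0.064, i.e. the ~6% quoted in the text
```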
The squeezing process is sensitive to the power of both pumps due to the desired phase-matching condition for parametric amplification (for example, \(\Delta k_{12}^{\mathrm{PA}}\simeq 0\) in equation ( 1 )) and also residual parasitic processes such as frequency conversion. To maximize the degree of squeezing, we perform a coarse measurement of the \(\Delta X_{\mathrm{SQZ},\min}^{2}\) (plotted relative to vacuum) as a function of pump powers. This enables us to empirically identify the pump powers P 1 and P 2 that correspond to higher squeezing levels. For six such near-optimal values (the six different colours in Fig. 3d ), we carry out finer scans of squeezing, anti-squeezing and parametric gain as a function of P 2 for fixed P 1 . Accounting for the measurement efficiency η meas at the output, we extract a squeezing level of \(-11.35^{+1.57}_{-2.49}\,\mathrm{dB}\) and an anti-squeezing level of \(15.71^{+0.14}_{-0.15}\,\mathrm{dB}\) at the optimal pump conditions, comparable with the best performance demonstrated by resonator-based squeezers in superconducting circuits 8 , 9 , 11 , 12 , 28 , 29 , 34 , 35 , 36 . Squeezing performance is sensitive to dissipation (loss), which acts as a noise channel. Within our JTWPA, loss primarily originates from defects—modelled as two-level systems (TLSs)—within the plasma-enhanced chemical-vapour-deposited (PE-CVD) SiO 2 dielectric used in the parallel-plate shunt capacitors. Previous studies have shown a quality factor Q ≈ 10 3 associated with this dielectric in the single-photon regime, observed at low power and low temperature. In this limit, the TLSs readily absorb photons from the JTWPA and cause relatively high loss. We observe high levels of squeezing despite the use of such lossy materials in the JTWPA. We conjecture that the reason for this is TLS saturation. At sufficiently high powers (large photon numbers), the TLSs saturate and the loss is reduced 37 . We can understand the net impact of TLSs on squeezing performance by considering the JTWPA to be a cascade of individual squeezers. The amount of added squeezing becomes position-dependent and increases with the increased gain at the output end. The TLSs are also distributed along the JTWPA, and they become saturated towards the output end due to the larger number of photons associated with the higher gain. Therefore, the impact of loss on squeezing performance is reduced towards the output where the marginal squeezing is the largest 38 . As a result, we expect loss saturation at large signal gain to improve squeezing performance, as we observe in our experiment (Fig. 3d at higher pump power P 2 ). To verify this conjecture, we independently measure the JTWPA loss as a function of photon number by varying the JTWPA temperature. The loss at small thermal photon numbers (<50 mK) is around −5 dB. This reduces to −1 dB for large photon numbers (>800 mK). These two limits are shown as dashed lines using a constant-loss model in Fig. 3d . For low pump power P 2 , our data are closer to the −5-dB line. At higher powers, where we see maximal squeezing, the data are more consistent with the −1-dB line corresponding to saturated TLSs. We then use numerical simulations to calculate the photon number in the JTWPA from its input to its output. The photon number is converted to loss from the independent loss-temperature measurement, and we plot the corresponding squeezing due to this distributed loss (solid line).
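To make the cascaded-squeezer intuition concrete, here is a toy model of our own, not the authors' numerical simulation: the line is discretized into unit cells, each applying a small squeezing step followed by a small beamsplitter loss, with the vacuum variance normalised to 1. Only the ~20-dB gain scale, the −1/−5-dB loss figures and the junction count come from the text; everything else is an assumption:

```python
import numpy as np

def distributed_squeezing_db(antisqueeze_db, total_loss_db, n_cells=3141):
    """Toy model of a JTWPA as n_cells weak squeezers, each followed by a
    small beamsplitter loss; vacuum variance is normalised to 1."""
    r_total = antisqueeze_db / (20 * np.log10(np.e))  # total squeezing parameter
    dr = r_total / n_cells                            # per-cell squeezing step
    t = 10 ** (-total_loss_db / (10 * n_cells))       # per-cell power transmissivity
    v = 1.0                                           # start in vacuum
    for _ in range(n_cells):
        v *= np.exp(-2 * dr)      # ideal squeezing of the quiet quadrature
        v = t * v + (1 - t)       # loss admixes vacuum noise
    return 10 * np.log10(v)

print(distributed_squeezing_db(20, 1))  # ~ -12.6 dB with the saturated -1-dB loss
print(distributed_squeezing_db(20, 5))  # ~ -6.9 dB with the unsaturated -5-dB loss
```

Even this crude model reproduces the qualitative behaviour of the solid line in Fig. 3d, which shows the authors' version of this calculation.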
It starts at −5 dB for low powers, and reduces toward −1 dB at high powers due to loss saturation. The high degree of squeezing observed in this device is consistent with the loss saturation model to within ~1–2 dB at high powers. (See Supplementary Information for more details.) At intermediate powers, the agreement is not as good. This is probably due to our optimizing for maximum squeezing at high pump powers. Parasitic processes that are largely absent at high powers may not be completely suppressed at intermediate powers. There is ongoing research to better understand and suppress these unwanted modes 39 , but this is outside the scope of this Article. Using the same optimized pump configuration, we generate and characterize two-mode squeezed vacuum as a function of the frequency separation ω s − ω i between the two modes. We switch to a dual-readout configuration 29 that simultaneously demodulates the signal and idler using two separate FPGA-based digitizers, circumventing bandwidth limitations of the digitizer and other components in the experiment, such as IQ-mixers, low-frequency amplifiers and so on. We directly measure up to a separation of 373 MHz with a maximum squeezing of \(-9.54^{+1.11}_{-1.63}\,\mathrm{dB}\) , an average squeezing of −6.71 dB, and an average anti-squeezing of 16.12 dB. The noise characterization method limits the measurement efficiency calibration to a frequency range of ~500 MHz, and therefore we cannot directly calibrate the degree of squeezing beyond this range. Nonetheless, squeezing is expected to continue beyond 500 MHz (ref. 40 ). As shown in Fig. 4d , we characterize the variance change between the squeezed and the vacuum quadratures. Below 373 MHz, the results are consistent with the squeezing measured in Fig. 4c . Above 373 MHz, the JTWPA exhibits a consistently low variance out to 1,500 MHz, beyond which we are again limited for technical reasons, in this case, by the onset of a filter roll-off. Because the signal and idler photons propagate at different frequencies, frequency-dependent variations of the loss and nonlinear processes can lead to frequency-dependent two-mode squeezing performance 38 . However, based on the flat and broadband gain profile observed in our JTWPA, we infer consistent squeezing levels out to 1.5-GHz total signal-to-idler bandwidth, and net squeezing out to 1.75-GHz total signal-to-idler bandwidth. These results represent an almost two-orders-of-magnitude increase in two-mode squeezing bandwidth compared to conventional resonator-based squeezers 9 , 11 , 31 , 41 , 42 , 43 . Fig. 4: Broadband two-mode squeezed vacuum. a , Difference in the output field histograms between vacuum (red) and two-mode squeezed vacuum (blue). The histograms show the X and P quadratures (equivalently, the in-phase and quadrature components) of the squeezed and vacuum states with signal and idler 320 MHz detuned from each other and centred at ω c . b , Illustration of the frequency spectrum for the two-mode squeezing process. c , Measurement of two-mode squeezing versus frequency separation | ω s − ω i |/2π between the signal and the idler, similar to the single-mode squeezing results. The data are presented as mean values of three sets of repeated measurements (each with 6 × 10 6 sample points).
Their statistical variation is almost entirely due to the uncertainty in estimating the noise temperature ( Supplementary Information) , which dominates the error bars shown in the plot as an estimation range for the squeezing/anti-squeezing levels. The dashed lines indicate the average values for the measured squeezing/anti-squeezing levels. d , Percent change in variance between squeezed vacuum and vacuum for the X 1 X 2 (or P 1 P 2 ) quadrature as measured using two digitizers (see main text). The beige-coloured shading indicates the region where there is no measurable squeezing. The spike in the blue line plot (squeezing quadrature) around 1,500 MHz corresponds to the extra mode generated by the JTWPA. In conclusion, we have designed and demonstrated a dual-pump JTWPA that exhibits both phase-preserving and phase-sensitive amplification, and both single-mode and two-mode squeezing. We have measured 20-dB parametric gain over more than 3.5 GHz of total instantaneous bandwidth (1.75 GHz each for the signal and idler) with a 1-dB compression point of −98 dBm. This gain performance is comparable with the single-pump JTWPA, yet it features minimal gain ripple and gain roll-off within the frequency band of interest. This advance alone holds the promise to improve readout of frequency-multiplexed signals 44 . In addition, the favourable performance of this device enabled us to measure a 56-dB phase-sensitive extinction ratio, useful for qubit readout in quantum computing and phase regeneration in quantum communications. We have also achieved a single-mode squeezing level of \(-11.35^{+1.57}_{-2.49}\,\mathrm{dB}\) , and two-mode squeezing levels averaging −6.71 dB with a maximum value of \(-9.54^{+1.11}_{-1.63}\,\mathrm{dB}\) measured directly over ~400 MHz and extending over more than 1.5-GHz total bandwidth (signal to idler frequency separation). The results enable direct applications of the JTWPA in superconducting circuits, such as suppressing radiative spontaneous emission from a superconducting qubit 10 and enhancing the search for dark-matter axions 45 . We have observed high levels of squeezing, despite the presence of dielectric loss from the SiO 2 capacitors, which we attribute predominantly to distributed TLS saturation in the high-gain regions of our JTWPA. Nonetheless, squeezing performance can be further improved by introducing a lower-loss capacitor dielectric. Performance can also be improved by exploring distributed geometries and Floquet-engineered JTWPAs that reduce the impact of unwanted parasitic processes 39 . The broad bandwidth and high degree of squeezing demonstrated in our device represent a resource-efficient means to generate multimode, non-classical states of light with applications spanning qubit-state readout 46 , 47 , quantum illumination 48 , 49 , teleportation 29 , 34 , 50 and quantum state preparation for continuous-variable quantum computing in the microwave regime 40 , 51 . In addition, the technique of using dispersion-engineering to phase-match different nonlinear processes can be extended to explore dynamics within superconducting Josephson metamaterials with engineered properties not otherwise found in nature. Data availability The data supporting the findings of this study are available from the corresponding author upon reasonable request and cognizance of our US Government sponsors who funded the work.
Code availability The code used for the analyses is available from the corresponding author upon reasonable request and with the permission of the US Government sponsors who funded the work.
A certain amount of noise is inherent in any quantum system. For instance, when researchers want to read information from a quantum computer, which harnesses quantum mechanical phenomena to solve certain problems too complex for classical computers, the same quantum mechanics also imparts a minimum level of unavoidable error that limits the accuracy of the measurements. Scientists can effectively get around this limitation by using "parametric" amplification to "squeeze" the noise—a quantum phenomenon that decreases the noise affecting one variable while increasing the noise that affects its conjugate partner. While the total amount of noise remains the same, it is effectively redistributed. Researchers can then make more accurate measurements by looking only at the lower-noise variable. A team of researchers from MIT and elsewhere has now developed a new superconducting parametric amplifier that operates with the gain of previous narrowband squeezers while achieving quantum squeezing over much larger bandwidths. Their work is the first to demonstrate squeezing over a broad frequency bandwidth of up to 1.75 gigahertz while maintaining a high degree of squeezing (selective noise reduction). In comparison, previous microwave parametric amplifiers generally achieved bandwidths of only 100 megahertz or less. This new broadband device may enable scientists to read out quantum information much more efficiently, leading to faster and more accurate quantum systems. By reducing the error in measurements, this architecture could be utilized in multiqubit systems or other metrological applications that demand extreme precision. "As the field of quantum computing grows, and the number of qubits in these systems increases to thousands or more, we will need broadband amplification. With our architecture, with just one amplifier you could theoretically read out thousands of qubits at the same time," says electrical engineering and computer science graduate student Jack Qiu, who is a member of the Engineering Quantum Systems Group and lead author of the paper detailing this advance. The senior authors are William D. Oliver, the Henry Ellis Warren professor of electrical engineering and computer science and of physics, director of the Center for Quantum Engineering, and associate director of the Research Laboratory of Electronics; and Kevin P. O'Brien, the Emanuel E. Landsman Career Development professor of electrical engineering and computer science. The paper will appear in Nature Physics. Squeezing noise below the standard quantum limit Superconducting quantum circuits, like quantum bits or "qubits," process and transfer information in quantum systems. This information is carried by microwave electromagnetic signals comprising photons. But these signals can be extremely weak, so researchers use amplifiers to boost the signal level such that clean measurements can be made. However, a quantum property known as the Heisenberg Uncertainty Principle requires a minimum amount of noise be added during the amplification process, leading to the "standard quantum limit" of background noise. However, a special device, called a Josephson parametric amplifier, can reduce the added noise by "squeezing" it below the fundamental limit by effectively redistributing it elsewhere. Quantum information is represented in the conjugate variables, for example, the amplitude and phase of electromagnetic waves. 
However, in many instances, researchers need only measure one of these variables—the amplitude or the phase—to determine the quantum state of the system. In these instances, they can "squeeze the noise," lowering it for one variable, say amplitude, while raising it for the other, in this case phase. The total amount of noise stays the same due to Heisenberg's Uncertainty Principle, but its distribution can be shaped in such a way that less noisy measurements are possible on one of the variables. A conventional Josephson parametric amplifier is resonator-based: It's like an echo chamber with a superconducting nonlinear element called a Josephson junction in the middle. Photons enter the echo chamber and bounce around to interact with the same Josephson junction multiple times. In this environment, the system nonlinearity—realized by the Josephson junction—is enhanced and leads to parametric amplification and squeezing. But, since the photons traverse the same Josephson junction many times before exiting, the junction is stressed. As a result, both the bandwidth and the maximum signal the resonator-based amplifier can accommodate are limited. The MIT researchers took a different approach. Instead of embedding a single or a few Josephson junctions inside a resonator, they chained more than 3,000 junctions together, creating what is known as a Josephson traveling-wave parametric amplifier. Photons interact with each other as they travel from junction to junction, resulting in noise squeezing without stressing any single junction. Their traveling-wave system can tolerate much higher-power signals than resonator-based Josephson amplifiers without the bandwidth constraint of the resonator, leading to broadband amplification and high levels of squeezing, Qiu says. "You can think of this system as a really long optical fiber, another type of distributed nonlinear parametric amplifier. And, we can push to 10,000 junctions or more. This is an extensible system, as opposed to the resonant architecture," he says. Nearly noiseless amplification A pair of pump photons enters the device, serving as the energy source. Researchers can tune the frequency of photons coming from each pump to generate squeezing at the desired signal frequency. For instance, if they want to squeeze a 6-gigahertz signal, they would adjust the pumps to send photons at 5 and 7 gigahertz, respectively. When the pump photons interact inside the device, they combine to produce an amplified signal with a frequency right in the middle of the two pumps. This is a special case of a more general phenomenon called nonlinear wave mixing. "Squeezing of the noise results from a two-photon quantum interference effect that arises during the parametric process," he explains. This architecture enabled them to reduce the noise power by a factor of 10 below the fundamental quantum limit while operating with 3.5 gigahertz of amplification bandwidth—a bandwidth almost two orders of magnitude larger than that of previous devices. Their device also demonstrates broadband generation of entangled photon pairs, which could enable researchers to read out quantum information more efficiently with a much higher signal-to-noise ratio, Qiu says. While Qiu and his collaborators are excited by these results, he says there is still room for improvement. The materials they used to fabricate the amplifier introduce some microwave loss, which can reduce performance.
Moving forward, they are exploring different fabrication methods that could reduce the insertion loss. "This work is not meant to be a standalone project. It has tremendous potential if you apply it to other quantum systems—to interface with a qubit system to enhance the readout, or to entangle qubits, or extend the device operating frequency range to be utilized in dark matter detection and improve its detection efficiency. This is essentially like a blueprint for future work," he says.
10.1038/s41567-022-01929-w
Other
New 13-million-year-old infant skull sheds light on ape ancestry
Isaiah Nengo et al, New infant cranium from the African Miocene sheds light on ape evolution, Nature (2017). DOI: 10.1038/nature23456 Journal information: Nature
http://dx.doi.org/10.1038/nature23456
https://phys.org/news/2017-08-million-year-old-infant-skull-ape-ancestry.html
Abstract The evolutionary history of extant hominoids (humans and apes) remains poorly understood. The African fossil record during the crucial time period, the Miocene epoch, largely comprises isolated jaws and teeth, and little is known about ape cranial evolution. Here we report on the, to our knowledge, most complete fossil ape cranium yet described, recovered from the 13-million-year-old Middle Miocene site of Napudet, Kenya. The infant specimen, KNM-NP 59050, is assigned to a new species of Nyanzapithecus on the basis of its unerupted permanent teeth, visualized by synchrotron imaging. Its ear canal has a fully ossified tubular ectotympanic, a derived feature linking the species with crown catarrhines. Although it resembles some hylobatids in aspects of its morphology and dental development, it possesses no definitive hylobatid synapomorphies. The combined evidence suggests that nyanzapithecines were stem hominoids close to the origin of extant apes, and that hylobatid-like facial features evolved multiple times during catarrhine evolution. Main Hominoids underwent a major evolutionary radiation during the Miocene epoch, with over 40 widely recognized species in at least 30 genera 1 . Despite this multitude of taxa, only about one-third are known from any cranial remains, and no more than half a dozen preserve any substantial portion beyond the face and palate 2 . Thus, much about hominoid cranial evolution remains poorly understood, especially with respect to the ancestral morphology that gave rise to the clade containing extant apes and humans. Importantly, the African fossil record lacks any reasonably complete hominoid crania between 17 and 7 million years (Myr) ago, and no cranial specimens are known at all from between 14 and 10 Myr (refs 3 , 4 , 5 , 6 ), greatly hampering the analysis of hominoid evolution. The KNM-NP 59050 cranium reported here was recovered from Napudet (South Turkana, Kenya) and dated to 13 Myr; it thus falls within this critical, yet poorly represented, period. The infant specimen is nearly complete, but is missing the deciduous dental crowns ( Fig. 1a–d and Extended Data Fig. 1a–f ). The unerupted adult dentition, brain endocast, and bony labyrinths were visualized using propagation phase-contrast X-ray synchrotron microtomography (PPC-SR-μCT; Fig. 1e–h ) 7 . The crown morphology of the fully formed I 1 s and M 1 s, as well as the partly formed M 2 s ( Fig. 2 and Supplementary Data 1 ), indicates that the specimen warrants attribution to a new species in the genus Nyanzapithecus . Figure 1: KNM-NP 59050. a – d , Specimen as preserved in anterior view ( a ), superior view ( b ), inferior view ( c ), and left lateral view ( d ). e – h , Three-dimensional visualizations based on X-ray microtomography, in views matching a – d , and with the bone rendered transparent to show the deciduous dental roots (beige), the unerupted permanent tooth crowns (grey), the bony labyrinths (green), and the endocast (blue transparent in e – g and beige in h ; the olfactory fossa marked by the blue line placed directly underneath). Scale bar, 5 cm. Figure 2: Unerupted permanent dentition. a – g , Three-dimensional X-ray microtomography-based visualization of the left I 1 to M 2 , respectively, shown from left to right in occlusal, mesial, lingual, distal, buccal/labial views. h – n , The right I 1 to M 2 as shown for the left side. In occlusal view, the lingual side of the crown is down. Scale bar, 5 mm.
Systematic palaeontology Order Primates Linnaeus, 1758 Suborder Anthropoidea Mivart, 1864 Infraorder Catarrhini Geoffroy, 1812 Superfamily Hominoidea Gray, 1825 Subfamily Nyanzapithecinae Harrison, 2002 Genus Nyanzapithecus Harrison, 1986 Nyanzapithecus alesi sp. nov. Etymology. Specific name taken from the Turkana word for ancestor, Ales . Holotype. KNM-NP 59050, an almost complete infant cranium preserving fully formed but unerupted I 1 and M 1 crowns, as well as partly formed crowns of all other permanent teeth, except the not yet initiated M 3 s. Locality and horizon. Napudet (2° 57′ N, 35° 52′ E), Turkana Basin, Kenya, Emunyan Beds, Brown Bedded Tuffs ( Extended Data Fig. 2a ). Geological age. 13 Myr. Diagnosis. A large species of Nyanzapithecus , with M 1 significantly larger than in N. pickfordi ( P < 0.05), N. harrisoni ( P < 0.01), and probably N. vancouveringorum ( Fig. 3a and Extended Data Table 1a ; one-tailed t -test, Bonferroni corrected). The upper molars of N. alesi differ from those of N. vancouveringorum in being more waisted, and in having higher and more inflated molar cusps, a very restricted trigon, and a mesial shelf. N. alesi differs from N. vancouveringorum and N. harrisoni in having an M 1 with a paracone approximately the same size as the metacone, and a protocone much larger than the hypocone. A reduced lingual cingulum also distinguishes N. alesi from N. harrisoni , but not from either N. vancouveringorum or N. pickfordi . N. alesi further differs from N. pickfordi in that the prehypocrista of the M 1 meets the base of the protocone rather than the crista obliqua. Figure 3: Dental metric comparisons of KNM-NP 59050. a , M 1 area (maximum mesiodistal × maximum buccolingual) compared with that of published Nyanzapithecus species. b , M 1 shape (maximum mesiodistal/maximum buccolingual, MD/BL) compared with extant and fossil hominoids. KNM-NP 59050 (dashed line) falls exclusively within the nyanzapithecine range among fossils. c , Relative I 1 size (I 1 maximum mesiodistal/M 1 maximum mesiodistal) compared with extant and fossil hominoids. Nyanzapithecus , including KNM-NP 59050, falls closest to Symphalangus . Samples are given in Supplementary Data 2 . For each sample the mean, the range between the first and third quartiles (box), and the highest and lowest values (whiskers) are indicated, with small ticks marking measured values in the sample. Cranial morphology KNM-NP 59050 is a nearly complete but somewhat distorted cranium of an infant primate ( Fig. 1 ). The cranium is slightly crushed bilaterally and the posterior portion of the basicranium is both broken and distorted. All the deciduous tooth crowns are broken off, but their roots are preserved. The permanent teeth are unerupted, with the right I 1 being visible in its crypt. The overall dimensions of KNM-NP 59050 are similar to those of Symphalangus crania of equivalent dental age, except for the maxillo-alveolar size, which is similar to Hoolock ( Extended Data Table 1b ). Relative to overall cranial size, the snout is small as in juvenile hylobatids, and smaller than in extant juvenile hominids ( Extended Data Figs 3 and 4a–c ). This difference between hylobatids and hominids persists into adulthood ( Extended Data Fig. 4b, d ), and assuming that N. alesi followed the same pattern, its snout would have been relatively small as an adult, unlike that of Afropithecus and Saadanius .
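Returning briefly to Fig. 3: the crown indices plotted there are simple ratios of standard calliper measurements. A minimal sketch follows; the measurement values in the example are hypothetical, not those of KNM-NP 59050:

```python
def m1_area(md_mm, bl_mm):
    # Fig. 3a: crown area = maximum mesiodistal x maximum buccolingual (mm^2)
    return md_mm * bl_mm

def m1_shape(md_mm, bl_mm):
    # Fig. 3b: crown-shape index = MD / BL; > 1 means mesiodistally elongated
    return md_mm / bl_mm

def relative_i1_size(i1_md_mm, m1_md_mm):
    # Fig. 3c: relative incisor size = I1 max mesiodistal / M1 max mesiodistal
    return i1_md_mm / m1_md_mm

# Hypothetical measurements for illustration only
print(m1_area(9.0, 7.5), m1_shape(9.0, 7.5), relative_i1_size(5.4, 9.0))
# -> 67.5 (mm^2), 1.2 (elongated crown), 0.6
```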
The orbits appear large, but are well within the expected range for an extant juvenile hominoid of its size ( Extended Data Fig. 4e ). The orbits are slightly taller than wide, which may reflect the bilateral distortion ( Extended Data Fig. 3a, b ). The supraorbital costae/ridges are poorly defined, similar to other infants, and in contrast to the condition observed in most adult extant hominoids. The lateral and inferior margins of the orbit are protruding and bar-like, which is unlike the flat/continuous margin seen among infant and adult great apes but similar to the morphology observed in hylobatids (infants and adults), pliopithecoids, and, to a lesser extent, Micropithecus 8 . The consistency of this feature among extant adult and juvenile apes suggests that it is ontogenetically stable. The interorbital area is relatively wide compared with the condition in extant juvenile hominids, but near the mean width for juvenile hylobatids, again a pattern probably maintained into adulthood and seen in Afropithecus and Turkanapithecus ( Extended Data Fig. 5a, b ). The lacrimal fossa is in line with the medial orbital margin, as in extant hylobatids and Victoriapithecus , rather than being clearly within the orbit as in Pan and Gorilla , or clearly anterior to the orbital margin as commonly seen in Aegyptopithecus 9 . Nasion is located about a third of the way down between the levels of the inferior and superior orbital margins. Although the edge of the nasal aperture is broken, it is likely that the premaxillae ended at the lower edge of the nasals, as in other Miocene hominoids 10 , but unlike the condition seen in most cercopithecoids, juvenile or adult, or Aegyptopithecus , where the premaxillae extend superiorly between the nasals and the maxillae. The nasals are tall and rectangular; they do not appear to be hour-glass or diamond-shaped, as in chimpanzees of all ages, or to broaden inferiorly as in some cercopithecoids and gorillas (infants and adults). The nasal aperture is relatively narrow, as in Pongo ( Extended Data Fig. 5c, d ), and close to the width of the premaxillae. The malar region is oriented posteroinferiorly, as in adult hylobatids and many extant juvenile catarrhines, contrasting with a more vertical or anteroinferior orientation in most adult Old World anthropoids, including pliopithecoids, Victoriapithecus , Ekembo , and Turkanapithecus . In extant catarrhine taxa, malar orientation changes during ontogeny, becoming more anteroinferior by adulthood. This makes it difficult to predict what the adult condition in KNM-NP 59050 would have looked like, but the preserved morphology is most similar to adult and juvenile hylobatids. The root of the zygomatic arch originates close to the alveolar margin, which is similar to the condition seen in Aegyptopithecus and Rangwapithecus , and probably reflects a primitive retention 11 . Other Miocene catarrhines, as well as hylobatids of all ages, vary in this feature. However, the inferior base of the zygomatic root is positioned relatively higher on the maxilla of juvenile and adult extant great apes, a notable distinction from many other catarrhines. The greatest cranial breadth is between the well-developed supramastoid crests, unlike in hylobatids of the same dental age, where the parietal bosses still project more laterally, and the crests are incipient ( Extended Data Fig. 3b, c ). The coronal suture appears to be oriented in a mediolateral direction, as in extant hominids, Ekembo , and Turkanapithecus .
This contrasts with the anteriorly oriented, V-shaped coronal suture seen in many cercopithecoids, platyrrhines, and hylobatids ( Extended Data Fig. 3 ). Related to this configuration, the frontal is anteroposteriorly shorter than the parietal in the midsagittal plane ( Extended Data Fig. 5e, f ). The squamous portion of the left temporal is nearly a hemi-circle, with the highest point of the temporoparietal suture near the anteroposterior middle of the bone, above the posterior root of the zygomatic arch, the glenoid fossa, and the external acoustic meatus. This condition is most similar to that seen in Symphalangus infants and adults, but it is different from the condition observed in other hylobatids and great apes, in which the suture is oriented horizontally or anteroinferiorly, respectively. The external acoustic meatus is represented by a completely ossified tubular ectotympanic, unlike pliopithecoids and Pliobates , in which the inferior surface is not fully ossified 12 . It is fused anteriorly to a prominent postglenoid process. The posterolateral orientation of the meatus is most similar to the condition seen in some great apes and cercopithecoids (infants and adults), but is different from that observed among hylobatids, where the ectotympanic is typically oriented more anterolaterally. The lateral pterygoid plates, well preserved on the right side, are large and extend from the maxillary tuberosities to the anteromedial edge of the glenoid fossa, as in Turkanapithecus . They are similar in size and shape to those observed in Symphalangus specimens at a similar stage of dental development, but are relatively larger than the lateral pterygoid plates of other extant hominoids, either immature or adult. Dental morphology and development The permanent dentition is unerupted and includes complete I 1 and M 1 crowns, along with partly formed crowns of I 2 , C–P 4 , and nearly complete M 2 crowns ( Fig. 2 and Supplementary Data 1 ). The morphology of the M 1 matches that described for the genus Nyanzapithecus 13 , 14 , 15 , but differs from previously described species of this genus as noted in the diagnosis. The molars are relatively high crowned, and both M 1 and M 2 display moderate crown waisting. The M 2 is larger than M 1 , possesses a more prominent preprotocrista and prehypocrista compared with M 1 , and the M 2 hypocone is positioned more distally relative to the other cusps than in M 1 . Both M 1 and M 2 have a crown diameter that is greater mesiodistally than buccolingually, and have the distinctive rhomboidal occlusal outline characteristic of other nyanzapithecines ( Nyanzapithecus , Rangwapithecus , Turkanapithecus) , Oreopithecus , Samburupithecus , as well as some extant hominoid specimens ( Fig. 3b ). In fact, the M 1 shape falls exclusively within the nyanzapithecine range among fossil taxa and at the upper range of Symphalangus among extant taxa. The occlusal surfaces of the molars show conical, crowded, but well-defined cusps, combined with a relatively small, restricted trigon, and a distally offset hypocone. A prominent mesial cingulum is present, which continues around the protocone as a lingual cingulum, terminating at the hypocone. A short buccal cingulum is also present between the paracone and metacone. The occlusal surface enamel in KNM-NP 59050 is relatively smooth, with muted cuspal ridging and without the extensive wrinkling and cresting seen in Rangwapithecus .
The central incisor of KNM-NP 59050 is stout and the crown is spatulate, with the incisal–cervical long axis of the crown being canted mesially with respect to the long axis of the root. The lingual surface bears a strong cingulum that is angled incisally from distal to mesial, and prominent mesial and distal marginal ridges are present. The cingulum is continuous with the distal marginal ridge, but is separated from the mesial marginal ridge by a shallow cleft, creating a rectangular, arched fovea, with a distinct enamel bulb or tubercle at its superior-lingual edge. The mesiodistal length of the crown is short relative to the length of M 1 ( Fig. 3c ), which is a distinctive feature found in Nyanzapithecus and hylobatids ( Symphalangus , particularly). Overall, I 1 is very similar in shape to the I 1 of KNM-MB 11842, a fragmentary premaxilla/maxilla assigned to N. pickfordi 13 ( Extended Data Table 1 ), and Bar 217′02, an isolated upper left I 1 assigned to N . cf. pickfordi 16 . Dental development was analysed using synchrotron virtual histology 17 , 18 ( Extended Data Fig. 6 ). KNM-NP 59050 presents a long-period line periodicity of 5 days ( Extended Data Fig. 6c ), following expectations for a primate of this size 19 . The neonatal line was identified in the M 1 s and the I 1 s, and a developmental sequence for the permanent teeth could be built using stress lines in dentine to match teeth ( Fig. 4 and Extended Data Fig. 6 ). An age at death of 485 ± 40 days was established. The two-dimensional relative enamel thickness index 20 of the M 1 s, measured in the mesial developmental plane 21 , is 12.2, which is intermediate between Symphalangus (10.8) and Hylobates (13.3) (ref. 22 ). KNM-NP 59050 shows an unusually advanced I 1 , developing at a similar speed to the M 1 . Among catarrhines, this pattern is found only in Hylobates 23 and Hoolock but not in Symphalangus 23 or Nomascus . Figure 4: Dental development of KNM-NP 59050. On the basis of virtual histological slices ( Extended Data Fig. 6 ), the crown (green) and root (purple) development of the upper right dentition is plotted from prenatal initiation of the M 1 and I 1 to death at 485 days after birth. In addition to birth (magenta) and death (black), the coloured lines represent reference stress lines in the dentine shown in the slices ( Extended Data Fig. 6c ) at the following day counts: blue, 265; green, 330; cyan, 365; yellow, 420; and red, 455. Sides of teeth indicated by B, buccal; D, distal; L, lingual; M, mesial. Endocranial volume, olfactory fossa, and inner ear A preliminary reconstruction of the cranium indicates an endocranial volume of 101 ml, which is projected to be close to the adult value if KNM-NP 59050 followed the correlation between brain growth and dental development seen in extant hylobatids ( Extended Data Table 1b ). Scaled against body mass ( Supplementary Note 1 ), this endocranial volume is smaller than seen in hylobatids, and close to values obtained for Turkanapithecus and Oreopithecus ( Extended Data Fig. 7a ). The olfactory fossa is shallow and underneath the frontal lobes ( Fig. 1h ), as seen in extant catarrhines and unlike the larger and rostrally projecting fossae in Aegyptopithecus , Saadanius , Victoriapithecus 24 , and Afropithecus . The bony labyrinth of the inner ear is preserved on both sides. In primates, this structure fully matures before birth, and in KNM-NP 59050 the overall size is closer to that of hylobatids than to hominids ( Extended Data Fig.
7c–n ), reflecting similarities in cranial size. However, in shape the labyrinth of KNM-NP 59050 uniquely shares with extant great apes a distinctly low-arced anterior semicircular canal 25 ( Extended Data Fig. 7c–f and Supplementary Table 1 ). The lateral semicircular canal is low-arced as well, as seen in Pan and Pongo 25 among extant hominoids, and in Oreopithecus and Aegyptopithecus among extinct catarrhines. The arc size of the lateral canal is small relative to those of the anterior and posterior ones, a feature observed in Aegyptopithecus , Saadanius , Oreopithecus , and Rudapithecus , but not in extant hominoids ( Supplementary Table 1 ). Scaled against body mass, the semicircular canals are relatively small, as in extant great apes, Saadanius , Rudapithecus , and Hispanopithecus , and unlike the large canals of hylobatids 26 ( Extended Data Fig. 7b ). In summary, in addition to the dental features given in the species diagnosis, the cranium of N. alesi can be characterized by the following, ontogenetically stable morphology: spatulate incisors, enamel with a relative thickness index of 12.2, a relatively small snout, wide interorbital distance, protruding and bar-like inferior and lateral margins of the orbit, narrow nasal aperture, shallow maxillae, a zygomatic root that originates low on the maxilla, large pterygoid plates, external acoustic meatus represented by a fully ossified ectotympanic tube, shallow and non-projecting olfactory fossae, and semicircular canals that are small-arced relative to body mass. Several of these features are also shared among other related genera for which partial crania are known (for example, Oreopithecus and Turkanapithecus ). Phylogenetic analysis The attribution of KNM-NP 59050 to Nyanzapithecus provides an opportunity to clarify the phylogenetic relationships of this relatively rare genus, on the basis of a much more comprehensive character evaluation than has been previously possible 8 , 14 , 15 , 27 . A cladistic analysis places N. alesi firmly within the nyanzapithecines, along with Rangwapithecus , Turkanapithecus , Oreopithecus , and Rukwapithecus ; these genera, along with afropithecines, form part of a sister clade to the crown hominoids ( Fig. 5 and Extended Data Fig. 8 ) rather than being stem catarrhines (contrary to ref. 15 ). A close relationship between Oreopithecus and Nyanzapithecus has been suggested previously on the basis of dental morphology 13 . Our analysis, which scores ontogenetically stable cranial characters for N. alesi , supports this hypothesis, in contrast to studies that place Oreopithecus among the crown hominoids 28 , 29 , 30 , 31 . Figure 5: Phylogenetic placement of N. alesi . Strict consensus of the ten most parsimonious trees from the unscaled phylogenetic analysis of 265 cranial and postcranial characters (tree length = 1383; consistency index = 0.289; homoplasy index = 0.711; retention index = 0.597). N. alesi is placed within the nyanzapithecines, which with the afropithecines form the sister group to crown hominoids. See also Extended Data Fig. 8 for bootstrap values and results of scaled analysis. Discussion KNM-NP 59050 is the first nearly complete African hominoid cranium recovered from between 17 and 7 Myr ago, and, to our knowledge, the most complete Miocene ape cranium yet described. KNM-NP 59050 provides critical evidence about the cranial anatomy of nyanzapithecines and, more broadly, of hominoids during an under-sampled time period in the African Miocene.
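The fit statistics quoted in the Fig. 5 legend follow the standard parsimony definitions; a minimal sketch is below. The minimum and maximum possible step counts are properties of the character matrix, so the values passed here are hypothetical, back-solved only to reproduce the reported indices for the reported tree length:

```python
def parsimony_indices(tree_length, min_steps, max_steps):
    """Consistency index (CI), homoplasy index (HI = 1 - CI) and
    retention index (RI) for a most-parsimonious tree."""
    ci = min_steps / tree_length
    hi = 1.0 - ci
    ri = (max_steps - tree_length) / (max_steps - min_steps)
    return ci, hi, ri

# min_steps ~ 400 reproduces the reported CI of 0.289 for tree length 1383;
# max_steps = 2840 likewise reproduces the reported RI (both are illustrative).
ci, hi, ri = parsimony_indices(1383, 400, 2840)
print(round(ci, 3), round(hi, 3), round(ri, 3))  # 0.289, 0.711, 0.597
```

Note that the reported homoplasy index (0.711) is exactly 1 minus the consistency index (0.289), as the definitions require.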
Phylogenetic, cranial, and dental analyses of KNM-NP 59050 offer compelling insight into the ancestral morphology that gave rise to the clade containing extant apes and humans. N. alesi is similar to some hylobatids in aspects of overall cranial morphology and dental development, including enamel thickness, advanced I 1 crown formation, and M 1 crown formation time 23 . However, N. alesi is distinctly different from hylobatids in the relatively small size of its semicircular canals. This property is functionally relevant for the perception of motion, and suggests that N. alesi would have exhibited a slower, less agile mode of locomotion than the acrobatic brachiation of extant hylobatids 26 . This finding agrees with a previous study of a partial humerus provisionally attributed to Nyanzapithecus 32 . Some of the cranial similarities with extant hylobatids, including a relatively short face, a broad interorbital distance, and orbits with projecting inferior rims, are present not only in N. alesi , but also in other proposed nyanzapithecines, such as Turkanapithecus and Oreopithecus , as well as pliopithecoids, dendropithecoids, and colobines to varying degrees. This finding emphasizes the fact that multiple clades evolved hylobatid-like craniofacial morphology in parallel during catarrhine evolution. Hence, this general phenotype cannot be taken as evidence of shared ancestry with extant hylobatids in the absence of more convincing synapomorphies. As a case in point, our phylogenetic analysis suggests that the recently described Pliobates cataloniae , considered to be closely related to crown hominoids in part on the basis of hylobatid craniofacial similarities 12 , is more likely to be a member of the stem catarrhine group Pliopithecoidea ( Fig. 5 ). Since its initial description over 30 years ago 13 , the catarrhine genus Nyanzapithecus has been known mostly from isolated dental specimens. KNM-NP 59050 represents the first substantial cranial material of Nyanzapithecus , and it confirms that the genus, and nyanzapithecines more broadly, possess a fully ossified tubular ectotympanic, a derived feature linking the group with crown catarrhines relative to more primitive taxa such as pliopithecoids. KNM-NP 59050 is currently the only described specimen of the new species N. alesi . However, similarities in the size and shape of the M 1 shared between N. alesi and isolated teeth previously referred to Nyanzapithecus sp. 33 or N . cf. pickfordi 16 from Kipsaramon, Kenya (approximately 15.83–15.36 Myr (refs 33 , 34 , 35 )) suggest that these specimens may also be part of the N . alesi hypodigm. Nyanzapithecines were a long-lived and diverse group of Miocene hominoids that are probably close to the origin of crown hominoids. They first appear in the fossil record during the latest Oligocene of Africa 27 , and persisted until perhaps the late Miocene, if the enigmatic Eurasian species Oreopithecus bambolii is indeed a late-surviving member of this clade 13 , 14 , 15 , 28 . Methods Locality description and specimen discovery Napudet (2° 58.103′ N, 35° 51.969′ E) is located in South Turkana, Turkana Basin, Kenya (see Extended Data Fig. 1g ). Sedimentary strata at Napudet were identified as part of the regional mapping of the area by the Kenya Geological Survey 36 . The Koobi Fora Research Project discovered the locality in 1990. I.N. relocated the site in 2013, with the help of a team from the Turkana Basin Institute, and directed surveys in 2014 and 2015.
KNM-NP 59050 was discovered in 2014 by team member J. Ekusi, and prepared by C. Kiarie at the Turkana Basin Institute. Stratigraphy and age The Napudet Hills are formed by footwall uplift resulting from movement along the Napudet–Loperot Fault, with Miocene and Pliocene strata elevated above the surrounding low relief plains of the South Turkana Desert 36 , 37 ( Extended Data Fig. 1g ). At the northern end of the Napudet Hills, sedimentary strata comprise four sequences, the lowest of which is a volcaniclastic interval informally termed the Emunyan beds here ( Extended Data Fig. 2a and Supplementary Table 2 ), containing abundant fossil wood and scattered but well-preserved vertebrate fossils. The hominoid fossil KNM-NP 59050 was a surface find at the edge of exposures of the prominent brown bedded tuffs in the lower part of the Emunyan beds. Matrix sediment removed during preparation of the cranium was consistent with derivation from this sedimentary unit. A basalt flow that underlies the fossiliferous Emunyan sedimentary sequence is dated here by Ar–Ar to 13.31 ± 0.04 Myr. Preliminary alternating field demagnetization of an oriented sample of this basalt yielded a normal geomagnetic polarity, suggesting that the basalt formed during either C5AAn (13.03–13.18 Myr) or C5ABn (13.36–13.61 Myr) 38 . Magnetic polarity stratigraphy of the overlying Emunyan beds demonstrated normal polarity for the level of the hominoid fossil, and a normal-to-reversed transition approximately 5 m above the level of the fossil. Assignment of this transition to the top of Chron C5AAn would suggest an age of slightly older than 13.0 Myr, while correlation to the top of C5ABn would imply a slightly older age of about 13.4 Myr for the hominoid. The 40 Ar– 39 Ar age of sample 15-NPD-03 was determined by high-precision step-heating technique on a MAP-215-50 mass spectrometer in the Noble Gas Laboratory at Rutgers University ( Extended Data Fig. 2b and Supplementary Data 3 ). The sample was petrographically evaluated, crushed, sieved to the 300–600 μm size range, washed in distilled water in an ultrasonic bath, and dried in an oven at ~80 °C. The sample was hand-picked, loaded into aluminium irradiation disks along with multiple splits of monitor minerals, wrapped in aluminium foil, and neutron irradiated at 1,000 kW for 20 min using cadmium-foil shielding in the central thimble facility of the US Geological Survey TRIGA reactor in Denver, Colorado. A 40-W CO 2 laser with a jogging square laser beam (6 mm × 6 mm) was used as the thermal source for the incremental-heating experiments. The irradiation parameter J was determined by multiple total-fusion analyses of the co-irradiated monitor mineral Fish Canyon Sanidine (FC-2 = 28.201 ± 0.046 Myr) 39 . Age calculations were made using currently accepted decay constants and isotopic abundances 40 : λ ε = 5.81 × 10 −11 yr −1 , λ β = 4.962 × 10 −10 yr −1 , 40 K/K total = 1.167 × 10 −4 . The following interfering neutron reaction corrections for Ca and K were used 39 , 40 : ( 36 Ar– 37 Ar) Ca = 2.64 ± 0.02 × 10 −4 ; ( 39 Ar– 37 Ar) Ca = 6.73 ± 0.04 × 10 −4 ; ( 38 Ar– 39 Ar) K = 1.34 ± 0.02 × 10 −2 from refs 41 , 42 , and a ( 40 Ar– 39 Ar) K = 2.85 ± 0.5 × 10 −4 correction determined from measurements of kalsilite glass. When we plotted the incremental release spectra of the plateau steps identified for sample 15-NPD-03 on an isochron plot, it was possible to observe that the 40 Ar– 36 Ar intercept was not purely atmospheric.
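For reference, the step and plateau ages in such an experiment follow the standard 40Ar–39Ar age equation, using the decay constants quoted above. A minimal sketch follows; the J and isotope-ratio values are hypothetical, back-solved only to reproduce the reported plateau age, and the trapped-component correction discussed next is not modelled:

```python
import math

LAMBDA_EC = 5.81e-11     # yr^-1 (electron capture), as quoted above
LAMBDA_BETA = 4.962e-10  # yr^-1 (beta decay), as quoted above
LAMBDA_TOTAL = LAMBDA_EC + LAMBDA_BETA

def ar_ar_age_myr(J, R):
    """Standard 40Ar-39Ar age equation t = ln(1 + J*R) / lambda, with
    R = radiogenic 40Ar / 39Ar_K and J the irradiation parameter."""
    return math.log(1.0 + J * R) / LAMBDA_TOTAL / 1e6

# Hypothetical J and R chosen only to reproduce the reported plateau age
print(round(ar_ar_age_myr(5e-4, 14.81), 2))  # -> 13.31
```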
Thus, we corrected the data using these trapped 40 Ar– 36 Ar components. The corrected plateau 40 Ar– 39 Ar age obtained, 13.31 ± 0.04 Myr, is considered to be the most representative for this sample ( Extended Data Fig. 2b ). X-ray microtomography KNM-NP 59050 was scanned using propagation phase-contrast X-ray synchrotron microtomography at beamline ID19 of the European Synchrotron Radiation Facility in Grenoble, France. The purpose was to visualize the specimen from a full overview down to virtual histology for the study of dental development. Four configurations were therefore used, providing voxel sizes of 28.06, 12.86, 3.44, and 0.74 μm. All acquisition parameters are summarized in Supplementary Table 3 . Extant hominoid crania were scanned for comparative purposes using beamline BM05 of the European Synchrotron Radiation Facility in polychromatic mode (average energy between 100 and 130 keV), the GE phoenix v|tome|x s240 at the American Museum of Natural History, New York, and the BIR ACTIS 225/300 of the Max Planck Institute for Evolutionary Anthropology, Leipzig. Voxel sizes varied between 22.93 and 53.19 μm depending on the size of the specimens. VGStudioMax 3.0 (Volume Graphics), Avizo 7.1 (FEI), and Amira 5.6 (FEI) were used for two- and three-dimensional visualization, segmentation, reconstruction, and measurements. Preliminary reconstruction of the cranium To make meaningful comparisons possible, a preliminary retrodistortion of KNM-NP 59050 was attempted. This was done first on six orthogonal views using two-dimensional distortion maps to restore symmetry and to compensate for the major fractures ( Extended Data Figs 1a–f and 3a ). The two-dimensional distortion maps were then applied sequentially in three dimensions to the original image volume of KNM-NP 59050 ( Extended Data Fig. 3b ). This approach of correcting plastic deformations and fractures should be reasonably reliable for large-scale aspects and to obtain a preliminary estimate of the endocranial volume, but not necessarily with respect to more detailed morphology. A full reconstruction of the cranium will require extensive segmentation of all the bony components affected by cracks and plastic deformation, and subsequent three-dimensional correction of any distortions and misalignments. Dental development and synchrotron virtual histology Dental development in N. alesi was investigated quantitatively, using virtual synchrotron palaeohistology 17 to examine the incremental lines preserved in the enamel and dentine. Long-period lines and stress patterns were visible in scans with a voxel size of 3.44 μm of complete teeth of KNM-NP 59050, and long-period line periodicity could be seen in scans with a voxel size of 0.74 μm. Diagenetic factors made it difficult to observe the enamel incremental lines in many of the dental germs. Only the right I 2 exhibited good enough contrast of the microstructures to establish a long-period line periodicity of 5 days ( Extended Data Fig. 6d ), and this figure was used for the rest of the analysis to calculate time in days. Although enamel microstructures were not widely visible across the dentition in the 3.44 μm scans, dentine showed a very clear incremental pattern at this resolution, with good visibility of Andresen lines (equivalent to Retzius lines in enamel), especially in the right I 2 ( Extended Data Fig. 6c ). Andresen lines were also visible in many other teeth, but not at a clarity that would allow the precise timing of tooth initiation to be documented.
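Where lines are countable, the day-count arithmetic behind this approach is straightforward: Andresen-line counts are multiplied by the 5-day periodicity, and initiation times are extrapolated from the axial dentine secretion rate. A minimal sketch follows; the example distances and rates are hypothetical, not the measured values:

```python
PERIODICITY_DAYS = 5  # long-period (Andresen) line periodicity for KNM-NP 59050

def elapsed_days(n_andresen_lines):
    # Time between two dentine stress lines, from the Andresen lines between them
    return n_andresen_lines * PERIODICITY_DAYS

def initiation_offset_days(horn_to_line_um, axial_rate_um_per_day):
    # Extrapolate tooth initiation: distance from the dentine horn to the first
    # reference stress line, divided by an assumed-constant axial secretion rate
    return horn_to_line_um / axial_rate_um_per_day

# Hypothetical example: 53 Andresen lines -> 265 days between two stress lines;
# 240 um of axial dentine at 4 um/day -> 60 days before the reference line
print(elapsed_days(53), round(initiation_offset_days(240, 4)))
```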
Nonetheless, all elements of the dentition exhibit a fairly clear stress pattern with recognizable accentuated lines. By calibrating the pattern of the right I2 against time, using the Andresen line counts and periodicity, and identifying the same stress lines in other teeth (Extended Data Fig. 6b, c), it was possible to retrieve the relative dental development sequence for the whole dentition. For developing tooth germs, the daily dentine secretion rate along the cusp axis (from dentine horn to developing pulp cavity) appears to be relatively constant as long as the lateral enamel is still extending. We used this property to estimate dental initiation for all the teeth, by measuring the distance along the developmental axis of each cusp between the dentine horn and the first recognizable reference stress line. Axial daily dentine secretion rates were calibrated for the right I2 and the left M1 and then applied to all the other teeth. The first visible accentuated line in the M1s was interpreted as the neonatal line, and this line was observable in all cusps except the hypocone. After careful superimposition of the stress pattern across the whole dentition, it was clear that the neonatal line was also visible in the I1s as the uppermost stress line, indicating a prenatal initiation not only of the M1s but also of the I1s. By combining these data, it was possible to quantify the complete dental development (Supplementary Data 4) and to build developmental charts (Fig. 4 and Supplementary Data 4). The age at death was estimated as 485 ± 40 days after birth. Crown formation times of the M1s (paracone) and of the I1s are 1.05 and 1.21 years, respectively (averages of both sides). Of special interest is that the I1s in N. alesi initiate development very early and complete their growth at the same time as the M1s. Considering the rapid extension of the I1 roots, which are longer than those of the M1s, the I1s would probably have erupted before, or at the same time as, the M1s. This proposition is strengthened by the advanced root resorption of the deciduous I1s and I2s, suggesting ongoing eruption of the permanent I1s, followed by the I2s. The general dental development pattern of KNM-NP 59050, and the advanced I1 development in particular, were studied in more detail through comparisons with extant juvenile hominoids and cercopithecoids. These included Pan troglodytes (10), Gorilla gorilla (3), Pongo pygmaeus (4), Homo sapiens (6), Hoolock sp. (4), Hylobates muelleri (1) and Nomascus hainanus (1), and the cercopithecoids Papio ursinus (1), Cercopithecus petaurista (1), Macaca sp. (2) and Macaca nigra (1). These specimens are in the collections of the Musée des Confluences de Lyon and were scanned at the European Synchrotron Radiation Facility, except for the Hoolock material, which is housed in, and was scanned at, the American Museum of Natural History (New York), and the Hylobates and Nomascus specimens, which are housed at the Museum für Naturkunde, Berlin, and were scanned at the Max Planck Institute of Evolutionary Anthropology (Leipzig). Results of these comparisons show that the unusual pattern of advanced development of the I1s is found only in Hylobates and Hoolock.
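To make the timing arithmetic described above concrete, the sketch below converts line counts and dentine distances into days; the numerical inputs are hypothetical placeholders, not the study's measurements.

```python
LONG_PERIOD_DAYS = 5  # long-period line periodicity established from the right I2

def days_from_line_count(n_long_period_lines):
    # Elapsed time represented by a run of long-period (Andresen) lines.
    return n_long_period_lines * LONG_PERIOD_DAYS

def initiation_offset_days(horn_to_reference_um, axial_rate_um_per_day):
    # Days between tooth initiation (at the dentine horn) and a dated
    # reference stress line, assuming a constant axial dentine secretion rate.
    return horn_to_reference_um / axial_rate_um_per_day

# Hypothetical example: 240 um of axial dentine before the reference line,
# at a calibrated rate of 4 um/day, implies initiation ~60 days earlier.
print(days_from_line_count(12), initiation_offset_days(240.0, 4.0))
```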
Three dental development characters were coded as part of the character matrix used in the phylogenetic analyses (Supplementary Data 5). These were the relative developmental timing of I1 versus M1, M1 versus M2, and M2 versus P4. These data were taken preferentially from upper teeth when available, but also from lower ones where necessary (assuming a similar developmental pattern in the upper and lower dentition). The character states were obtained by direct observation (including online tomography databases) and from the literature, on the basis of computed tomography of juveniles, documented eruption patterns, or histological data.

Semicircular canals of the inner ear
The left bony labyrinth of KNM-NP 59050 is the better preserved of the two and is analysed here. It was compared quantitatively on the basis of the arc shape and size of the semicircular canals, using features known to distinguish hominids, hylobatids, and other catarrhines 25,26,43,44. Data are provided in Supplementary Table 1. For fossils and the extant hominoids, the arc height and width of each canal 25 were measured from three-dimensional surface models extracted from microtomographic scans (Extended Data Fig. 7a–l). For the extant hominoids, the measurements were taken on a model representing the mean shape and size of the sample. Data were also obtained for additional anthropoid species 43,44 for use in the phylogenetic analyses. The mean radius of curvature of the semicircular canals was scaled against body mass (Extended Data Fig. 7h, ref. 45 and Supplementary Note 1).
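For context, a widely used convention in this literature (following the method of ref. 25) takes each canal's radius of curvature as half the mean of its arc height and width, R = 0.5 × (h + w)/2. The sketch below illustrates that convention with hypothetical measurements; it is an assumption-laden illustration, not the authors' code.

```python
def arc_radius_mm(height_mm, width_mm):
    # Radius of curvature of one semicircular canal arc: R = 0.5 * (h + w) / 2.
    return 0.5 * (height_mm + width_mm) / 2.0

# Hypothetical arc height/width pairs (mm) for the three canals of one labyrinth.
canals = {"anterior": (5.6, 5.2), "posterior": (5.1, 4.8), "lateral": (4.3, 4.6)}
radii = {name: arc_radius_mm(h, w) for name, (h, w) in canals.items()}
mean_radius = sum(radii.values()) / len(radii)  # the value scaled against body mass
print(radii, round(mean_radius, 2))
```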
Morphological and cladistic analyses
Specimens examined for the morphological and phylogenetic analyses derived from collections at the American Museum of Natural History, the Harvard Museum of Comparative Zoology, the National Museums of Kenya, Nairobi, and the Center for the Study of Human Origins at New York University. Hominins were not included in the comparative sample and are not considered here when using the terms ‘hominoid’ and ‘hominid’. Extant juvenile specimens were selected to encompass the dental age of KNM-NP 59050, ranging from erupted deciduous dentition only to full eruption of the M1 and/or I1, some with the M2 starting to erupt as well (Supplementary Data 2). Adult specimens were defined as those with a completely erupted permanent dentition. Observations and measurements are based on original specimens, as well as on high-quality casts, published photographs, and data provided in the literature. All juvenile data were collected by the authors; adult data were collected by the authors whenever possible, with additional data from ref. 46, PRIMO (access courtesy of E. Delson), and the literature (Supplementary Data 2). Whenever possible, individual specimens were used for statistical comparisons, but in a few cases published species averages were used to include key taxa. Adult features were assessed for KNM-NP 59050 through quantitative and qualitative comparisons with the above-referenced sample of extant juvenile and adult hominoids. Any given morphological feature was deemed to hold constant throughout ontogeny if present in both adult and juvenile extant ape specimens of the taxa examined 41. These ontogenetically stable features are discussed throughout the main text. Comparative analyses were based on standard craniometric and dental measurements (Extended Data Table 1 and Supplementary Data 2). Those of KNM-NP 59050 were taken from the original specimen as well as from a high-resolution three-dimensional surface visualization derived from the microtomographic images (Extended Data Figs 1a–f and 3a, b). For comparisons of juvenile crania, the measurements were size-adjusted by dividing each by the geometric mean of 13 cranial measurements (Supplementary Data 2). For comparisons of adult crania, the measurements were size-adjusted using the square root of M1 area, to enable comparisons with key fossil specimens that are invariably too fragmentary to yield a geometric mean from an adequate set of cranial measurements. The two methods of size adjustment are broadly similar, given that the geometric mean used here and the square root of M1 area are highly correlated (r = 0.935; r² = 0.874; P < 0.001; 61 adult extant ape specimens). Nevertheless, juvenile and adult specimens were not directly compared, as we focused only on the pattern of morphological differences between taxa. In particular, we examined whether specific patterns were consistently maintained from infants to adults, to assess the ontogenetic stability of particular features. For KNM-RU 7290, both the actual M1 area and a reduced estimated area were used, the latter aimed at correcting for the megadontia exhibited by E. heseloni 47 (Supplementary Note 1). Statistical analyses were performed in PAST 3.14 (ref. 48) and SPSS version 22.0.
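As an illustration of the geometric-mean size adjustment described above (a sketch with hypothetical measurement names and values, not the authors' pipeline):

```python
import math

def geometric_mean(values):
    # Geometric mean of a set of positive measurements.
    return math.exp(sum(math.log(v) for v in values) / len(values))

def size_adjust(measurements):
    # Divide each cranial measurement by the geometric mean of the whole set,
    # yielding dimensionless shape variables comparable across specimens.
    gm = geometric_mean(list(measurements.values()))
    return {name: value / gm for name, value in measurements.items()}

# Hypothetical juvenile cranial measurements in mm (the study used 13 of them).
cranium = {"orbit_height": 21.0, "biorbital_breadth": 48.5, "palate_length": 30.2}
print({k: round(v, 3) for k, v in size_adjust(cranium).items()})
```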
Phylogenetic analyses
In addition to general qualitative and quantitative anatomical comparisons, we conducted a parsimony analysis of morphological characters to assess the phylogenetic position and likely evolutionary relationships of N. alesi. The morphological character matrix of ref. 8 was supplemented with cranial characters from ref. 29 and several other sources 12,49,50 (Supplementary Data 5). Twelve new characters were also included, and several character states were modified to accommodate polymorphisms or features found in the broadened taxonomic sample. In some cases, scores differed from the datasets from which they were derived, on the basis of our independent qualitative and quantitative assessments of morphology (as indicated in Supplementary Data 5). The taxon sample of ref. 8 was expanded to include additional fossil catarrhines, including Saadanius, Rukwapithecus, Oreopithecus, Afropithecus, Kenyapithecus, Ouranopithecus, Sivapithecus, Lufengpithecus, Hispanopithecus, Pierolapithecus, and the recently described Pliobates 12. We did not include Mabokopithecus in our analyses because the hypodigm and its relationship to Nyanzapithecus are unresolved and under study 51. For these additional taxa, scores were based on our own data and observations as well as information provided in the literature; many character codings for Saadanius and Rukwapithecus were taken from ref. 27, while codings for Pierolapithecus, Hispanopithecus, and Pliobates (with modifications) were taken from ref. 12. All characters and character states are provided in Supplementary Data 5, and the matrix is provided as a Nexus file in Supplementary Data 6. In total, 265 characters scored for 47 taxa were included in the analysis, incorporating characters of the skull and postcranial skeleton. Characters were considered ordered whenever it could be assumed that a population probably passed through an intermediate state to reach an extreme state on either side. In such cases, ordering characters is a much more faithful representation of the evolutionary process (for example, a population does not typically evolve from a small body mass to a large body mass without first passing through an intermediate body mass under directional selection) 49. Polymorphisms were coded as intermediate states between two fixed states whenever possible; simulations suggest that this coding system increases the accuracy of the resulting trees 52. All other characters were left unordered. Two analyses were performed: one with characters unscaled (that is, all steps given equal weight), and one with multistate, polymorphic characters scaled so that these characters had the same weight as binary and unordered characters (that is, polymorphic steps were downweighted relative to steps between fixed states) (Fig. 5, Extended Data Fig. 8 and Extended Data Table 2). Adult character states were estimated for KNM-NP 59050 through quantitative and qualitative morphological comparisons with the above-referenced sample of extant infant and adult hominoids (see, for example, Extended Data Figs 4 and 5). A character state was deemed to hold constant throughout ontogeny if it was present in both adult and infant specimens close to the same dental age as KNM-NP 59050. Wherever it could be safely assumed that a character state held through ontogeny, we scored KNM-NP 59050 accordingly 53. The resulting matrix was analysed using a heuristic search with 10,000 random-addition-sequence replicates in PAUP* 4.0b10. To provide an estimate of clade support, a 1,000-replicate bootstrap procedure with replacement was performed. A sample of platyrrhines (Aotus (Cebus, Saimiri)) along with the primitive catarrhines Catopithecus and Aegyptopithecus were assigned and constrained as successive outgroups, with the ingroup composed of Saadanius, pliopithecoids, Old World monkeys, dendropithecids, proconsulids, Pliobates, hylobatids, fossil hominids (Sivapithecus, Kenyapithecus, Pierolapithecus, Hispanopithecus, Ouranopithecus, Lufengpithecus, and Oreopithecus), and extant hominids (Pongo, Gorilla, and Pan). Broad-level taxonomy follows ref. 1.

Data availability
KNM-NP 59050 is available for study at the National Museums of Kenya, and the data analysed here are provided in the article and its Supplementary Information. Image datasets of comparative specimens can be accessed online. The species is registered in ZooBank (LSID urn:lsid:zoobank.org:act:BE0A6575-AD3A-4415-A169-6ABC6B8280E2).
The discovery in Kenya of a remarkably complete fossil ape skull reveals what the common ancestor of all living apes and humans may have looked like. The find, announced in the scientific journal Nature on August 10th, belongs to an infant that lived about 13 million years ago. The research was done by an international team led by Isaiah Nengo of the Stony Brook University-affiliated Turkana Basin Institute and De Anza College. Among living primates, humans are most closely related to the apes, including chimpanzees, gorillas, orangutans and gibbons. Our common ancestor with chimpanzees lived in Africa 6 to 7 million years ago, and many spectacular fossil finds have revealed how humans evolved since then. In contrast, little is known about the evolution of the common ancestors of living apes and humans before 10 million years ago. Relevant fossils are scarce, consisting mostly of isolated teeth and partial jaw bones. It has therefore been difficult to find answers to two fundamental questions: Did the common ancestor of living apes and humans originate in Africa, and what did these early ancestors look like? Now these questions can be more fully addressed because the newly discovered ape fossil, nicknamed Alesi by its discoverers and known by its museum number KNM-NP 59050, comes from a critical time period in the African past. In 2014, it was spotted by Kenyan fossil hunter John Ekusi in 13-million-year-old rock layers in the Napudet area, west of Lake Turkana in northern Kenya. "The Napudet locality offers us a rare glimpse of an African landscape 13 million years ago," says Craig S. Feibel of Rutgers University-New Brunswick. "A nearby volcano buried the forest where the baby ape lived, preserving the fossil and countless trees. It also provided us with the critical volcanic minerals by which we were able to date the fossil." A 3-D animation of the Alesi skull, computed from the European Synchrotron Radiation Facility (ESRF) microtomographic data. It first shows the skull in solid 3-D rendering; transparent surface rendering is then used to show the shape of the endocast (light blue), the inner ears (green) and the permanent tooth germs (grey and brown). Credit: Paul Tafforeau / ESRF. The fossil is the skull of an infant, and it is the most complete extinct ape skull known in the fossil record. Many of the most informative parts of the skull are preserved inside the fossil, and to make these visible the team used an extremely sensitive form of 3-D X-ray imaging at the synchrotron facility in Grenoble, France. "We were able to reveal the brain cavity, the inner ears and the unerupted adult teeth with their daily record of growth lines," says Paul Tafforeau of the European Synchrotron Radiation Facility. "The quality of our images was so good that we could establish from the teeth that the infant was about 1 year and 4 months old when it died." The unerupted adult teeth inside the infant ape's skull also indicate that the specimen belonged to a new species, Nyanzapithecus alesi. The species name is taken from the Turkana word for ancestor, "ales." "Until now, all Nyanzapithecus species were only known from teeth, and it was an open question whether or not they were even apes," notes John Fleagle of Stony Brook University. "Importantly, the cranium has fully developed bony ear tubes, an important feature linking it with living apes," adds Ellen Miller of Wake Forest University. Alesi's skull is about the size of a lemon, and with its notably small snout it looks most like a baby gibbon.
"This gives the initial impression that it is an extinct gibbon," observes Chris Gilbert of Hunter College, New York. "However, our analyses show that this appearance is not exclusively found in gibbons, and it evolved multiple times among extinct apes, monkeys, and their relatives." Primate paleontologist Isaiah Nengo talks about the day the 13 million-year-old ape fossil skull was discovered. Credit: Isaiah Nengo. Audio © Origin Stories Podcast and The Leakey Foundation. That the new species was certainly not gibbon-like in the way it behaved could be shown from the balance organ inside the inner ears. "Gibbons are well known for their fast and acrobatic behavior in trees," says Fred Spoor of University College London and the Max Planck Institute of Evolutionary Anthropology, "but the inner ears of Alesi show that it would have had a much more cautious way of moving around." "Nyanzapithecus alesi was part of a group of primates that existed in Africa for over 10 million years," concludes lead author Isaiah Nengo. "What the discovery of Alesi shows is that this group was close to the origin of living apes and humans and that this origin was African." Alesi after attached sandstone rock was partially removed at the Turkana Basin Insitute, near Lodwar, Kenya. Credit: © Isaiah Nengo, Photo by Christopher Kiarie
10.1038/nature23456
Medicine
Pioneering single-dose radiotherapy for breast cancer treatment
Jayant S. Vaidya et al., New clinical and biological insights from the international TARGIT-A randomized trial of targeted intraoperative radiotherapy during lumpectomy for breast cancer, British Journal of Cancer (2021). DOI: 10.1038/s41416-021-01440-8 Journal information: British Journal of Cancer
http://dx.doi.org/10.1038/s41416-021-01440-8
https://medicalxpress.com/news/2021-05-single-dose-radiotherapy-breast-cancer-treatment.html
Abstract
Background The TARGIT-A trial reported risk-adapted targeted intraoperative radiotherapy (TARGIT-IORT) during lumpectomy for breast cancer to be as effective as whole-breast external beam radiotherapy (EBRT). Here, we present further detailed analyses. Methods In total, 2298 women (≥45 years, invasive ductal carcinoma ≤3.5 cm, cN0–N1) were randomised. We investigated the impact of tumour size, grade, ER, PgR, HER2 and lymph node status on local recurrence-free survival, and of local recurrence on distant relapse and mortality. We analysed the predictive factors for recommending supplemental EBRT after TARGIT-IORT as part of the risk-adapted approach, using regression modelling. Non-breast cancer mortality was compared between TARGIT-IORT plus EBRT vs. EBRT. Results Local recurrence-free survival was no different between TARGIT-IORT and EBRT in every tumour subgroup. Unlike in the EBRT arm, local recurrence in the TARGIT-IORT arm was not a predictor of a higher risk of distant relapse or death. Our new predictive tool for recommending supplemental EBRT after TARGIT-IORT is available online. Non-breast cancer mortality was significantly lower in the TARGIT-IORT arm, even when patients received supplemental EBRT: HR 0.38 (95% CI 0.17–0.88), P = 0.0091. Conclusion TARGIT-IORT is as effective as EBRT in all subgroups. Local recurrence after TARGIT-IORT, unlike after EBRT, has a good prognosis. TARGIT-IORT might have a beneficial abscopal effect. Trial registration ISRCTN34086741 (21/7/2004), NCT00983684 (24/9/2009).

Introduction
Most patients with breast cancer are suitable for treatment with breast-conserving surgery and adjuvant radiotherapy, rather than total mastectomy. Based on the hypothesis that adjuvant radiotherapy for women with early breast cancer could be limited to the tumour bed and given immediately during breast-conserving surgery (lumpectomy), we developed the concept of TARGeted Intraoperative radioTherapy (TARGIT-IORT). 1,2,3,4,5,6 TARGIT-IORT aims to achieve an accurately positioned and rapid form of tumour-bed irradiation, focussed on the target tissues alone, sparing normal tissues and organs such as the heart, lung, skin and chest wall structures from unnecessary and potentially damaging radiation treatment. We designed the TARGIT-A randomised trial to test this concept by comparing risk-adapted TARGIT-IORT with conventional whole-breast external beam radiotherapy given over several weeks (EBRT). 3,7,8 The study received ethics approval from the Joint University College London and University College London Hospital committees of ethics of human research (99/0307). Accrual ran from March 2000 to June 2012. The long-term results of the trial are described separately and show that TARGIT-IORT is as effective as whole-breast external beam radiotherapy (EBRT) for all breast cancer outcomes, with a significant reduction in mortality from causes other than breast cancer. 9 Trial eligibility was not confined to low-risk patients: patients needed to be 45 years or older, with invasive ductal carcinoma that was suitable for breast conservation and preferably less than 3.5 cm in size and unifocal on clinical examination and conventional imaging. A grade 3 cancer, involved nodes or a higher-risk receptor status did not exclude a patient from participating. Therefore, a large number of patients in each category of higher risk were included, allowing meaningful subgroup analysis.
In addition, the follow-up of the TARGIT-A trial was long, with a large number of patients having follow-up for at least 5 years (n = 2048) and 10 years (n = 741). The number of events for local recurrence and death after long-term follow-up was therefore expected to be large enough to assess the prognostic significance of local recurrence. As specified in the protocol, treatment was given using a risk-adapted approach, which meant that patients allocated to receive TARGIT-IORT were recommended to also receive supplemental EBRT if they were postoperatively found to have specific unsuspected tumour characteristics, in which case the TARGIT-IORT served as a tumour-bed boost. The protocol specified three such factors: an unexpected diagnosis of invasive lobular carcinoma, the presence of an extensive intraductal component (>25%), and positive margins. Pragmatically, each centre was allowed to pre-specify such criteria, recording them in the ‘treatment policy document’ before starting recruitment. Therefore, for an individual case, the use of supplemental EBRT depended on a combination of several factors discussed in the post-operative multidisciplinary team meeting (tumour board). Knowing which patients received supplemental EBRT within the trial (about 20% of cases), and knowing the tumour factors, a regression model could be created. This risk-adapted approach also offers an opportunity for another type of analysis, investigating the mechanism of the difference we found in non-breast cancer mortality in the main analysis. 9 One needs to recognise that the use of supplemental EBRT after TARGIT-IORT was prompted by specific features of the primary breast cancer. Therefore, there should be no reason for the risk of non-breast cancer mortality to differ between patients who received TARGIT + EBRT vs. those who received EBRT. Since both groups received EBRT, if the difference were due to EBRT toxicity alone, no difference in non-breast cancer mortality should be found in this comparison. This paper addresses four important aspects of the trial of TARGIT-IORT vs. EBRT, in which 2298 patients were randomised, after their needle biopsy and before any surgical excision of cancer, to receive either risk-adapted TARGIT-IORT delivered during the initial excision of the cancer, or EBRT. These are: (a) outcome as per well-recognised tumour subgroups, (b) the prognostic importance of local recurrence, (c) a predictive model for the use of supplemental EBRT after TARGIT-IORT and (d) an exploration seeking an explanation for the differences in non-breast cancer mortality found between the two randomised arms.

Methods
Data from the TARGIT-A trial (n = 2298) comparing risk-adapted TARGIT-IORT given during lumpectomy vs. EBRT were used for these analyses. 9 The TARGIT-A trial protocol, including the details of eligibility, methodology, statistical methods, sample size calculations and the process of random allocation, has been previously described. 7,8,9 Eligible patients diagnosed with invasive malignancy by needle biopsy were randomly assigned before their surgery, in a 1:1 ratio, to receive either a risk-adapted approach using single-dose TARGIT-IORT or EBRT as per standard schedules over several weeks, with randomisation blocks stratified by centre. Therefore, the trial was a comparison of two policies—whole-breast radiotherapy without selection vs.
individualised risk-adapted radiotherapy—in which a proportion of patients who received TARGIT-IORT were also given supplemental EBRT if they were found to have any pre-specified tumour factors. The sites participating in the trial were all centres of excellence (almost all were university teaching hospitals) with their own routine quality assurance in place. Every patient was treated as per the treatment guidelines and quality assurance laid down by each of the participating radiotherapy centres. While the collection of specific data relating to quality assurance was not mandatory, the schedule of treatment, total dose, dose per fraction and number of fractions for the EBRT (and the boost, when given) were always collected. In the UK, the most widely used dose-fractionation regimen recommended during the time of the study was 40.05 Gy in 15 fractions over 3 weeks, i.e., 2.67 Gy per daily fraction. In the USA, the commonest recommendation was 50 Gy in 25 fractions over 5 weeks. For boost doses, institutional standards were once again routinely employed—mostly 10 Gy in 5 fractions. The statistical analysis plan (SAP, submitted with the manuscript) was signed off by the chair of the independent steering committee and an independent senior statistician before the data were unblinded and sent to the trial statistician for analysis. It specified the primary outcome as local recurrence-free survival. This outcome measured the chance of a patient being alive without local recurrence (any type of local recurrence in the ipsilateral breast) and therefore included local recurrence or death as events; i.e., patients who had died were not censored, consistent with the DATECAN 10 and STEEP 11 guidelines for the clinical events to be included in the definitions of time-to-event endpoints in randomised clinical trials assessing treatments for breast cancer 12. All analyses were by intention-to-treat as per the randomisation arm. Firstly, we performed a subgroup analysis of the primary outcome of local recurrence-free survival for tumour factors such as size, grade, lymph node involvement, ER status, PgR status and HER2 status. Secondly, the concern that a difference in local recurrence might increase long-term mortality prompted us to investigate the assumption that local recurrence is a harbinger of distant disease and ultimately of death. We therefore performed Cox regression analyses using local recurrence as a time-dependent covariate, and estimated its interaction with the randomised arm for the hazards of distant disease and breast cancer mortality. We also assessed this for overall mortality, to remove any bias from misclassification of the cause of death. Thirdly, we prepared a regression model using established high-risk factors to predict the use of supplemental EBRT in patients randomised to TARGIT-IORT. Significant factors from the model were used to create an interactive tool that simulates how patients were treated in the TARGIT-A trial and whether they received supplemental EBRT. Such a tool should help clinicians decide which patients would have received supplemental EBRT, and enable them to translate the risk-adapted approach used within the randomised trial into day-to-day clinical practice.
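The published tool implements the trial's fitted model; as a hedged sketch of the general technique only, a logistic regression on post-operative tumour factors can be set up as below. The feature names, toy data and resulting coefficients are hypothetical, not the trial's.

```python
# Sketch: logistic regression predicting receipt of supplemental EBRT after
# TARGIT-IORT from post-operative tumour factors (toy data for illustration).
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical rows: [tumour_size_cm, grade, node_positive, margin_positive, lobular]
X = np.array([
    [1.2, 1, 0, 0, 0],
    [2.8, 3, 1, 1, 0],
    [1.5, 2, 0, 0, 0],
    [3.1, 3, 1, 0, 1],
    [1.0, 1, 0, 0, 0],
    [2.2, 2, 1, 1, 0],
])
y = np.array([0, 1, 0, 1, 0, 1])  # 1 = supplemental EBRT was recommended

model = LogisticRegression().fit(X, y)
# Estimated probability that a new patient would have received supplemental EBRT.
print(model.predict_proba(np.array([[2.0, 2, 1, 0, 0]]))[0, 1])
```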
Finally, we explored the reason for the statistically significant difference in non-breast cancer mortality already seen between the two randomised arms. We compared non-breast cancer mortality between those who had received TARGIT-IORT followed by supplemental EBRT and those who received EBRT. Any difference between these two groups would be indicative of a beneficial effect of TARGIT-IORT, because both groups had received EBRT. The first patient was randomised in March 2000, and the last in June 2012. The reference date for completeness of follow-up was May 2, 2018. The reference date for analysis was July 3, 2019, so that all events in the entire population up until July 2, 2019 were included in the analysis of hazard ratios. Point estimates are given at 5 years, at which point the follow-up is complete, and hazard ratios are estimated for the full length of the follow-up period, i.e., the length of time from randomisation to the date of the latest follow-up for each individual patient. STATA version 16.0 was used for data compilation, validation and analysis. The chief investigator/corresponding author and the trial statistician had access to all data sent by the trial centre for the analysis; all authors were responsible for the decision to submit the manuscript. Since the last analysis, trial oversight has been provided by an independent steering committee appointed by the Health Technology Assessment Programme of the National Institute for Health Research, Department of Health, UK.

Results
In total, 1140 patients were randomised to TARGIT-IORT and 1158 to whole-breast radiotherapy. Patients were recruited from ten countries (24.7% from the UK, 65.1% from the rest of Europe, 9.4% from the USA/Canada and 0.8% from other countries). Supplementary Table 1 shows the characteristics of the trial patients. As previously published, 9 there was no statistically significant difference in local recurrence-free survival (167 vs. 147 events, hazard ratio 1.13, 95% confidence interval 0.91–1.41, P = 0.28), distant disease-free survival (133 vs. 148 events, HR 0.88, 0.69–1.12, P = 0.30), mastectomy-free survival (170 vs. 175 events, HR 0.96, 0.78–1.19, P = 0.74) or breast cancer mortality (65 vs. 57 events, HR 1.12, 0.78–1.60, P = 0.54). There was a significant reduction in non-breast cancer mortality with TARGIT-IORT (45 vs. 74 events, HR 0.59, 0.40–0.86, P = 0.005). In addition, no difference was found in local recurrence-free survival when the following comparisons were made: EBRT patients vs. TARGIT-IORT patients who received additional EBRT (HR 1.19, 0.83–1.71, P = 0.3422), and EBRT patients vs. TARGIT-IORT patients who did not receive additional EBRT (HR 1.12, 0.88–1.41, P = 0.3661) (Supplementary Fig. 1). The new analysis presented in this paper examines four specific aspects of the data accrued from this large randomised trial. Firstly, the difference in the primary outcome of survival without local recurrence between TARGIT-IORT and EBRT was not significant for any of the tumour subgroups, namely pathological tumour size, grade, ER status, PgR status, HER2 status and lymph node status (Table 1 and Fig. 1). Prompted by comments from reviewers, we created subgroups using combinations of factors and performed the following analyses. The largest of these comprised 1468 (64%) ‘lower-risk’ patients whose tumours were neither >2 cm, nor grade 3, nor ER-negative, irrespective of age or lymph node status (59% were <65 years old and 17% were node-positive). The remaining 830 patients (‘not-lower-risk’) had at least one of these risk factors.
Table 1 Subgroup analysis: the numbers of local recurrence events and deaths, and point estimates of local recurrence-free survival, are given at 5 years, when the follow-up is complete as per protocol.
Fig. 1: Forest plot showing local recurrence-free survival and overall survival as per tumour subgroups. Each box represents the amount of data, and the horizontal lines show the 95% confidence intervals. The dashed vertical line passes through the hazard ratio for all patients.
Analysis within each of these two subgroups found no difference in local control between the randomised arms TARGIT-IORT vs. EBRT, either by intention-to-treat (‘lower-risk’: n = 1468, HR 1.05 (95% CI 0.77–1.44), P = 0.7450; ‘not-lower-risk’: n = 830, HR 1.24 (95% CI 0.91–1.70), P = 0.1715) or after excluding those who received supplemental EBRT after TARGIT-IORT (‘lower-risk’: n = 1331, HR 1.02 (0.73–1.43), P = 0.8859; ‘not-lower-risk’: n = 726, HR 1.28 (0.92–1.79), P = 0.1404). Similarly, no difference was found for the higher-risk subgroup of patients with triple-negative breast cancers, by intention-to-treat (n = 143, HR 0.87 (0.45–1.67), P = 0.6840) or after excluding those who received supplemental EBRT (n = 131, HR 0.84 (0.43–1.66), P = 0.6300), or for those with HER2-negative tumours that were either ER- or PR-negative, by intention-to-treat (n = 317, HR 1.01 (0.60–1.69), P = 0.9730) or after excluding those who received supplemental EBRT (n = 281, HR 1.03 (0.60–1.78), P = 0.9039). However, for the 1468 ‘lower-risk’ patients (tumours neither >2 cm, nor grade 3, nor ER-negative), overall survival with TARGIT-IORT was 4.2% better at 12 years (TARGIT-IORT 91.7% vs. EBRT 87.3%, HR 0.65 (95% CI 0.44–0.96), P = 0.0308). Figure 1 also shows the overall survival outcomes in each main subgroup. Overall survival was significantly better, by 4.4% (89.3 vs. 84.9%) at 12 years, with TARGIT-IORT compared with EBRT in those with grade 1 or 2 cancers (Fig. 2, n = 1797, HR 0.72, 95% CI 0.53–0.98, P = 0.0361). We recognise of course that these are subgroup analyses, with all the usual caveats.
Fig. 2: Subgroup analysis: overall survival in those with grade 1 or 2 cancers, n = 1797, and those with grade 3 cancers, n = 443. In total, 80% of the patients had grade 1 or 2 cancers. Of those with grade 1 or 2 cancers vs. grade 3 cancers, 20 vs. 30% were node-positive, and 4 vs. 29% were ER-negative, respectively. There was no difference between these groups in the rate of additional EBRT given after TARGIT-IORT.
Secondly, the analysis of an interaction between local recurrence and mortality found that the prognostic significance of local recurrence in the EBRT arm was different from that of local recurrence in the TARGIT-IORT arm. Local recurrence in the EBRT arm, but not in the TARGIT-IORT arm, predicted a higher risk of distant disease (interaction P = 0.008, Fig. 3a), breast cancer mortality (interaction P = 0.003, Fig. 3b) and overall mortality (interaction P = 0.020, Fig. 3c). This interaction might be better appreciated in terms of the raw numbers of long-term deaths amongst those who had a local recurrence within 5 years: 3/24 (13%) died in the TARGIT-IORT arm vs. 7/11 (63%) in the EBRT arm. The mean survival duration of patients who had an early local recurrence was 8.7 years (SD 3.1) in the TARGIT-IORT arm vs. 6.1 years (SD 3.3) in the EBRT arm. Fig. 3: TARGIT-IORT vs. EBRT: contrasting long-term outcome after local recurrence.
The hazards of distant metastasis (top left), breast cancer death (top right) and any death (bottom)—interaction with local recurrence as a time-dependent covariate. The hazards for patients who have a local recurrence after EBRT, shown by the rising red line in each graph, are significantly higher than for those who have a local recurrence after TARGIT-IORT, which in turn are the same as for those without any local recurrence. Please note that these figures denote cumulative hazards for each interaction group, whereas the curves in Fig. 4 are Kaplan–Meier estimates of cumulative incidence.
Thirdly, the proportion of patients who ultimately received supplemental EBRT in addition to TARGIT-IORT in each prognostic subgroup is given in Table 2, which also gives the local recurrence and mortality events, the cumulative incidence of local recurrence, and local control rates as per treatment received. The regression model (sensitivity 71%, specificity 67%, correct classification in 68% of cases) for predicting the use of supplemental EBRT in an individual patient is available on the web and is best understood through direct interaction. We urge readers to open the link and input values for a hypothetical patient; this best illustrates how a combination of factors influences the decision. Two example cases are illustrated in Supplementary Fig. 2. To achieve results similar to those achieved within the trial, clinicians would want to emulate the way the risk-adapted approach was used within the trial. This interactive tool gives the probability that any individual patient would have received supplemental EBRT had they participated in the TARGIT-A trial. Using this information could facilitate an informed decision about recommending supplemental EBRT for an individual patient.
Table 2 Total number of patients, total numbers in each arm, and the proportion of patients receiving supplemental EBRT among those randomised to receive TARGIT-IORT.
Finally, an exploratory analysis sought an explanation for the difference in non-breast cancer mortality found in the main analysis between the two randomised arms (HR 0.59 (0.40–0.86), P = 0.005). The numbers of non-breast cancer deaths were 45/1140 in those randomised to TARGIT-IORT (6/241 amongst those who received additional EBRT and 39/899 amongst the others) and 74/1158 amongst those randomised to EBRT. Most of this difference (79% of the difference in the number of deaths) was accounted for by differences in deaths from pulmonary and cardiovascular causes and from other cancers. Two of the major risk factors for these conditions, age and body mass index, were equally distributed in the two randomised arms (Supplementary Table 2, top). Of the 1140 patients randomised to TARGIT-IORT, 241 were deemed by the treating multidisciplinary team to have a higher risk of breast cancer relapse and were therefore selected to receive supplemental EBRT. While this group would have a higher risk of death from breast cancer, they should not have an increased risk of death from non-breast cancer causes—this was corroborated by the well-balanced distribution of the two recorded risk factors (age and BMI, Supplementary Table 2, bottom).
We found that patients who had TARGIT-IORT plus EBRT (n = 241) had a statistically significant reduction in non-breast cancer mortality (HR 0.38 (95% CI 0.17–0.88), P = 0.009) when compared with those randomised to EBRT (n = 1158), in addition to the significant difference seen in the remaining 899 patients (HR 0.65 (95% CI 0.44–0.96), P = 0.0265) (Fig. 4).
Fig. 4: Randomised comparison of non-breast cancer mortality showing significantly fewer deaths in patients randomised to TARGIT-IORT (top graph), and non-randomised comparisons to assess the contributions to the difference seen in the randomised comparison: from the delivery of TARGIT-IORT (bottom left) and from the avoidance of EBRT (bottom right). Please note that 40% of the 1158 patients in the EBRT arm also received a tumour-bed boost, which was not given to those who had received TARGIT-IORT.
Discussion
The long-term results of the TARGIT-A trial 9 have shown that there was no statistically significant difference between EBRT and the approach of risk-adapted TARGIT-IORT during lumpectomy for local recurrence-free survival, invasive local recurrence-free survival, mastectomy-free survival, distant disease-free survival or breast cancer mortality. Mortality from other causes was significantly lower in the TARGIT-IORT arm. In this paper, we found that the results remain the same in each tumour subgroup: no particular subgroup fares better or worse in terms of the difference in local recurrence-free survival between TARGIT-IORT and EBRT. This finding could make it easier for clinicians to select patients. To be eligible for risk-adapted TARGIT-IORT, patients simply need to fulfil the eligibility criteria for the TARGIT-A trial (≥45 years of age, with invasive ductal carcinoma ≤3.5 cm in size, cN0–N1 and suitable for breast conservation). Once the final histopathology is available postoperatively, the interactive tool based on our regression model could facilitate decision-making about the need for supplemental EBRT: a clinician can input the characteristics of an individual patient and their tumour into this web-based tool, and its output will show the probability that such a patient would have received supplemental EBRT after TARGIT-IORT within the TARGIT-A trial. This can help the clinician to make an individualised decision for their patient, so that the outcome should be similar to that achieved within the TARGIT-A trial. An important point that traditionally causes concern is the long-term prognosis of a patient with a local recurrence. A local recurrence has generally been regarded as a harbinger of early death. This idea is supported by the results of the meta-analysis of breast-conserving surgery and whole-breast external beam radiotherapy by the Early Breast Cancer Trialists' Collaborative Group, 13 which determined that for every four additional women who had a local recurrence, one died from her disease. Consistent with this long-held belief, the analysis of the TARGIT-A trial presented in this paper also found that a local recurrence after EBRT was indeed a powerful predictor of distant metastases, breast cancer mortality and overall mortality. In contrast, a local recurrence after TARGIT-IORT did not have any impact on distant metastases, breast cancer mortality or overall mortality (Fig. 3).
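The Methods describe this analysis as a Cox regression with local recurrence entered as a time-dependent covariate. As a hedged sketch of how such an analysis is typically set up (assuming the lifelines library; the data below are invented for illustration, and the trial's actual model also included the interaction of recurrence with randomised arm): follow-up is split into intervals at the time of recurrence, so the covariate can switch from 0 to 1 mid-follow-up.

```python
import pandas as pd
from lifelines import CoxTimeVaryingFitter

# Long format: one row per interval of constant covariate values (times in
# years). Patients 1, 4 and 6 recur, so their follow-up is split at that time.
long_df = pd.DataFrame({
    "id":         [1, 1, 2, 3, 4, 4, 5, 6, 6],
    "start":      [0.0, 3.0, 0.0, 0.0, 0.0, 2.0, 0.0, 0.0, 4.0],
    "stop":       [3.0, 6.0, 10.0, 7.0, 2.0, 9.0, 8.0, 4.0, 5.0],
    "recurrence": [0, 1, 0, 0, 0, 1, 0, 0, 1],  # time-dependent covariate
    "death":      [0, 1, 0, 1, 0, 0, 0, 0, 1],  # event at end of interval
})

ctv = CoxTimeVaryingFitter()
ctv.fit(long_df, id_col="id", event_col="death", start_col="start", stop_col="stop")
ctv.print_summary()  # hazard ratio for the time-dependent recurrence term
```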
We recognise that the number of events is small, but the statistical significance of this finding is very high (P = 0.003). This remarkable finding suggests that local recurrences after TARGIT-IORT are not indicative of the poor prognosis expected with local recurrences after whole-breast external beam radiotherapy. Possible explanations for this important observation need further research, but some suggestions about its mechanisms are the following. A simple explanation might be that the majority of local recurrences after TARGIT-IORT are new primaries, which normally do not have a poor prognosis, while EBRT may be suppressing these good-prognosis cancers. This idea is corroborated by the much higher DCIS:invasive ratio (12:32 vs. 1:19) in the TARGIT-IORT arm compared with the EBRT arm, raising the possibility of overdiagnosis and ascertainment bias because of a potentially more frequent use of mammography in those randomised to TARGIT-IORT. This may have led to a higher chance of detecting DCIS or invasive cancers that might never have progressed. However, the detection of such good-prognosis cancers in the TARGIT-IORT arm did not cause any reduction in mastectomy-free survival. We might also speculate that after EBRT a local recurrence contains only very aggressive cells that are a marker of incurable distant disease, or consists of metastatic cells that grow in the tumour-supportive wound environment. TARGIT-IORT appears to favourably influence wound fluid composition, and this may be a mechanism by which it might have a unique radiobiology that mainly allows the expression of local recurrences that are curable by earlier surgery and a change of systemic (usually endocrine) therapy. As a corollary, one might argue that avoiding radiotherapy altogether might have enhanced such an effect even further—but randomised evidence tells us that it does not: in trials of EBRT vs. no EBRT, for every four local recurrences that occur in the absence of EBRT, there is one additional death 13. So TARGIT-IORT may be stopping the growth of local recurrences that have the potential to spread and cause death, whilst allowing those local recurrences that are a marker of curable distant disease to grow and raise an early flag, just like the canary in the coal mine. Further research comparing the molecular characteristics of local recurrences between the two arms of the trial could give more insight into the biological nature of these recurrences. The other striking outcome of the trial was the significant reduction in non-breast cancer mortality in patients randomised to TARGIT-IORT. Within those randomised to TARGIT-IORT, there were some patients (n = 241) who also received supplemental EBRT because they had a higher risk of breast cancer relapse. However, their risk of non-breast cancer death should not be any different from that of those who were randomised to EBRT. Surprisingly, there was a statistically significant difference in non-breast cancer mortality (HR 2.62 (1.14–6.04), P = 0.0093) between them and those allocated to EBRT. As both these groups received EBRT, the reduction in non-breast cancer mortality cannot be attributed to the absence of EBRT, but rather must be attributed to the presence of TARGIT-IORT. There is, however, one caveat: 40% of those in the EBRT arm also received a tumour-bed boost (reminding us that TARGIT-A was a medium-risk cohort), so this higher dose may have contributed to the effect. In any case, the major baseline risk factors for these deaths (age and BMI) were well balanced between these non-randomised groups (Supplementary Table 1).
This long-term outcome is consistent with previous reports and prompts the hypothesis that a single large dose of radiation such as TARGIT-IORT, given during the trauma of surgery, might have an abscopal effect, i.e., an effect away from the site of irradiation, by influencing the tumour microenvironment or through immunological mechanisms. 14,15,16,17,18,19,20,21,22,23,24,25,26,27,28 Strange as it may seem, such an abscopal effect appears to give long-term protection against deaths from cardiovascular causes and other cancers. The early separation of lines in the Kaplan–Meier curves, starting soon after randomisation, also suggests such a ‘drug-like’ effect, while a separation starting a few years later in the comparison of TARGIT-IORT alone vs. EBRT suggests an effect of avoiding EBRT (Fig. 4). We believe that, for the effect of immediate TARGIT-IORT on wound fluid and for its potential abscopal effects, the temporal proximity of TARGIT-IORT to surgery is crucially important. Such delivery of TARGIT-IORT to the fresh tumour bed immediately after lumpectomy, without any additional trauma, did not happen in the delayed IORT trial. 29 The IORT in the experimental arm of that separate study 29 was delivered at a median of 37 days postoperatively, by re-opening the wound. This difference in the timing of radiotherapy may well offer an explanation for the difference in non-breast cancer mortality outcomes. Of course, we need to recognise that these data only generate a hypothesis; they do not prove an abscopal effect. The TARGIT-B superiority trial, in which patients are being recruited from 38 centres in 15 countries, is comparing a TARGIT-IORT boost during lumpectomy, in addition to post-operative whole-breast radiotherapy, vs. conventional EBRT (i.e., TARGIT-IORT + EBRT vs. EBRT). It will provide randomised data to assess such putative abscopal effects. In conclusion, these long-term data from the TARGIT-A trial show that, for every subgroup of patients with breast cancer who meet our trial selection criteria, risk-adapted single-dose TARGIT-IORT during lumpectomy is an effective and safe alternative to a several-weeks' course of post-operative EBRT. The observation that local recurrence after TARGIT-IORT, unlike after EBRT, does not carry a poor prognosis is reassuring. The potential beneficial effect of TARGIT-IORT during surgery on non-breast cancer mortality seen in this trial increases the importance of the forthcoming randomised data on non-breast cancer mortality from the TARGIT-B trial.
A breast cancer therapy that requires just one shot of radiotherapy is as effective as traditional radiotherapy, and avoids potential damage to nearby organs, according to a paper by UCL experts. The results, published in the British Journal of Cancer, mean that eight out of ten patients who receive the treatment, TARGIT-IORT, will not need a long course of post-operative external beam radiotherapy (EBRT). These results strengthen and expand previously published outcomes. Patients who received the treatment were less likely to go on to die of causes other than breast cancer, such as cardiovascular disease (including heart attacks), lung problems or other cancers. As well as avoiding scattered radiation from EBRT that can damage nearby vital organs, delivering TARGIT-IORT during the lumpectomy procedure seems to lower the likelihood of death if patients do go on to develop cardiovascular disease, protecting in a drug-like manner. This was the case even when EBRT was also given post-operatively, and is thought to be because the treatment changes the microenvironment in the lumpectomy wound. The researchers say that delivering radiation immediately to the site where the tumor was can reduce the adverse effects of surgical trauma, make the site less conducive to cancer growth, and could have an 'abscopal' (distant) effect. This is where a treatment such as radiotherapy has a positive effect on tissue away from the operation site, an effect increasingly recognized as a beneficial immunological action. Previous studies have shown that the treatment has fewer radiation-related side effects compared with conventional whole breast radiotherapy, with less pain, a superior cosmetic outcome with fewer changes to the breast as a whole, and a better quality of life. Lead author Professor Jayant Vaidya (UCL Surgery & Interventional Science) said: "With TARGIT-IORT, women can have their surgery and radiation treatment for breast cancer at the same time. This reduces the amount of time spent in hospital and enables women to recover more quickly, meaning they can get back to their lives more quickly." TARGIT-IORT is delivered immediately after tumor removal (lumpectomy), and under the same anesthetic, via a small ball-shaped device placed inside the breast, directly where the cancer had been. The single-dose treatment lasts for around 20-30 minutes and replaces the need for extra hospital visits in eight out of ten cases. Further tumor subgroup analysis also found that there was a significant overall survival benefit with TARGIT-IORT in patients with grade 1 or 2 cancer. Professor Vaidya added: "These new results make it clear that TARGIT-IORT is effective in all tumor subgroups of invasive duct cancer, the most common type of breast cancer. Our new online tool can help clinicians make a decision about additional radiotherapy (recommended in a small proportion of cases) for each individual patient. "The finding that there are fewer deaths from the avoidance of scattered radiation and from the possible abscopal effect of TARGIT-IORT is important and should fuel further research, opening doors to new treatments." For the clinical trial, which started in March 2000, 2,298 women aged 45 or over with invasive breast cancer and a tumor up to 3.5 cm in diameter were randomly assigned to receive either TARGIT-IORT during lumpectomy or post-operative EBRT. The trial was designed and run from UCL, and involved 32 hospitals and medical centers in ten countries: the UK, France, Germany, Italy, Norway, Poland, Switzerland, the U.S., Canada and Australia.
Professor Michael Baum (UCL Surgery & Interventional Science) said: "These results are the highest level of evidence proving not only the effectiveness of TARGIT-IORT but also confirming that it avoids deaths from other causes. "The new data is biologically very interesting and the new tools will make its application in routine clinical practice much easier. I am pleased that it will benefit thousands of breast cancer patients around the world." Professor Jeffrey S Tobias (Professor of Clinical Oncology, UCL and UCLH) said: "With TARGIT-IORT, the majority of patients presenting with early localized breast cancer will never need any further radiotherapy. "They will avoid all the side effects of whole breast radiotherapy. The chance of remaining free of local recurrence (in the breast itself) is the same as with traditional treatment, but our new analysis shows that even if they do get a local relapse, it will not detract from an excellent prognosis—as good as not having a relapse—a rather different state of affairs from the more serious outlook if this were to happen after EBRT." To date, 45,000 patients in 260 centers in 38 countries have received TARGIT-IORT. The clinicians hope that, following the latest results, more patients can be offered the treatment instead of EBRT, both in the UK and around the world.
10.1038/s41416-021-01440-8
Earth
New research predicts a doubling of coastal erosion by mid-century
"Doubling of coastal erosion under rising sea level by mid-century in Hawai'i." Natural Hazards DOI: 10.1007/s11069-015-1698-6
http://dx.doi.org/10.1007/s11069-015-1698-6
https://phys.org/news/2015-03-coastal-erosion-mid-century.html
Abstract
Chronic erosion in Hawaii causes beach loss, damages homes and infrastructure, and endangers critical habitat. These problems will likely worsen with increased sea level rise (SLR). We forecast future coastal change by combining historical shoreline trends with projected accelerations in SLR (IPCC RCP8.5) using the Davidson-Arnott profile model. The resulting erosion hazard zones are overlain on aerial photos and other GIS layers to provide a tool for identifying assets exposed to future coastal erosion. We estimate rates and distances of shoreline change for ten study sites across the Hawaiian Islands. Excluding one beach (Kailua) historically dominated by accretion, approximately 92% and 96% of the shorelines studied are projected to retreat by 2050 and 2100, respectively. Most projections (~80%) range between 1 and 24 m of landward movement by 2050 (relative to 2005) and between 4 and 60 m by 2100, except at Kailua, which is projected to begin receding around 2050. Compared to projections based only on historical extrapolation, those that include accelerated SLR have an average of 5.4 ± 0.4 m (± standard deviation of the average) of additional shoreline recession by 2050 and 18.7 ± 1.5 m of additional recession by 2100. Due to increasing SLR, the average shoreline recession by 2050 is nearly twice the historical extrapolation, and by 2100 it is nearly 2.5 times the historical extrapolation. Our approach accounts for accretion and long-term sediment processes (based on historical trends) in projecting future shoreline position. However, it does not incorporate potential future changes in nearshore hydrodynamics associated with accelerated SLR.

1 Introduction
Coastal erosion negatively affects Hawaii's tourism-based economy, limits public beach access and cultural practices, and damages homes, infrastructure, and critical habitats for endangered wildlife. Fletcher et al. (2013) found that seventy percent of all sandy shoreline on the islands of Oahu, Maui, and Kauai is chronically eroding; nine percent of these shorelines were completely lost to erosion during their 80-year analysis period. As global mean sea level is predicted to rise dramatically over the next century (Church et al. 2013; Kopp et al. 2014), government officials, nonprofit groups, and property owners wonder how increased sea level rise (SLR) will affect their ongoing struggle to manage retreating shorelines. Tidal records indicate that the Hawaiian Islands of Maui, Oahu, and Kauai have experienced at least a century of relative SLR at rates from 1.50 to 2.32 mm/year. Romine et al. (2013) investigated shoreline trends on islands with different SLR rates and concluded that SLR is linked to coastal erosion in Hawaii. However, shoreline change rates around each island vary greatly (erosion rates up to −1.8 ± 0.3 m/year and accretion rates up to 1.7 ± 0.6 m/year; Romine and Fletcher 2013), with segments of erosion and accretion separated by tens to hundreds of meters alongshore despite rather homogeneous island-wide SLR trends. This suggests that the influence of SLR on shoreline change is presently minor compared with sediment availability (the sum of sources and sinks) related to human impacts and to persistent physical processes such as eolian transport, cross-shore transport, and gradients in longshore sediment transport.
Future accelerated SLR is expected to have an increased effect on coastal morphology (Stive 2004) and to promote erosion of numerous Hawaiian beaches (Romine et al. 2013). The Intergovernmental Panel on Climate Change (IPCC) Fifth Assessment Report (AR5; 2013) projects 0.52–0.98 m of SLR by 2100 relative to 1986–2005 for Representative Concentration Pathway (RCP) 8.5 (the "business as usual" scenario; Church et al. 2013). This gives a rate during 2080–2100 of 8–16 mm/year, up to an order of magnitude larger than the Honolulu tide gauge SLR rate (1.50 ± 0.25 mm/year) for the previous century (1905–2006), when the Honolulu SLR trend was similar to the estimated global mean trend (e.g., 1.7 ± 0.2 mm/year; Church and White 2011). The current IPCC projections may underestimate SLR because they do not include the results of recent studies indicating increased ice melt for Greenland (Helm et al. 2014) and West Antarctica (Joughin et al. 2014; Rignot et al. 2014). Sediment transport, and thus shoreline migration, is the result of multiple nonlinear processes that dynamically interact with the existing morphology over a variety of temporal and spatial scales (Stive et al. 2002; Hanson et al. 2003). As a result of increased SLR, sediment-deficient low-lying coastal areas will experience enhanced erosion and inundation, determined by sediment availability and local coastal slope. Beaches will be further shaped by changes in sediment transport patterns as a result of higher water levels over fringing reefs (Grady et al. 2013), climate-related modifications in reef geomorphology and sediment production (Perry et al. 2011), and changes in storminess and wave climate (Aucan et al. 2012). Numerical models have the potential to describe beach evolution more accurately than long-term trends, but they often require data at spatial and temporal densities that are not available. Such methods are therefore difficult to apply to the multidecadal timescales that are the focus of this paper (Hanson et al. 2003). For baseline assessment over large coastal regions, it is therefore necessary to develop empirical methods that provide a first-order approximation of erosion exposure and its uncertainty. Communicating hazard uncertainty to coastal managers (Pilkey et al. 1993; Thieler et al. 2000; Pilkey and Cooper 2004, etc.) enables them to make decisions based on levels of risk. It also helps coastal managers understand that, as new data become available, updated projections will replace old ones (Pilkey and Cooper 2004). Historical data are commonly used to provide long-term information on coastal erosion (Fig. 1), but extrapolating historical trends is insufficient given the projected accelerations in SLR. Hwang (2005) suggests multiplying the historical trend by an SLR adjustment factor, such as 10%, and then extrapolating the adjusted trend. Although easy to implement, this approach does not allow for acceleration and assumes that the effect of accelerated SLR on coastal erosion is proportional to the historical rate, which is not physically justifiable. Komar et al. (1999), following Gibb (1995), combine the extrapolated long-term trend, a rate of beach retreat due to projected SLR, and dune erosion due to extreme storms to determine a coastal hazard zone (CHZ); however, since the authors found sediment movement within their Oregon Coast study area to be dominated by episodic events, only the dune recession component was used to determine the CHZ. Hawaii beaches, in contrast, are highly influenced by sediment flux due to persistent or seasonal wave conditions.
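To make the simple extrapolation approaches above concrete: the historical shoreline change rate is the slope of a least-squares line through dated shoreline positions (see Fig. 1 below), and a Hwang (2005)-style projection scales that rate by an adjustment factor before extending the trend. The sketch below illustrates this with hypothetical transect data, not the study's measurements.

```python
import numpy as np

# Dated shoreline positions along one transect (m relative to a baseline;
# more negative = further landward). Values are hypothetical.
years = np.array([1928.0, 1950.0, 1975.0, 1988.0, 2005.0])
positions = np.array([0.0, -3.5, -8.0, -10.1, -14.2])

# Historical shoreline change rate = slope of the least-squares fit (m/year).
rate, intercept = np.polyfit(years, positions, 1)
ref_year = 2005.0
ref_pos = intercept + rate * ref_year  # fitted position in the reference year

def projected_position(year, slr_adjustment=0.0):
    # Extrapolate the (optionally adjusted) historical rate from 2005;
    # slr_adjustment=0.10 applies a Hwang-style 10% increase to the rate.
    return ref_pos + rate * (1.0 + slr_adjustment) * (year - ref_year)

print(round(rate, 3))                            # ~-0.18 m/yr, chronic erosion
print(round(projected_position(2050, 0.10), 1))  # adjusted projection for 2050
```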
Hawaii beaches, in contrast, are highly influenced by sediment flux due to persistent or seasonal wave conditions.

Fig. 1: Relative cross-shore positions are recorded along transects (yellow vertical lines) spaced 20 m apart along the shore. The shoreline change rate is the slope of the line fit to the historical data.

Pilkey and Cooper (2004) suggest extrapolating historical shoreline trends in combination with an "expert eye" to assess the effects of geologic constraints, sediment availability, and engineered structures on shoreline migration. Yates et al. (2011) combine historical trend extrapolation with the Bruun rule (Bruun 1962), as suggested by the EUROSION (2004) project; they also give an example of Pilkey and Cooper's (2004) "expert eye" by averaging trends within a homogeneous region and then combining the extrapolated average with Bruun estimates. Houston and Dean (2014) use sediment budgets to quantify sources of shoreline change in Florida. Like Yates et al. (2011), they use the Bruun model to account for the effects of SLR. Recently, Ranasinghe et al. (2011) tested their process-based probabilistic coastline recession model on Narrabeen Beach, Australia. Although they used a temporally dense, 30-year collection of wave- and water-level data, the authors speculate that global wave hindcast models (e.g., WAVEWATCH III) would produce similar coastal recession predictions. Gutierrez et al. (2011) produced probabilistic predictions of shoreline retreat under accelerated SLR using a Bayesian network (BN). Their BN identified SLR as the major influence on shoreline stability in their application to the Atlantic coast. Using the same parameters as Gutierrez et al. (2011), Yates and Le Cozannet (2012) identified geomorphology (i.e., rocky cliffs and platforms, erodible cliffs, beaches, and wetlands) as the major influence on shoreline stability, finding that the inclusion of alongshore sediment transport, sediment budget, and anthropogenic activities may improve BN performance on European coasts. For recent reviews of shoreline change prediction incorporating SLR, see Cazenave and Le Cozannet (2013), Fitzgerald et al. (2008), and Shand et al. (2013). The aforementioned approaches of Komar et al. (1999), Yates et al. (2011), and Houston and Dean (2014) estimate shoreline change by quantifying the separate mechanisms of beach change and assuming their effects are additive. Each approach includes an estimate of shoreline change due to projected SLR. Yates et al. (2011) and Houston and Dean (2014) employ the often-used geometric relation known as the Bruun rule (Bruun 1962, 1988; Schwartz 1967). Based on the conservation of volume, Bruun (1962) proposed that, in the absence of sediment sources and sinks, a beach profile gradually re-equilibrates after a rise in relative mean sea level, as sediment is eroded from the upper beach profile and deposited onto the adjacent seafloor. Bruun's rule (Fig. 2a) relates retreat to the rise in sea level, as given in Eq. (1) below.

Fig. 2: (a) According to the Bruun rule, an increase in sea level S causes a shoreline retreat R due to erosion of the upper beach and sediment deposition offshore. (b) In contrast, the R-DA model assumes that all sediment is transported landward while still resulting in an upward and landward translation of the nearshore profile and dune.
Arrows indicate the general direction of net sediment movement.

$$\Delta y = - S \times \frac{L}{h + B} = - \frac{S}{\tan \beta}$$ (1)

Equation (1) relates shoreline change Δy (Δy < 0 indicates retreat) to sea level rise S, where L is the horizontal length of the active profile, h is the depth of the active profile base, and B is the berm crest elevation above sea level. Here tan β is the average slope of the active profile (e.g., Komar 1998). On its own, the Bruun model is virtually unusable in open-ocean coastal environments due to the theory's limiting assumptions of physical setting (constant longshore transport, no sediment sources or sinks) (List et al. 1997; Thieler et al. 2000; Cooper and Pilkey 2004). The assumption of no alongshore change in the sediment budget is an important limitation for Pacific Island beaches, where sediment exchange, especially longshore transport, can dominate shoreline morphology (Dail et al. 2000; Norcross et al. 2003b). Although the Bruun rule projects only shoreline recession due to SLR, we find shorelines accreting where sediment gain offsets the landward migration due to SLR. Variations of the Bruun rule have been proposed that include landward transport to dunes (Rosati et al. 2013) and net longshore sediment movement (Hands 1980, 1983; Dean and Maurmeyer 1983; Everts 1985). Similarly, Yates et al. (2011) and Houston and Dean (2014) combine the Bruun rule with estimates of the net sediment budget. The shoreface translation model (Cowell et al. 1995) also assumes that the profile shape remains constant and is translated in response to a rise in relative sea level, based on conservation of volume (like Bruun), the net sediment budget, and surrounding geology. Allowing sediment sources to offset the "Bruun effect" (beach profile readjustment in response to SLR) has been found to improve model predictions (SCOR Working Group 89 1991). However, large uncertainties in sediment budget estimates can diminish their value in improving shoreline forecasts based on the Bruun approach (List et al. 1997). Even with terms representing the sediment budget, the Bruun model remains controversial. Some field and laboratory experiments support the Bruun model (e.g., Hands 1979, 1980, 1983; Mimura and Nobuoka 1995; Zhang et al. 2004), while others argue that experimental flaws hinder such experiments from validating the model (e.g., SCOR Working Group 89 1991; Thieler et al. 2000; Cooper and Pilkey 2004; Davidson-Arnott 2005). To date, no study has produced a comprehensive, well-accepted verification of the Bruun model (Ranasinghe and Stive 2009). In reviewing earlier studies, however, the Scientific Committee on Ocean Research (SCOR Working Group 89 1991) found that the Bruun model was valid in its upward and landward translation of the profile, but that its quantitative estimates are very coarse approximations. A geometric model that has emerged as an alternative to the Bruun rule was proposed by Davidson-Arnott (2005). The model, referred to as R-DA, is similar to the Bruun model in that an upward and landward translation of the profile is predicted, but the underlying assumptions of the two models are quite different. In the R-DA model, it is assumed that as sea level rises, the beach and foredune are eroded and sediment is transported landward, causing a landward and upward migration of the beach–foredune intersection.
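To make the geometric difference between the two rules concrete, the following is a minimal Python sketch (ours, not from the paper). It evaluates Eq. (1) and the R-DA variant, which has the same form but uses the nearshore slope of only the submerged portion of the active profile, as described next; all profile dimensions and the SLR value are assumed for illustration.

```python
# Minimal sketch (not the authors' code): shoreline retreat under the
# Bruun rule vs. the R-DA model for an assumed sea level rise. All
# numbers are illustrative placeholders, not values from the study sites.

def bruun_retreat(slr_m, L_m, h_m, B_m):
    """Bruun (1962), Eq. (1): retreat = -S * L / (h + B).

    slr_m : sea level rise S (m)
    L_m   : horizontal length of the active profile (m)
    h_m   : depth of the active profile base (m)
    B_m   : berm crest elevation above sea level (m)
    """
    return -slr_m * L_m / (h_m + B_m)

def rda_retreat(slr_m, tan_beta_nearshore):
    """Davidson-Arnott (2005): same form, but tan(beta) is averaged over
    only the submerged portion of the active profile, so B drops out."""
    return -slr_m / tan_beta_nearshore

S = 0.75                                                   # assumed SLR (m)
dy_bruun = bruun_retreat(S, L_m=300.0, h_m=6.0, B_m=1.5)   # -> -30.0 m
dy_rda = rda_retreat(S, tan_beta_nearshore=6.0 / 300.0)    # -> -37.5 m
print(f"Bruun: {dy_bruun:.1f} m, R-DA: {dy_rda:.1f} m")
```

With these assumed dimensions, R-DA predicts more retreat than Bruun because excluding the berm from the averaged slope makes tan β smaller.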
In the R-DA model there is, similarly, a net onshore migration of sediment in the nearshore, causing an upward and landward migration of the shoreline and of the seaward limit of the active profile (Fig. 2b). Davidson-Arnott (2005) notes that this landward sediment movement is consistent with observed landward dune migration, and inconsistent with the Bruun assumption that sediment is strictly eroded from the beach and deposited in the nearshore. The expression for shoreline migration given by R-DA is identical in form to that of Bruun, in that landward migration Δy is equal to (tan β)⁻¹ times the amount of SLR (right side of Eq. 1), but in R-DA the term tan β is the nearshore slope averaged over only the submerged portion of the active beach profile, not the entire profile, so it does not depend on B. By explicitly including beach–dune sediment exchange and landward eolian sediment transport, the R-DA model allows for preservation of the foredune system under rising sea levels. Davidson-Arnott (2005) hypothesizes that increased sea level will cause more frequent scarping of the foredune, which decreases vegetative cover; because sediment is more frequently exposed, there is an increase in the sediment transported from the face of the dune to the leeward dune slope. He further notes that when nearshore bars are present, they tend to oscillate about an equilibrium depth and distance to shore depending on wave activity. As sea level rises, the oscillating bar position gradually shifts landward as it adjusts to the new equilibrium depth and distance. He argues that this behavior holds for all sediment within the nearshore portion of the profile, providing a sediment source for the landward-migrating beach and dune complex. Although the R-DA model, like Bruun, assumes an entirely sand-bottomed profile, the R-DA hypotheses may be a more realistic basis for understanding a Hawaiian fringing reef setting, where reef-fringed dune–beach complexes are seen migrating landward across the underlying limestone platform that contours the slope of the shallow nearshore. We follow Yates et al. (2011) by modeling shoreline change with a combination of historical rates and an SLR-based mechanism, but we use R-DA instead of Bruun. Here, historical rates are used to implicitly include net sediment fluxes. Compared with process-based models, the empirical method provides a computationally efficient way of estimating future coastal erosion hazards over large geographic regions (spanning islands) using existing historical shoreline data and repeated beach surveys. This approach is particularly useful for reef-fringed islands where seasonal wave regimes interact with intricate reef morphology, complicating typical methods of estimating decadal patterns of net sediment transport, such as associating transport with incoming wave angle or other process-based methods. We develop probabilistic (80 %) erosion hazard zones, which are then overlain on geologic and/or development layers in a geographic information system (GIS). Future rates of shoreline change and distances of retreat (or advance) are also calculated. We analyze ten Hawaiian Island beach study sites representing varying conditions of geology, wave climate, and density of coastal development. We pay close attention to sources of uncertainty and the resulting uncertainty in the projected hazard areas.

2 Methods

Shoreline change in Hawaii is directly related to coastal setting, which varies greatly around each island.
After introducing regional coastal settings in Hawaii, we describe the data and our procedure for determining exposure to future erosion hazards.

2.1 Regional setting and study sites

The fringing reef assemblage in Hawaii (Fig. 3) is the result of carbonate accretion and erosion over recent glacial cycles (reviewed in Fletcher et al. 2008). The reef occurs as a shallow insular shelf that slopes gently seaward in the depth range of 0–20 m and abruptly drops off to a deeper, partially sand-covered terrace near 30 m depth. Eolianites originating during the last interglacial and the Holocene (Fletcher et al. 2005) are found in the nearshore and coastal plain. Beachrock slabs exist in the intertidal zone of some beaches, as well as in the nearshore. During glacial periods when sea level was lower, paleochannels were carved into the reef shelf sub-perpendicular to shore, and karstification of the exposed limestone created depressions and bathymetric complexity at depths now less than 10 m (Purdy 1974; Grossman and Fletcher 2004; Bochicchio et al. 2009).

Fig. 3: (a, b) Fringing reefs dominate coastal geomorphology and are an integral part of sediment dynamics on Hawaii beaches [a modified from Romine et al. (in review); b photo courtesy of the University of Hawaii Coastal Geology Group]. (c) The dominant swell regimes following Moberly and Chamberlain (1964) are shown with monitoring buoy locations (from Vitousek and Fletcher 2008). (d) The ten study locations span three Hawaiian Islands.

Sandy beaches in Hawaii are generally white in color and lack a terrigenous source. They are the product of reef bioerosion, mechanical erosion, and direct production of calcareous material by reef organisms such as foraminifera and echinoderms. Sand grains are mainly biogenic carbonate (Moberly and Chamberlain 1964; Harney et al. 2000), with a small contribution from eroded volcanic rock. The most abundant accumulation of sand on typical low-lying Hawaiian coasts lies in coastal plains that accreted during a late Holocene fall in sea level from around 3000 BP to the pre-modern era (Fletcher and Jones 1996; Grossman and Fletcher 1998). Within the nearshore environment, sediment can accumulate in isolated reef-top karst depressions and in paleochannels (Conger 2005; Bochicchio et al. 2009). Radiocarbon dating of beach, reef-top, and coastal plain sands indicates that most beach sand originated in the late-middle to late Holocene, with a notable lack of modern sand (Calhoun and Fletcher 1996; Fletcher and Jones 1996; Grossman and Fletcher 1998; Harney et al. 2000). Consequently, Hawaiian beaches are often the eroded seaward edge of sand-rich coastal plains (refer to Fig. 3b). Wave climate in Hawaii is related to shoreline aspect, with four general wave regimes impacting distinct island regions (Moberly and Chamberlain 1964; Fig. 3). The average directional wave spectrum is dominated by northeast tradewinds and North Pacific swells (Aucan 2006). The persistent tradewinds generate choppy seas with average deepwater wave heights of 2 m from the northeast during about 75 % of the year (Bodge and Sullivan 1999). North Pacific swells, which peak in the winter, typically generate waves around 4 m, while maximum swell events can generate wave heights up to 7.7 m annually (Vitousek and Fletcher 2008). Kona southerly storm waves and southern swell can have episodic impacts on leeward shores.
Interannual and decadal cycles such as ENSO and the Pacific Decadal Oscillation contribute to wave climate variability and thus to episodic coastal erosion (Rooney and Fletcher 2005). Because the wave regimes are directional, beach morphology is dependent on shoreline aspect (Moberly and Chamberlain 1964). Beaches on north- and west-facing shorelines tend to be the longest and widest, with reefs that are narrower, deeper, and more irregular. These north- and west-facing beaches exhibit large seasonal fluctuations due to oblique approaches of seasonally alternating swell directions. Our ten study areas were selected throughout the Hawaiian Islands of Kauai, Oahu, and Maui (Fig. 3) based on diversity of shoreline aspect, nearshore morphology, and density of development. The characteristics of each site are given in the online supplemental resource (Table S1).

2.2 Projected sea level rise

We use the IPCC AR5 high-end representative concentration pathway (RCP) 8.5 scenario, the "business as usual" scenario (Church et al. 2013). This scenario was selected after discussions with local government agency staff in Hawaii who, like others (e.g., Katsman et al. 2011), prefer the most cautious predictions for long-range planning purposes. We make the simplifying assumption that predicted sea level is normally distributed and centered about the IPCC projected median, with variance defined as the square of the average distance from the IPCC median estimate to the upper and lower limits of the "likely" range projections (Church et al. 2013).

2.3 Vertical land motion and local SLR

Moore (1970) and others attribute variations in relative SLR rates along the Hawaii Archipelago to variations in lithospheric flexure with distance from Hawaii Island. Because the century-long Honolulu Harbor tide gauge record indicates that sea level has risen at a rate similar to global mean sea level estimates, Moore (1970) and others conclude that the island of Oahu is vertically stable and is located on the lithospheric rise. Caccamise et al. (2005) suggest that variations in upper ocean water masses also contribute to the SLR rate difference between Honolulu and Hawaii Island; however, the authors note that their findings cannot be extended to multidecadal timescales due to the limited length (6 years) of their data. Thus, we follow Moore (1970) and assume that the island of Oahu is vertically stable; the Honolulu record then gives absolute SLR. We use the linear trend of the Honolulu record as a proxy for absolute SLR of the waters surrounding Oahu, Maui, and Kauai. Acceleration of local SLR has not been detected in Hawaii tide gauge records over the century of record, likely because its signal has been masked by variability in climate (e.g., tradewinds; Merrifield and Maltrud 2011). Our approach (explained in detail in Sect. 2.6) involves extrapolating the historical shoreline change trend, which inherently includes the effects of historical rates of relative SLR, including island subsidence. We assume that vertical land velocity was constant during the historical period; hence, SLR in excess of the historical trend is the IPCC AR5 projected global mean sea level estimate (the absolute future sea level projection) minus the linearly extrapolated Honolulu tide gauge trend (a proxy for absolute historical sea level in Hawaii; Fig. 4). This excess SLR is the same for each island.
The Honolulu SLR trend is 1.50 ± 0.25 mm/year for the period 1905–2006, which spans the period of historical shoreline data in the study areas. The variance of the excess SLR is the sum of the variances of the IPCC projection and the Honolulu tide gauge projection.

Fig. 4: Monthly mean sea level at Honolulu Harbor between 1905 and 2006 is shown with the trend (thin black line) and 95 % confidence band (light gray band), and the IPCC AR5 RCP 8.5 sea level projection median (thick black line) and "likely" range (dark gray band).

2.4 Beach profiles

The US Geological Survey (USGS), in coordination with the University of Hawaii, conducted biannual surveys of cross-shore beach profiles during a 5-year study (1994–1999) of beaches on the islands of Oahu and Maui (Gibbs et al. 2001). University researchers have extended this survey to include biannual beach profiles over the period 2006–2008 at 35 locations on Oahu and 27 locations on Kauai (see Note 1). During each survey, specific morphologic features along the profile were recorded, such as the berm crest, high water, and the beach toe. The beach toe (Bauer and Allen 1995) is the base of the foreshore and is commonly used to demark the shoreline location on Hawaii beaches (e.g., Fletcher et al. 2003; Norcross et al. 2003a). The profiles at some locations do not extend seaward past what is typically defined as the depth of closure (DoC) (e.g., Hallermeier 1981) because of the presence of shallow fringing reefs. Here we follow Cowell and Kench (2000), who suggest that the intersection of the sandy profile with the reef platform is effectively the seaward extent of the active profile and is thus the DoC on reef-bottomed profiles. For a sandy bottom, the seaward extent of the active profile is taken to be the point at which the profiles from biannual surveys converge. The nearshore slope of the active profile, defined here as the slope between the seaward extent of the active profile and the beach toe, is estimated at each alongshore location. Histograms of the slopes (one histogram for each alongshore location) suggest that slopes are normally distributed. When more than one profile location is present within a study area, cubic splines are used for interpolation. Summary data for the profiles are provided in Table S2 of the online supplemental resource.

2.5 Historical shoreline change

Shoreline positions were extracted from high-resolution aerial photographs and NOAA topographic charts (T-sheets) by University of Hawaii researchers as part of the USGS National Assessment of Shoreline Change (Fletcher et al. 2013). Approximately shore-normal transects were cast 20 m apart in the alongshore direction, and the relative cross-shore distance from each shoreline to the offshore baseline was measured, creating a time series of shoreline positions at each alongshore location (refer to Fig. 1). Five to eleven historical shorelines were used in each of the study areas from 1900 to 2008. The Baldwin study area on the island of Maui used the fewest historical shorelines (five) because the data prior to 1975 were dropped to exclude the effects of sand mining that occurred until the early 1970s. Temporal and spatial data extents for each study area, along with ranges of data uncertainty, are provided in the online supplemental resource (Table S3).
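To make the excess-SLR bookkeeping of Sects. 2.2–2.3 concrete before turning to the trend fits, here is a minimal sketch in Python (ours, not the authors' code). The 0.75 m median for 2100 is an assumed midpoint of the 0.52–0.98 m likely range quoted in the introduction, and the AR5 baseline (1986–2005) is approximated by the 2005 origin used in this paper.

```python
# Minimal sketch of the excess-SLR term: projected sea level minus the
# extrapolated Honolulu tide gauge trend, both treated as normal
# distributions whose variances add (Sects. 2.2-2.3). The 0.75 m median
# is an illustrative assumption (midpoint of the 0.52-0.98 m range).
import numpy as np

# Assumed AR5 RCP 8.5 numbers for 2100.
ipcc_median, ipcc_lo, ipcc_hi = 0.75, 0.52, 0.98
# Sect. 2.2: sd is the average distance from the median to the likely limits.
ipcc_sd = 0.5 * ((ipcc_median - ipcc_lo) + (ipcc_hi - ipcc_median))

# Honolulu tide gauge trend (Sect. 2.3): 1.50 +/- 0.25 mm/year.
years = 2100 - 2005
hist_mean = 1.50e-3 * years        # extrapolated historical SLR (m)
hist_sd = 0.25e-3 * years

# Excess SLR: difference of two normals -> variances add.
excess_mean = ipcc_median - hist_mean
excess_sd = np.hypot(ipcc_sd, hist_sd)
print(f"excess SLR by 2100: {excess_mean:.2f} +/- {excess_sd:.2f} m")
# -> roughly 0.61 +/- 0.23 m with these assumed inputs
```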
The equation \(y(t) = b + r\left( {t - \bar{t}} \right)\) is fit to the N historical shoreline data points at each transect using weighted least squares (WLS) regression (e.g., Douglas and Crowell 2000). Here, b is the intercept, r is the rate (positive indicates accretion), and \(\bar{t}\) is the mean of historical survey times, which is used to condition matrices in the regression procedure. To reduce large fluctuations in rates among adjacent transects, rates are smoothed in the alongshore direction using a running [1 3 5 3 1] weighted average. The survey errors used in the WLS procedure (see Table S3, online resource) are calculated by the method in Romine et al. (2013) from seven types of data error. In order to be robust to data outliers, the extrapolated shoreline positions are given a generalized Student's t distribution (e.g., Davison 2003, p. 140) with N − 2 degrees of freedom, and the least-squares mean and standard deviation are used for the location and scale parameters, respectively.

2.6 Determining future hazard areas

To apply a simple model to a complex system, it is necessary to make some simplifying assumptions. We assume that there exists an equilibrium profile shape under constant forcing conditions (e.g., Fenneman 1902; Bruun 1962; Dean 1991). Storm and seasonal swells perturb the profile shape, while subsequent persistent wave conditions and sediment supply steer it back toward its median sea-state equilibrium. Hence, in the absence of any change in relative sea level, the beach profile can be thought to migrate seaward (or landward) when sediment is added (or lost) (Fig. 5) over multi-year to decadal timescales. In fringing reef environments, Muñóz-Pérez et al. (1999) used over 50 profiles from seven beaches to confirm that reef-fronted beaches can have an equilibrium shape; however, they caution that there is theoretically no equilibrium profile within a distance of about 10–30 times the depth at the reef edge. Since there are currently no observational studies validating this principle, and Hawaiian beaches typically exceed this distance, we assume that profiles can reach equilibrium on beaches not satisfying this condition.

Fig. 5: (a) Sediment loss in the absence of any sea level change causes the shoreline to retreat. Conversely, (b) sediment gain causes the shoreline to advance seaward.

In the presence of sea level rise alone (no sediment gain or loss), equilibrium profile theory assumes that beaches keep their general shape while readjusting to persistent wave conditions at elevated sea levels (Bruun 1962). The presence of a fringing reef challenges this assumption because heightened water level over the reef changes the amount of wave energy that impacts the beach. Although recent studies have made progress in understanding hydrodynamic flow over fringing reefs under potential climate-induced changes in storminess and sea level (e.g., Péquignet et al. 2014), these processes remain poorly understood. Thus, we make the simplifying assumption that the profile shape of reef-protected beaches remains constant as sea level rises. Our treatment here is similar to that of Yates et al. (2011), with differences that will be noted below. Shoreline change Δy_total (negative change indicates retreat) is the sum of the change due to net sediment availability, Δy_sed, and the change due to profile readjustment after a rise in relative mean sea level, Δy_SL.
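Stepping back briefly to the trend fit of Sect. 2.5, the per-transect WLS regression and alongshore rate smoothing can be sketched as follows (our illustration with synthetic data; the paper's seven-component error budget is replaced by a single assumed survey error per shoreline).

```python
# Minimal sketch of the Sect. 2.5 trend fit: weighted least squares on
# y(t) = b + r*(t - t_bar) at one transect, then a running [1 3 5 3 1]
# alongshore smoothing of the rates. Data and errors are synthetic.
import numpy as np

def wls_trend(t, y, sigma):
    """Return intercept b, rate r (m/yr), and their standard errors."""
    t_bar = np.mean(t)                                 # conditions the matrix
    X = np.column_stack([np.ones_like(t), t - t_bar])  # design matrix
    W = np.diag(1.0 / sigma**2)                        # WLS weights
    cov = np.linalg.inv(X.T @ W @ X)
    b, r = cov @ X.T @ W @ y
    return b, r, np.sqrt(np.diag(cov))

def smooth_rates(rates):
    """Running [1 3 5 3 1] weighted average along the shore."""
    w = np.array([1.0, 3.0, 5.0, 3.0, 1.0])
    padded = np.pad(rates, 2, mode="edge")
    return np.convolve(padded, w / w.sum(), mode="valid")

t = np.array([1912.0, 1949.0, 1975.0, 1988.0, 2006.0])
y = np.array([12.0, 8.5, 6.0, 5.5, 2.0])       # cross-shore position (m)
sigma = np.array([9.0, 8.0, 7.0, 3.0, 0.5])    # assumed survey errors (m)
b, r, se = wls_trend(t, y, sigma)
print(f"rate = {r:.3f} m/yr (negative indicates erosion)")
```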
In this decomposition, sediment availability includes all sources and sinks: cross-shore mechanisms (e.g., eolian transport, sediment lost or gained at the seaward edge of the active profile) and sediment changes due to spatial variations in alongshore transport, reef sediment production, dredging, nourishment, etc. It is helpful to regard the sea level adjustment Δy_SL as the sum of (1) Δy_SL_hist, the portion of the extrapolated historical change due to historical SLR, and (2) Δy_SL_ex, the change in response to excess SLR, i.e., SLR that exceeds the extrapolated historical trend of SLR (Fig. 6a). Similarly, the term Δy_sed can be expressed as the sum of (1) Δy_sed_hist, the extrapolated historical change due to sediment availability, (2) Δy_sed_SL, the change due to sediment availability caused by excess SLR, and (3) Δy_sed_CC, the change due to non-SLR influences on the sediment budget; these include all processes that may change with future climate change, such as wave climate, storm frequency and amplitude, and ENSO patterns.

Fig. 6: (a) In the schematic, increased SLR (in excess of historical trends) contributes to increased shoreline recession on an eroding beach. The ratio of sea-level-induced shoreline change to overall change increases from 2005 to 2100 due to varying acceleration in the sea level curve over time. Historical shoreline data (x marks with error bars) from one transect in the historically accreting Kailua (b) study area and one transect in the historically eroding Hauula (c, d) area are plotted against time, along with the extrapolated historical trend (no future increase in SLR; thin solid line) and probable estimates of future shoreline position (with future increase in SLR; thick solid line and shaded regions). In (b), the historically accreting beach is expected to begin retreating by 2050 due to increased SLR. The estimated pdf of shoreline position for year 2050 is shown as an inset in (c) and (d), with two-tailed confidence intervals in (c) and one-tailed confidence intervals in (d); each can be used to define erosion hazard areas depending on the planning objective. In (d), for example, there is a 95 % probability that the shoreline will be below the light dashed line in any given year.

The total change, Δy_total, is thus the sum of five terms: Δy_SL_hist, Δy_SL_ex, Δy_sed_hist, Δy_sed_SL, and Δy_sed_CC, in which the pair Δy_sed_hist + Δy_SL_hist comprises the historical change, Δy_hist. With this replacement, the total change at any given alongshore location becomes

$$\Delta y_{\mathrm{total}} = \Delta y_{\mathrm{hist}} + \Delta y_{\mathrm{SL\_ex}} + \Delta y_{\mathrm{sed\_SL}} + \Delta y_{\mathrm{sed\_CC}}$$ (2)

Substituting Δy_total = y(t_f) − y(t_0) then gives the shoreline location at future time t_f as

$$y(t_f) = y(t_0) + \Delta y_{\mathrm{hist}} + \Delta y_{\mathrm{SL\_ex}} + \Delta y_{\mathrm{sed\_SL}} + \Delta y_{\mathrm{sed\_CC}}$$ (3)

in which t_0 is a time origin chosen for convenience. In this study, we use \(\bar{t}\), the mean of historical survey times, as the time origin. If we assume that historical shoreline change follows a linear trend, and that the profile readjustment Δy_SL_ex follows the geometric adjustment outlined in Davidson-Arnott (2005) [in other words, that Δy_SL_ex is given by the right-hand side of Eq. 1], then Eq. (3) becomes
$$y(t_f) = y(t_0) + r(t_f - t_0) - \frac{S(t_f) - S_{\mathrm{hist}}(t_f)}{\tan \beta} + \Delta y_{\mathrm{sed\_SL}} + \Delta y_{\mathrm{sed\_CC}}$$ (4)

The first two terms, y(t_0) + r(t_f − t_0), are determined from the historical shoreline change model described in Sect. 2.5, and the average nearshore slope, tan β, is estimated from profile surveys (Sect. 2.4). The difference S(t_f) − S_hist(t_f) is the difference between predicted sea level and extrapolated historical sea level at future time t_f, as described in Sect. 2.3. The last two terms, Δy_sed_SL and Δy_sed_CC, are neglected in this study on the assumption that SLR will not significantly alter the sediment budget and that changes in wave climate and storm frequency will not affect it either; recent studies suggest that there will be no significant changes in the twenty-first-century North Pacific wave climate (Hemer et al. 2013; Wang et al. 2014). We include these terms as placeholders mainly to raise awareness that these other influences exist and warrant attention. Absent the last two terms, Eq. (4) is similar to Eq. (2) in Yates et al. (2011), except that here the R-DA model is used instead of the Bruun model, and all sediment is assumed to have a large enough grain size to remain within the active profile (P = 1). Our treatment of uncertainty also differs from Yates et al. (2011). A probability density function (pdf) of the total projected shoreline position, y(t_f), is estimated from the pdfs of each contributing unit (Fig. 7). Individual pdfs are first created for (1) the extrapolated historical shoreline position, y(t_0) + r(t_f − t_0), (2) the difference between the projected and extrapolated historical sea levels, S(t_f) − S_hist(t_f), and (3) the average profile slope, tan β. Combining the pdfs in Eq. (4) is performed numerically; the quotient pdf is calculated using Equation 3.2 in Curtiss (1941) and then convolved with the pdf of the historical extrapolation to produce the final pdf of y(t_f). From this final pdf, we obtain the mean and median values, as well as the quantiles y_ε = F⁻¹(ε), where F is the cumulative distribution function of the pdf for the projected shoreline. For example, Fig. 6b shows the contours of the mean and of the y_0.1 and y_0.9 quantiles (80 % confidence interval) of the projected shoreline change for each year between 2005 and 2100 at one historically accreting alongshore location (positive indicates advance); in this example, a retreat is projected by mid-century. Figure 6c shows the historical extrapolation, the modeled mean, and the 80, 90, and 95 % confidence intervals at one historically retreating alongshore location. For the same location as in Fig. 6c, Fig. 6d depicts y_0.8, y_0.9, and y_0.95, the positions at which, with 80, 90, and 95 % probabilities, respectively, the future shoreline will be landward of the contour line. Figure 8 shows the results for multiple alongshore locations (transects spaced 20 m apart) at specific times; the left column shows net shoreline position change relative to 2005 (negative indicates landward migration) for the years 2050 (black line) and 2100 (gray line), while the right column shows shoreline change rates for the historical time period (dashed line), 2050 (black solid line), and 2100 (gray solid line).
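The paper performs this combination analytically (quotient pdf via Curtiss 1941, then convolution). A Monte Carlo version, which converges to the same quantiles in the limit of many draws, is easier to sketch; all distribution parameters below are invented for one hypothetical transect, not taken from the study data.

```python
# Monte Carlo sketch of the Eq. (4) uncertainty propagation. The paper
# combines the component pdfs analytically (Curtiss 1941 quotient pdf,
# then convolution); sampling is an equivalent-in-the-limit shortcut.
# All moments are assumed values, not data from the study sites.
import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000
dt = 2050 - 2005

# (1) Extrapolated historical position: Student's t with N-2 dof,
#     located/scaled by the WLS mean and standard deviation (Sect. 2.5).
N = 7                                    # number of historical shorelines
rate, rate_sd = -0.15, 0.05              # m/yr, assumed
y_hist = rate * dt + (rate_sd * dt) * rng.standard_t(N - 2, n)

# (2) Excess SLR, normal (Sect. 2.3), and (3) nearshore slope, normal
#     (Sect. 2.4); slope clipped to stay physically positive.
excess_slr = rng.normal(0.17, 0.06, n)            # m by 2050, assumed
tan_beta = np.clip(rng.normal(0.05, 0.01, n), 1e-3, None)

y_2050 = y_hist - excess_slr / tan_beta           # Eq. (4), last terms = 0
lo, med, hi = np.quantile(y_2050, [0.1, 0.5, 0.9])
print(f"2050 change: median {med:.1f} m, 80% CI [{lo:.1f}, {hi:.1f}] m")
```

The quantiles y_0.1 and y_0.9 extracted from the sampled distribution play the role of the 80 % erosion hazard band plotted in Fig. 8.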
Fig. 7: For the year 2100 (t_f = 2100), the pdfs for (a) the difference between the projected and extrapolated historical sea levels, (b) the average profile slope, and (d) the extrapolated historical shoreline position relative to 2005 are combined to produce (e) the pdf for the total projected shoreline relative to 2005 at one transect location. In this example, projected accretion following historical trends (d) is buffered by the landward retreat component in response to increased SLR (c). The mean is depicted by the dark vertical line, and the median by the light line (which may not be visible when the median is nearly identical to the mean).

Fig. 8: Left column: shoreline change (positive for accretion) relative to 2005, shown with an 80 % confidence band at select locations for 2050 (dark solid line with gray-filled band) and 2100 (gray solid line with diagonally striped band). Right column: change rates, shown with 80 % confidence bands, illustrate similar behavior along each shore for the historical time period (dashed line with whiskers), the 2050 projection (dark solid line with gray-filled band), and the 2100 projection (gray solid line with diagonally striped band). Transects are spaced 20 m apart.

The probability-based approach facilitates the financial risk assessments needed for nearshore conservation or development. For example, in an area with a 70 % probability of erosion in 50 years, it is sensible to permit construction of a fence but not a residence. It is possible that shoreline managers will prefer erosion hazard zones based on a single level of confidence (e.g., the 95 % confidence interval) at a future time, overlaid on geographic layers (property TMKs, aerial photos, special management areas, etc.).

3 Results

The mean and 80 % confidence interval (bounded by y_0.1 and y_0.9) for projected net shoreline change are determined at each transect for the years 2050 and 2100, relative to 2005 (e.g., Fig. 8, left column). As mean and median values are similar (the pdfs are only slightly skewed), we report only the mean, using it as an indicator of the amplitude of projected change. Based on time series of projected shorelines, shoreline change rates at the study areas (Fig. 8, right column) were calculated for the years 2005 (historical), 2050, and 2100. Projected net shoreline change and change rates varied spatially within all study areas. As expected, in areas where sediment gain overpowers any profile readjustment due to SLR, such as in Kailua and portions of Kaanapali, shoreline accretion continues, though at a reduced rate. Figure 9 shows areas exposed to erosion hazards (defined as the 80 % confidence interval) for the years 2050 (yellow) and 2100 (red) at Kaanapali, Maui, projected back into map coordinates and displayed atop a vertical aerial photograph. Mapped hazard areas are truncated at their seaward extent by the current shoreline location for improved usability. Uncertainty values are large, as expected, providing only a broad assessment of potential erosion hazard.

Fig. 9: Example of an erosion hazard area (80 % confidence interval) shown overlain on an aerial photograph, with a layer displaying public infrastructure (e.g., roads, parks).

The distributions of projected shoreline migration amplitude (one amplitude at each transect) within each study site (Fig. 10) indicate that shoreline recession dominates the 2050 and 2100 projections in all study areas except Kailua.
Kailua shows an average seaward migration of 7.1 ± 3.2 m by 2050 that declines to 4.9 ± 6.8 m by 2100. The Ehukai and Sunset, Baldwin, and Kaanapali locations show the most dispersion in migration, as alternating cells of retreat and accretion exist within each study area. Shoreline change rates also indicate dominant retreat historically (Fig. 11), except for Kailua.

Fig. 10: Box and whisker plots show the distribution of net shoreline change for the time periods 2005–2050 (light boxes) and 2005–2100 (dark boxes) at each study site. Box widths indicate the first and third quartiles, i.e., 50 % of the transects within a study area reside within the box limits. Vertical lines show the mean (light-colored line) and median (black line) of the net change estimates. Whiskers indicate the first and ninth deciles, containing 80 % of transects. Net shoreline recession between 2005 and 2050 is the dominant trend at all study sites except Kailua. Shoreline projections indicate that recession will continue through 2100.

Fig. 11: Shoreline change rates become more recessional over time as a result of modeled recession in response to increased rates of SLR. Shoreline advance at most Kailua transects reverses to recession by 2100.

The alongshore averages (the mean of all individual means) of projected net shoreline migration and shoreline change rates for each study area are given in Tables 1 and 2, respectively, along with the percentage of transects that indicate retreat (negative change rate; Table 2). Because of the small spacing between transects (20 m), we follow Hapke et al. (2010) and Romine et al. (2013) by using the effective number of independent observations (Bayley and Hammersley 1946, Eq. 1) to adjust for correlated data in the computation of the alongshore means (a brief sketch of this correction appears below). The average net change and average rate over all transects in the ten study sites indicate less severe retreat compared to individual study areas except Kailua. However, these averages are not likely indicative of Hawaii beaches in general because the anomalous accretion in the Kailua area, which comprises roughly 20 percent of the combined study area transects, heavily influences the overall averages. To reduce this bias, combined averages excluding the Kailua area are given in Tables 1 and 2.

Table 1: Mean projected net shoreline change (±std) and range of net change for each study area, based on historical extrapolation only, additional SLR only, and the total net change.

Table 2: Mean shoreline change rates (±std) for the historical period, 2050, and 2100 at each study site, and percentage of retreating shorelines at each study site.

For comparison, Table 1 includes alongshore averages and the range of net change based on (1) historical extrapolation only, (2) additional SLR only, and (3) the total change. Results, excluding Kailua, indicate that the average amount of shoreline recession roughly doubles by 2050 with increased SLR, compared to historical extrapolation alone. By 2100, accelerated SLR results in nearly 2.5 times the amount of shoreline recession based on historical extrapolation alone. The standard deviations (stds) for the pdfs of net shoreline change vary between sites (Fig. 12) and range from 1.9 to 53.4 m in 2050 and 4.8 to 96.0 m in 2100. The median standard deviation over all locations is 11.0 m in 2050 and 20.7 m in 2100.
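The Bayley–Hammersley adjustment mentioned above can be sketched as follows. The empirical-autocorrelation estimate here is one common reading of that correction, not necessarily the authors' exact recipe, and the alongshore rates are synthetic.

```python
# Minimal sketch of the alongshore averaging step: the effective number
# of independent observations (Bayley and Hammersley 1946) shrinks the
# sample size used for the standard error of the alongshore mean when
# neighboring 20-m transects are correlated. Rates are synthetic.
import numpy as np

def effective_n(x):
    """n_eff = n / (1 + 2 * sum_k (1 - k/n) * rho_k), rho_k = lag-k autocorr."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    d = x - x.mean()
    acov = np.correlate(d, d, mode="full")[n - 1:] / n   # lags 0..n-1
    rho = acov / acov[0]
    k = np.arange(1, n)
    return n / (1.0 + 2.0 * np.sum((1.0 - k / n) * rho[1:]))

rng = np.random.default_rng(1)
# Synthetic alongshore rates with short-range AR(1) correlation.
rates = np.zeros(200)
for i in range(1, 200):
    rates[i] = 0.8 * rates[i - 1] + rng.normal(0, 0.05)
rates -= 0.3                               # mean erosion of -0.3 m/yr

n_eff = effective_n(rates)
se = rates.std(ddof=1) / np.sqrt(n_eff)    # honest standard error of mean
print(f"n = {len(rates)}, effective n = {n_eff:.1f}, SE = {se:.3f} m/yr")
```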
We use the absolute value of the coefficient of variation (CV) to compare the ratio of the standard deviation (dispersion) to the mean (magnitude) (Fig. 13). The absolute value is appropriate because we are more interested in the ratio of the dispersion to the mean than in whether the mean is negative (landward) or positive (seaward). Areas with high seasonal fluctuations that likely mask any underlying trend, such as Ehukai and Sunset, give larger CV values, whereas areas such as Hauula, where erosive trends are substantial and data errors are relatively small, produce smaller CV values.

Fig. 12: Distributions of the standard deviation for each projected shoreline show that areas with large seasonal fluctuations (e.g., Ehukai and Sunset), large errors in profile slope (e.g., Lydgate), and short time series of historical data (e.g., Baldwin) have less precise projections.

Fig. 13: Box plots of CV values for each study area, displayed to compare the ratio of dispersion to magnitude. Areas such as Ehukai and Sunset have large CV values, where noisy shorelines resulting from large Pacific NW swells create high uncertainty compared to relatively small predicted net change (any trends are likely masked by the noise in the data). Substantial trends in comparison with relatively small data errors generate smaller CV values. The mean of the CV values is not shown in the box plot because it is not well behaved due to outlying values.

4 Discussion

4.1 Sources of uncertainty

In the extrapolation of historical trends, areas with shorter time series, such as Baldwin and Kailua, have greater uncertainty in the long-term rate. Also, since shoreline aspect is related to wave climate, large seasonal fluctuations cause high uncertainty in historical shoreline models along north- and west-facing shorelines such as Sunset and Ehukai, Baldwin, and Kaanapali (refer to Fig. 3d). Alternative historical shoreline change models may improve predictions. Methods that include data at neighboring transects (instead of treating each transect independently), such as basis function methods (Frazer et al. 2009; Anderson and Frazer 2014) and regularization methods (Anderson et al. 2014), have been shown to improve long-term shoreline change modeling while slightly reducing the uncertainty in predictions. In the R-DA model, uncertainty in the nearshore profile slope and in the SLR projections affects projected shorelines. Since shoreline response to relative SLR due to vertical land motion is accounted for in the historical trend, the uncertainty in the IPCC SLR projection is the same for all study sites. Uncertainty in the nearshore slope, however, does vary significantly between sites. At Lydgate, mid-Kaanapali, and the southern portion of Kailua, a shallow sand–reef intersection determines the seaward extent of the active profile. This, in combination with fluctuations in the beach toe position due to seasonal wave fluctuations at Kaanapali and Kailua and tradewind fluctuations at Lydgate, causes high variability in the calculated nearshore slope. Thus, locations with both shallow reef platforms and unstable toe positions have larger uncertainty. This study included the simplifying assumption that the profile shape of a beach adjacent to a reef platform would remain constant. It is likely, however, that changes in the wave energy reaching the beach will alter the slope of the profile. More study is needed of how beach profiles that terminate on reef platforms respond to sea level rise.
Finally, it is prudent to keep in mind that in addition to the sources of uncertainty in model inputs discussed above, there is additional uncertainty in the use of the R-DA model itself, and in neglecting changes in sediment transport patterns resulting from SLR and other climate-change-related processes.

4.2 Assessment of the IPCC SLR scenario

The IPCC AR5 RCP 8.5 scenario shows the largest acceleration in SLR rate during the middle of the present century. In most areas (~80 %), projected net shoreline change (Fig. 10) for 2050 ranges between 1 and 24 m of landward migration, excluding the Kailua study area (discussed below). As a point of reference, most historical shoreline data errors range between 7 and 10 m. By 2100, projected net change increases to roughly 4–60 m of recession. It is important to keep in mind that these results do not include changes in sediment transport patterns due to changes in hydrodynamics resulting from increased water levels interacting with the fringing reef complex. Such changes are likely to reshape the equilibrium beach profile and lead to departures from the historical trends that are an important simplifying assumption of this model. Understanding of how these processes will affect future shorelines can be improved through a combination of hydrodynamic modeling and field monitoring at key representative beach sites.

4.3 Accretion at the reef-fringed pocket beach of Kailua, Oahu

The Kailua area was selected for its anomalously steady accretion over the last seven decades (Hwang 1981; Sea Engineering 1988; Norcross et al. 2003a). We find that Kailua Beach will continue accreting up to 2050 (Fig. 11), after which most of the beach will turn to an erosive state as SLR dominates, as indicated by the shoreline change rates estimated for 2100. However, the net migration projected for 2100 is predominantly positive (seaward of the current shoreline; Figs. 8, 10) because, despite its erosive future behavior, the beach is not expected to erode past its present location by that time. Since Kailua Bay is bounded by basaltic headlands and there is a lack of modern sediment production (Harney et al. 2000), it is speculated that the sand-filled paleochannel that bisects the fringing reef in the middle of the bay acts as a conduit, supplying the beach with sediment from offshore. An example of a sand-filled paleochannel is shown in Fig. 3b. Since sand-filled channels such as this may often be the only offshore sand source for otherwise reef-fronted beaches, the effects of heightened sea level on sediment movement through channels warrant more research.

4.4 Spatial variation and limitations of linear extrapolation

The range of migration values (width of the boxes in Fig. 10) increases with time in all study areas. Extrapolating the historical shoreline change model into the future inherently assumes that the rate of sediment gain or loss is constant in time. Therefore, if a portion of a beach has historically lost sediment, but another portion of the same beach is, for some reason, gaining sediment, then over time the two beach positions will continue to grow farther apart in the cross-shore direction. This continually diverging behavior is unrealistic over long time spans. Yates et al. (2011) used expert knowledge to determine sections of the shoreline in which the alongshore average of the predicted shoreline could be used to represent the defined section.
Others have taken a more objective approach, using an information criterion (e.g., Akaike, Schwarz) with alongshore basis functions (Frazer et al. 2009; Anderson and Frazer 2014) or regularization (Anderson et al. 2014) to reduce the high-frequency fluctuations in rates and projections alongshore. The latter methods reduce unrealistic extremes at high spatial frequencies while allowing long-wavelength variations in shoreline behavior. However, both methods rely on a time-linear sediment flux model, which will inevitably result in amplification of the alongshore variations over time (Anderson and Frazer 2014). So, while the magnitudes of rates are adjusted over time to reflect an increase in SLR, the range of the rates within an area will remain fairly consistent over time (Fig. 11). Conversely, because the sediment gain or loss rate at particular transects is kept constant over time, the alongshore gradients in net migration must increase with time, depending on the historical shoreline change model employed (Fig. 10).

4.5 Sediment transport and SLR

In models based on the south Molokai, Hawaii coast, Grady et al. (2013) found that future SLR and climate-related reef degradation result in differing amounts of wave energy flux, and thus alongshore sediment transport, over reefs of differing width. This indicates that reefs of varying width will experience alongshore changes in shoreline erosion and accretion under SLR. But how will increased SLR affect the highly variable geomorphology of Hawaiian beaches? Will reef-fronted beaches (historically not exposed to high wave energy) experience enhanced recession compared to beaches adjacent to sand-filled paleochannels (historically exposed to high wave energy)? Results of hydrodynamic modeling for Molokai, Hawaii (Storlazzi et al. 2011) suggest that a 0.5–1.0-m SLR would increase coastal erosion and intensify other physical processes (e.g., sediment resuspension, mixing, and circulation). Climate-induced coral degradation may also increase the wave energy reaching reef-fronted beaches as dissipation over the fringing reef is reduced (Sheppard et al. 2005). Thus, it is likely that increased water levels over reef flats, in conjunction with potential reef degradation in Hawaii, will allow more wave energy to mobilize beach sand. Sand deposited beyond the outer reef edge during storm or high-swell events is subsequently prevented from returning to the beach by steep drop-offs at the outer reef edge. As noted above, Hawaii beaches generally lack any terrigenous sediment source and are composed mainly of ancient marine sediment grains, suggesting that modern sediment production does not sustain the beaches and therefore cannot be expected to mitigate the effects of SLR. Thus, it is likely that future SLR will result in a net loss of beach sand and exacerbate coastal erosion of Hawaiian reef-fronted beaches. The results of this study may then underestimate future recession. Portions of north- and west-facing shores with deeper, more elongated, more irregular reefs may not experience as much erosion as other areas because the relative change in depth over the reefs will be less. In their shoreline change analysis of the east coast of Florida, Houston and Dean (2014) find that an additional sediment source is required to account for chronic shoreline advance. They attribute the source to onshore sediment transport from beyond the DoC, probably as a result of episodic storms.
Similarly, it is possible that the few Hawaii beaches fronted by paleochannels and/or deep reefs may gain sediment from offshore, thus buffering the potentially erosive effects of SLR. Previous historical shoreline analysis (Fletcher et al. 2013) did not indicate less erosion of beaches fronted by deeper reefs, but such shorelines are typically associated with increased seasonal fluctuation that can mask the underlying long-term trend. The potential landward movement of offshore sediment, at both beaches adjacent to paleochannels and reef-fronted beaches, requires further study.

5 Conclusions

We analyzed net shoreline change at ten study sites across Hawaii for the years 2050 and 2100 under the IPCC AR5 RCP 8.5 SLR scenario. By combining historical extrapolation and the R-DA model for shoreline response to SLR, we produced probabilistic estimates of coastal erosion hazard areas. Erosion hazard zones can then be overlaid on GIS layers of interest, and statistical analyses of coastal exposure to projected erosion hazards and shoreline change rates can be performed. These probabilistic model results provide a long-range management tool by identifying coastal lands and resources that are exposed to erosion under rising sea levels. Approximately 92 and 96 % of the shorelines in the study area (excluding Kailua) are expected to be retreating by 2050 and 2100, respectively. Due to increasing SLR, the average shoreline recession by 2050 is nearly twice the historical extrapolation, and by 2100 it is nearly 2.5 times the historical extrapolation. Most projections (~80 %) range between 1–24 m of landward movement by 2050 and 4–60 m by 2100, except for Kailua. The average net shoreline change projected for Kailua is 7.1 ± 3.2 m of shoreline advance by 2050, reducing to 4.9 ± 6.8 m by 2100. Compared with net shoreline change projections based on historical extrapolation alone, projections that include excess SLR show an average of about 6 m of additional shoreline recession by 2050 (relative to 2005) and 20 m by 2100 over all beaches in the study. The average standard deviation for individual projections at all study sites is roughly 13 m for 2050 and 23 m for 2100. North- and west-facing beaches show increased uncertainty in erosion hazard projections as a result of Pacific NW swells in winter. The Baldwin area, in which the historical data series was truncated to reduce the effects of sand mining, also has larger uncertainty. Areas fronted by shallow reef flats that are susceptible to fluctuations in beach toe position due to NW swells or tradewinds have high inherent uncertainty due to the instability of the nearshore slope. Increased water levels over fringing reefs and potential climate-related reef degradation will likely cause an increase in the wave energy reaching the beach, which will mobilize more sediment. Sediment lost to offshore deposits beyond the outer reef drop-off will be isolated from the active beach system, causing increased coastal erosion, especially given the lack of modern beach sediment production in Hawaii. Beaches fronted by deep or minimal reef cover may be able to keep pace with SLR. More studies of potential offshore sediment sources, either directly across the DoC or through deep, sand-filled paleochannels, are warranted.

Notes

1. Cross-shore elevations of the beach were collected by University of Hawaii researchers using the same methods described in Gibbs et al. (2001). The University of Hawaii Coastal Geology Group provided the raw data.
Chronic erosion dominates the sandy beaches of Hawaiʻi, causing beach loss as it damages homes, infrastructure and critical habitat. Researchers have long understood that global sea level rise will affect the rate of coastal erosion. However, new research from scientists at UH Mānoa and the state Department of Land and Natural Resources brings into clearer focus just how dramatically Hawaiʻi's beaches might change. For the study, published this week in Natural Hazards, the research team developed a simple model to assess future erosion hazards under higher sea levels, taking into account historical changes of Hawaiʻi shorelines and the projected acceleration of sea level rise reported by the Intergovernmental Panel on Climate Change (IPCC). The results indicate that coastal erosion of Hawaiʻi's beaches may double by mid-century. Like the majority of Hawaiʻi's sandy beaches, most shorelines at the 10 study sites on Kauaʻi, Oʻahu and Maui are currently retreating. If these beaches were to follow current trends, an average of 20 to 40 feet of shoreline recession would be expected by 2050 and 2100, respectively. "When we modeled future shoreline change with the increased rates of sea level rise (SLR) projected under the IPCC's 'business as usual' scenario, we found that increased SLR causes an average 16 to 20 feet of additional shoreline retreat by 2050, and an average of nearly 60 feet of additional retreat by 2100," said Tiffany Anderson, lead author and post-doctoral researcher at the UHM School of Ocean and Earth Science and Technology. "This means that the average amount of shoreline recession roughly doubles by 2050 with increased SLR, compared to historical extrapolation alone. By 2100, it is nearly 2.5 times the historical extrapolation. Further, our results indicate that approximately 92% and 96% of the shorelines will be retreating by 2050 and 2100, respectively, except at Kailua, Oʻahu, which is projected to begin retreating by mid-century." The model accounts for accretion of sand onto beaches and long-term sediment processes in making projections of future shoreline position. As part of ongoing research, the resulting erosion hazard zones are overlain on aerial photos and other geographic layers in a geographic information system to provide a tool for identifying resources, infrastructure and property exposed to future coastal erosion. "This study demonstrates a methodology that can be used by many shoreline communities to assess their exposure to coastal erosion resulting from the climate crisis," said Chip Fletcher, Associate Dean at the UHM School of Ocean and Earth Science and Technology and co-author of the paper. Mapping historical shoreline change provides useful data for assessing exposure to future erosion hazards, even if the rate of sea level rise changes in the future. The predicted increase in erosion will threaten thousands of homes, many miles of roadway and other assets in Hawaiʻi. Globally, the asset exposure to erosion is enormous. "With these new results, government agencies can begin to develop adaptation strategies, including new policies, for safely developing the shoreline," said Anderson. To further improve the estimates of climate impacts, the next step for the team of researchers will be to combine the new model with assessments of increased flooding by waves.
10.1007/s11069-015-1698-6
Computer
Artificial soft surface autonomously mimics shapes of nature
Xiaoyue Ni, A dynamically reprogrammable metasurface with self-evolving shape morphing, Nature (2022). DOI: 10.1038/s41586-022-05061-w. www.nature.com/articles/s41586-022-05061-w Journal information: Nature
https://dx.doi.org/10.1038/s41586-022-05061-w
https://techxplore.com/news/2022-09-artificial-soft-surface-autonomously-mimics.html
Abstract

Dynamic shape-morphing soft materials systems are ubiquitous in living organisms; they are also of rapidly increasing relevance to emerging technologies in soft machines 1,2,3, flexible electronics 4,5 and smart medicines 6. Soft matter equipped with responsive components can switch between designed shapes or structures, but cannot support the types of dynamic morphing capabilities needed to reproduce natural, continuous processes of interest for many applications 7–24. Challenges lie in the development of schemes to reprogram target shapes after fabrication, especially when complexities associated with the operating physics and disturbances from the environment preclude the use of deterministic theoretical models to guide inverse design and control strategies 25–30. Here we present a mechanical metasurface constructed from a matrix of filamentary metal traces, driven by reprogrammable, distributed Lorentz forces that follow from the passage of electrical currents in the presence of a static magnetic field. The resulting system demonstrates complex, dynamic morphing capabilities with response times within 0.1 second. Implementing an in situ stereo-imaging feedback strategy with a digitally controlled actuation scheme guided by an optimization algorithm yields surfaces that can follow a self-evolving inverse design to morph into a wide range of three-dimensional target shapes with high precision, including an ability to morph against extrinsic or intrinsic perturbations. These concepts support a data-driven approach to the design of dynamic soft matter, with many unique characteristics.

Main

Soft matter that can dynamically reconfigure its shape in response to interactions with the environment or to perceived information is a thriving area of research 31. Pioneering studies rely on responsive materials (for example, liquid crystal elastomers 8,9, dielectric elastomers 10, responsive hydrogels 11,12,13 and others 14) and multimaterial structures 7,15 to enable large deformation, but they face challenges in implementing fast control of refined structures. The design of a shape-morphing process usually requires a prerequisite modelling effort that is programmed into the fabrication process, and the result is therefore hard to reprogram on-the-fly (for example, 3D printing 7,11,19,24,30, magnetization 19,32, laser or water-jet cutting 26,27,33, and mechanical buckling 25). The desire to shift shapes among a number of configurations invites investigations of various architectures and programmable stimuli (for example, temperature 8, light 34,35, magnetic field 20,36, electric field 10 and Lorentz-force actuation 22,23,37). Traditional inverse design of the input–output relationships in the resulting non-linear and high-dimensional systems can, however, lead to difficulties in establishing analytical solutions or to problems of high computational cost. Also, existing computer-aided methods usually neglect imperfections, damage, or coupling between the system and an unforeseen environment. Incorporating instant feedback into the morphing process is necessary for the deployment scheme to precisely account for specific, multifunctional or time-varying requirements 38.
Programmable electromagnetic actuation A materials architecture consisting of a mesh of optimized, planar conductive features operating in a magnetic field and with programmable control over distributions of electrical current, as introduced here, presents an intriguing set of opportunities. The metasurface takes the form of interconnected, serpentine-shaped beams that consist of a thin conductive layer of gold (Au, thickness h Au = 0.3 µm, width b Au = 130 µm) encapsulated by polyimide (PI, thickness h PI = 7.5 μm, width b PI = 160 μm) (see Methods section ‘Sample fabrication’, Supplementary Fig. 1 and Supplementary Note 1 for details). The intersections of the beams form an N × M mesh as shown in Fig. 1a ( N = M = 4, sample size L = W = 18.0 mm, column and row serpentine beam length L N / M = 3.60 mm). A tailored design ensures sufficiently large, fast and reversible out-of-plane deformation ( u / L ≈ 30%) (in-plane deformation less than 0.01 L ; response time less than 0.07 s) of the serpentine beam, driven by a modest electric current ( I < 27.5 mA) in an approximately uniform magnetic field B (magnitude B = 224 ± 16 mT) (see Extended Data Fig. 1 , Supplementary Figs. 2 – 6 and Supplementary Notes 2 – 5 for details). An analytical model validated by experiment can be used to guide design choices for a tunable electromagnetic response in a broad range of magnetic field strengths (for example, B reduced to 25 mT; see Extended Data Fig. 2 and Supplementary Note 3.1 ). Figure 1b shows that independent voltages ( V = { V j }) of size 2( N + M ) applied to the peripheral ports define the distribution of current density ( J ) in the conductive network (see Methods section ‘Digital control’ and Supplementary Fig. 7 for details) and therefore control the Lorentz force, F EM = J × B . The spatially distributed actuation F EM ( J ) determines the local, out-of-plane ( Z direction) deformations ( u = { u i }, where u i is the displacement of the i th node) of the sample in a magnetic field B aligned with its diagonal, enabling a large set of accessible three-dimensional (3D) shapes from the same precursor structure. Fig. 1: Mechanical metasurfaces driven by reprogrammable electromagnetic actuation. a , Schematic illustration (enlarged view) of a representative square mesh sample constructed from the serpentine beams consisting of thin PI and Au layers. b , Schematic illustration of a 4 × 4 sample placed in a magnetic field (in-plane with the sample in a diagonal direction). Portal voltages define the current density distribution ( J ) in the sample and hence control the local Lorentz-force actuation. c , FEA provides a linear-model approximation of the nodal displacement in response to the portal voltage input for the 4 × 4 sample. Experimental characterization using a side camera agrees with the FEA prediction. d , FEA and experimental results of a 4 × 4 and 8 × 8 sample morphing into four target abstract shape-shifting processes with control of the instantaneous velocity and acceleration of the dynamics. Scale bars, 5 mm. Full size image Model-driven inverse design The unusual structure and material design enables the system to adopt an approximate, linearized model, such that the nodal displacement response to the input voltages is as follows: $${u}_{i}=\mathop{\sum }\limits_{j=1}^{2(N+M)}{C}_{ij}{V}_{j},{\rm{for}}\,i=1,\ldots ,N\times M,$$ (1) where the coupling matrix C = { C ij } fully characterizes the electro-magneto-mechanical system. 
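To make the linearized response concrete, here is a minimal numpy sketch of equation (1) for a hypothetical 4 × 4 mesh, together with an unconstrained least-squares pseudo-inverse that gives a first guess at the port voltages for a target shape. The coupling matrix below is random and for illustration only; in the paper, C is obtained by linear regression of FEA results and validated experimentally, and the actual inverse design adds a maximum-current constraint.

```python
import numpy as np

# Minimal sketch of the linearized model u = C V (equation (1)) for a
# hypothetical 4 x 4 mesh. The coupling matrix here is random; in the
# paper, C is fitted by linear regression of FEA results.
N = M = 4
n_nodes = N * M                 # N x M nodal displacements u_i
n_ports = 2 * (N + M)           # 2(N + M) independent port voltages V_j

rng = np.random.default_rng(0)
C = rng.normal(scale=0.1, size=(n_nodes, n_ports))  # assumed units: mm per V

V = rng.uniform(0.0, 4.0, size=n_ports)   # port voltages in the 0-4 V range
u = C @ V                                 # predicted out-of-plane displacements

# Unconstrained least-squares inverse design: voltages that best reproduce
# a target shape u*; the paper's optimization adds a maximum-current limit.
u_target = rng.normal(scale=0.5, size=n_nodes)
V_guess, *_ = np.linalg.lstsq(C, u_target, rcond=None)
print(np.allclose(C @ V_guess, u_target))  # True here, since C is square
```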
Figure 1c shows the finite-element analysis (FEA) and the experimental characterization of the coupling coefficients C ij for representative nodes in the actuation range of 0–4 V for the 4 × 4 sample in the magnetic setup. Linear regression of the FEA results predicts C . The analytical model and the FEA studies, together with experimental validation, provide a scaling law of the coefficients as C ij ∝ ( BLH 2 b Au h Au )/( E PI b PI h PI 3 ρ Au ) (where H is the serpentine beam width, E PI is the PI Young’s modulus and ρ Au is the Au electrical resistivity; see Supplementary Figs. 8 and 9 and Supplementary Notes 3.2 and 3.3 for details). Following this linear approximation, a model-driven approach attempts to zero the errors, e i ( V )=( u i ( V ) − u i * )/ L (the deviation of the output deformation, u i ( V ), from the target, u i * , normalized by the system size L ), to optimize the voltages for the precursor surfaces to deform to a target shape. Given a convex problem with linear target functions and constraints, a gradient-descent-based algorithm iterates over V to minimize a loss function, f ( V ) = ∑ i e i 2 ( V ) with a maximum-current constraint (see Methods section ‘Optimization algorithm’ and Supplementary Note 6 for details). The linearized model-driven approach yields a prediction for V within 0.01 s. The same approach driven by numerical methods (for example, FEA) without linearization is not possible because of unaffordable computational costs (around 10 days using a workstation with 40-core, 2.4 GHz CPU and 64 GB memory). Figure 1d shows FEA and experimental results of an inverse-designed, continuous shape morphing of a 4 × 4 and an 8 × 8 sample ( L = W = 22.4 mm, L N / M = 2.48 mm, see Supplementary Note 7.1 for a detailed discussion of scalability). The process consists of four phases: rising up, moving around, splitting and oscillating, with a prescribed control of the instantaneous velocity and acceleration of the dynamics (Supplementary Video 1 , Supplementary Figs. 10 – 13 and Supplementary Note 8 ). In addition to the abstract shapes, the reprogrammable metasurface demonstrates an ability to reproduce dynamic processes in nature that involve a temporal series of complex shapes, provided with the inversely designed current distributions. Figure 2a shows an array of eight serpentine beams ( L = 10.4 mm, W = 20.6 mm, L N = 5.2 mm, L M = 2.52 mm, Supplementary Fig. 14 and Supplementary Note 9 ) morphing into the two-dimensional profile of a droplet dripping from a nozzle (see Methods section ‘Target shapes of the droplets’ and Supplementary Fig. 15 ). Shapes I–III describe the growth of a pendant drop to its critical volume. Shapes IV–V capture the subsequent pinch-off process. Figure 2b presents the 4 × 4 and 8 × 8 samples simulating the 3D surface of a droplet falling onto a rigid surface in five stages: hitting the surface, spreading out, bouncing back, vibrating and stabilizing (see Methods section ‘Target shapes of the droplets’, Supplementary Video 2 and Supplementary Figs. 16 – 19 ). Numerical analysis further illustrates that the mesh structure can morph into an extensive set of target shapes (Supplementary Figs. 20 – 27 and Supplementary Notes 8 , 10 and 11 ). Increasing the number of control inputs and introducing a time-varying magnetic field or a field gradient enhance the range of target shapes that can be morphed with sufficient accuracy (Extended Data Fig. 3 , Supplementary Figs. 28 and 29 , and Supplementary Notes 7 and 12 ). Fig. 
2: Model-driven inverse design of the metasurfaces for dynamic, complex shape morphing. a , FEA and experimental results of an array of eight serpentine beams morphing into the growth and pinch-off of a droplet dripping from a nozzle. Scale bars, 2.5 mm. b , FEA and experimental results of a 4 × 4 and an 8 × 8 sample morphing into the dynamic process of a droplet hitting a solid surface, spreading out, bouncing back, vibrating and stabilizing. Scale bars, 5 mm. The target shapes in a were reconstructed with permission from the images in Fig. 3 in ref. 41 (The American Physical Society). The target shapes in b were reconstructed with permission from the frames in supplementary video 1 in ref. 42 (Elsevier). Full size image The linearized model-driven approach accomplishes an inverse design when a modest error from the non-linearity is tolerable. Extending the model-driven approach to include non-linearity is challenging owing to the large computational expense (Supplementary Note 13 ) or difficulties in establishing analytical solutions. The open-loop model-based inverse design has constraints on the design space and cannot account for non-ideal factors, such as environmental changes or defects in the sample. The existing limitations motivate the development of sensing feedback for a closed-loop self-evolving inverse design approach. Experiment-driven self-evolving process Figure 3a illustrates an experiment-driven process in comparison with the linearized model-driven process. Whereas the model-driven route relies on the presumption of a linear and stationary model, the experimental method takes the in situ measurement of the system output and feeds the difference between the current state and the target state for actuation regulation. In this work, a custom-built stereo-imaging setup using two webcams enables a 3D reconstruction of the nodal displacement at a rate of 30 frames per second, with a displacement resolution of around 0.006 mm and a measurement uncertainty of ±0.055 mm (see details of 3D imaging in Methods , Supplementary Fig. 30 and Supplementary Note 14 ). After each update of the actuation ( V ), the real-time imaging provides an in situ nodal displacement error analysis. An optimization algorithm (the same one as used in the model-driven approach but wrapping the 3D imaging process) performs the experimental iterations over V to minimize f ( V ). For a 4 × 4 sample morphing into a representative target shape ( f ( V = 0 ) = 0.05–0.35), the optimization process takes 5–15 iterations (Extended Data Fig. 4a–c ). Each feedback control cycle in the current setup takes around 0.25 s due mainly to the time overhead from the image processing algorithm but this time is ultimately limited by the mechanical response time (which is less than 0.1 s) (Supplementary Table 1 and Supplementary Note 6 ). A hybrid method, taking a model-driven prediction as the initial input, reduces the number of iterations to around three. The experiment-driven process opens opportunities for the metasurface to self-evolve to target shapes without any previous knowledge of the system (Supplementary Video 3 ). Figure 3b–d and Extended Data Fig. 5 provide a quantitative comparison between the model-driven and experiment-driven morphing results from the same 4 × 4 precursor, targeting representative shapes (Supplementary Video 4 , Supplementary Figs. 31 – 33 and Supplementary Note 8 ). The resulting errors from the model-driven approach follow a wide (over ±5%), mostly skewed distribution (Fig. 
3d , considering 441 points from the interpolated 3D surface; Supplementary Note 14 ). The experiment-driven approach, accounting for the subtle non-linear deviation, yields a relatively narrow (±2%), symmetric error distribution. The dominant sources of errors are the discreteness in the input voltages and the uncertainties associated with the 3D imaging (Extended Data Fig. 6 and Supplementary Note 14 ). Experimental noise also adds complexity to the error function and, when pronounced, requires global optimization solvers (Extended Data Fig. 4d–f and Supplementary Note 6 ). Fig. 3: The experiment-driven self-evolving process in comparison with the model-driven approach. a , A flow diagram of the model-driven inverse design approach (top, blue) and an experiment-driven self-evolving process enabled by in situ 3D imaging feedback and a gradient-descent-based optimization algorithm (bottom, red). b , The target abstract shape and optical image of the experiment-driven morphing result of the 4 × 4 sample. c , 3D reconstructed surfaces overlaid with contour plots of the minimized errors. d , Histogram plot of the minimized errors for model-driven and experiment-driven outputs. Scale bars, 5 mm. Full size image The experiment-driven process works as a physical simulation to accommodate appreciable non-linearity without a substantial increase in the computational cost. Figure 4a introduces a 2 × 2 sample ( L = W = 25.0 mm, L N / M = 10.25 mm) morphing into the same target shape in Fig. 3b . Centred in the same magnetic setup, the sample shows an amplified non-linear mechanical behaviour in response to input voltages due to the reduced arc length of each serpentine beam (Extended Data Fig. 7 and Supplementary Note 15 ). The model-driven approach based on the assumption of a linear system results in an absolute maximum error of around 8%. The experiment-driven approach achieves a more accurate morphing result in around 20 iterations with absolute errors below 1%. Fig. 4: Self-evolving shape morphing against extrinsic or intrinsic perturbations. a – d , Experimental results of a 2 × 2 sample ( a ) and a 4 × 4 sample ( b – d ) morphing into the same target shape (Fig. 3b ) via model-driven and experiment-driven processes. A modified serpentine design that amplifies the non-linearity of the voltage-driven deformation ( a ), and the introduction of an extrinsic magnetic perturbation by displacing the sample from the original, centred position (Δ x = 8 mm, Δ y = 12 mm, Δ θ = 15°) ( b ), an extrinsic mechanical perturbation by applying an external mechanical load (around 0.1 g) on a serpentine beam ( c ) and intrinsic damage by cutting one beam open, causing substantial changes in both mechanical and electrical conductivity of the sample ( d ). Left: schematic illustration of the experimental configuration. Middle: optical images and 3D reconstructed surfaces superimposed with an error map. Right: histogram plots of errors. Scale bars, 5 mm. Full size image Guided by the experiment-driven process, the metasurface can also self-adjust to morph against unknown perturbations. Figure 4b–d shows three representative cases in which a 4 × 4 sample morphs with perturbed magnetic field, external mechanical load and intrinsic damage, respectively. In all cases, the model-driven approach following the original inverse design results in absolute maximum errors of around 8–10%. 
In comparison, the experiment-driven approach adapts the shape to reach the target with absolute errors below around 3%, which is comparable with that of an intact sample (around 2%) (Supplementary Video 5 ). The boosted accuracy level demonstrates the ‘self-sustained’ morphing ability enabled by the experiment-driven process. Shape learning and multifunctionality The adaptive, self-evolving metasurface platform delivers a semi-real-time morphing scheme to learn the continuously evolving surface of a real object. In this experiment, a duplicated stereo-imaging setup measures the displacement of a 4 × 4 array of markers (with interspacing a 0 = 15 mm) on the palm (Extended Data Fig. 8a ). The optimization acts directly to minimize the displacement difference between the 16 markers and their corresponding nodes in the 4 × 4 sample. Given continuity, the gradient-descent process takes the last morphing result as the initial state for the next morphing task. This differential method (with the target descent Δ f ( V ) ≈ 0.0032) requires only at most three iterations (approximately 20 s) to reach the optimum. Figure 5a shows the representative frames from a video recording a hand making eight gestures with different fingers moving (see Supplementary Video 6 and Extended Data Fig. 8b,c for complete results of all gestures). All morphing results agree with the target with absolute errors below 2%. Fig. 5: Self-evolving shape morphing towards semi-real-time shape learning and multifunctionality. a , Morphing results of representative frames from a recording of a hand making eight gestures with different fingers moving. b , Schematic illustration of a 3 × 3 sample with gold patches (2 mm × 2 mm in size, 0.3 µm in thickness) mounted on the nodes reflecting a laser beam from an incident angle. A top-positioned camera monitors the laser spot projected on a paper screen. c , An optical image of a 3 × 3 sample with nine reflective patches morphing via a hybrid experiment-driven and model-driven process to perform two functions: (1) reflecting and overlapping two laser beams (red, green) with different incident angles ([ θ X r , θ Z r ], [ θ X g , θ Z g ]) and (2) achieving the target displacement (−0.5 mm) of the central node ( u 5 ) of the sample. d , Imaging of the screen from the camera provides experimental feedback of the distance between the two laser spots. e , Model predictions of the displacement profile of the sample (cross-sectional view) when overlapping the laser spots with the highest-possible (blue), lowest-possible (green) and optimized (red) central positions. Ex situ stereo imaging provides 3D reconstructed measurement of the optimized deformation (orange) that validates the in situ model predictions. Scale bars, 5 mm. Full size image In addition to self-evolving to optimize shapes, the metasurface can self-evolve to optimize functions. Setting multiple target functions drives the optimization towards emergent multifunctionality, with the ability to decouple naturally coupled functions. Figure 5b,c illustrates a scheme in which a 3 × 3 sample ( L = W = 14.8 mm, L N / M = 4.06 mm) with nine reflective gold patches at the nodes attempts to perform an optical and a structural function: (1) reflecting and overlapping two laser beams (red, green) with different incident angles ([ θ X r , θ Z r ], [ θ X g , θ Z g ]) on a receiving screen (Extended Data Fig. 9a ) and (2) achieving the target displacement of the central node of the sample. 
The optimization takes a hybrid strategy combining the model-driven and experiment-driven processes (Supplementary Note 16 ). While the voltages control the reflected beam paths, a top camera provides imaging feedback of the distances between the beam spots on the screen. The model-driven process predicts the difference between the central nodal displacement and the target. The total loss takes a linear combination of the two errors (Extended Data Fig. 9b and Supplementary Note 16 ). Figure 5d shows the self-evolving results of three optical configurations with distinctive incident beam angles. Figure 5e shows that the metasurface can morph to overlap the laser spots on the receiving screen with a range of possible shapes (Extended Data Fig. 10a ). By enforcing both functions, the sample overlaps the spots and settles its central node to a target displacement. A post-analysis through ex situ 3D imaging validates that the final experimental central nodal displacement reaches the target within an error of ±2% (Supplementary Video 7 and Extended Data Fig. 10b ). Discussion The work presents a reprogrammable metasurface that can precisely and rapidly morph into a wide range of target shapes and dynamic shape processes. The Lorentz-force-driven serpentine mesh construction supports an approximately linear input–output response with easily accessible solutions to the inverse problem. The highly integrable digital–physical interfaces incorporating actuation, sensing and feedback enable an in-loop optimization process to attain model-free solutions when the system deviates from the linear, time-invariant response. The experiment-driven shape-shifting capability addresses theoretical and computational challenges in complex, non-linear systems, bringing new opportunities for physical simulation in a real-time, data-driven inverse design process. Such a scheme enables an autonomous materials platform to promptly change structures, actively explore the design space and responsively reconfigure functionalities. The platform is compatible with the typical materials, structures and thin-film fabrication techniques used in existing flexible electronics frameworks. It supports optimized choices of materials, geometries, layouts, control systems and magnetic setups for design flexibility and potential scalability, which promises wide, versatile application scenarios in wearable technologies, soft robotics and advanced materials. Many possibilities exist to improve this system, such as incorporating a mechanical locking mechanism (for example, phase-transition materials 21 , 39 or a jamming configuration 40 could hold the morphed shapes without actuation). Exploring constructions with low in-plane stiffness will enable additional deformation modes of the metasurface (Supplementary Fig. 34 ). The demonstration of the current modular platform invites higher levels of integration to embed functional materials and components into the morphing matter, to support on-board power sources (supercapacitors), sensors (strain gauges), feedback control mechanisms (analogue devices), computational resources (microcontrollers) and wireless communication capabilities (radios). 
Using advanced data-driven techniques in the loop (for example, Bayesian optimization, deep learning and reinforcement learning) will enhance the capabilities of self-evolving designs for artificial matter in pursuit of functions or performance inspired by those in their natural counterparts, paving the way for new classes of intelligent materials that adopt spatiotemporally controlled shapes and structures for advanced on-demand functionalities. Methods Sample fabrication The fabrication process (Supplementary Fig. 1 ) began with the spin coating of a thin layer of PI (HD Microsystems PI2545, 3.75 μm in thickness) on a silicon wafer with poly(methyl methacrylate) (Microresist 495 A5, 0.08 µm in thickness) as the sacrificial layer. Subsequent lift-off processes patterned the metal electrodes and serpentine connections (Ti/Au, 10 nm/300 nm in thickness). Spin coating another layer of PI (HD Microsystems PI2545, 3.75 μm in thickness) covered the metal pattern. Photolithography and oxygen plasma etching of PI defined the outline of the sample. Undercutting the bottom layer of poly(methyl methacrylate) allowed the transfer of the sample to a water-soluble polyvinyl alcohol tape (3M) from the silicon wafer. Digital control The digital control system used (1) pulse-width modulation (PWM) drivers (PCA9685, 16-channel, 12-bit), (2) voltage amplifier circuits (MOSFET, IRF510N, Infineon Tech) and (3) a single-board computer (Raspberry Pi 4) remotely connected to an external computer (Intel NUC, Intel Core i7-8559U CPU@2.70 GHz). The external computer ran the optimization algorithm and sent the updated values of the voltages wirelessly to the single-board computer through Python Socket network programming. The PWM driver received the actuation signals from the single-board computer. Each PWM channel, operated at a frequency of 1,000 Hz, generated an independent voltage in the range of 0–6 V with 12-bit (around 0.0015 V) resolution. The single stage MOSFET provided a reversely linear amplification to the PWM output with a gate voltage, V gs(th) = 4 V, and an external power supply, V ex = 6 V (Supplementary Fig. 7 ). Optimization algorithm Sequential least squares programming with a three-point method (SciPy-Python optimize.minimize function) computed the Jacobian matrix in the loop to minimize the loss function f ( V ). The model-driven approach adopted the same optimization algorithm, with f ( V ) evaluated by equation ( 1 ) and a maximum of around 10,000 iterations. For the experiment-driven approach, a maximum final loss value of 0.005 f ( V = 0 ) and a maximum of 15 iterations set the stopping criteria for the optimization process. Each iteration required 4( N + M ) + 2 function evaluations for an N × M sample (Supplementary Note 6 ). Target shapes of the droplets The target shapes in Fig. 2a were reconstructed from the images in Fig. 3 in ref. 41 . The target shapes in Fig. 2b were reconstructed from frames of the supplementary video in ref. 42 . 3D models of the target shapes were built and rendered using Solidworks (Dassault Systèmes). The target shapes and slow-motion video in Supplementary Video 2 were reconstructed (00:00:15–00:04:23, 0.6× playback) and reproduced from ref. 42 . 3D imaging The multiview stereo-imaging platform consisted of two cameras (Webcams, ELP, MI5100, 3,840 × 2,160-pixel resolution, 30 frames per second) connected to the external computer taking top-view images of the sample from symmetric angles (Supplementary Fig. 30a ). 
A calibration algorithm (OpenCV-Python calibrateCamera function) applied to a collection of images of a chequerboard (custom-made, 7 × 8 squares, 2 mm × 2 mm per square) returned the camera matrix, distortion coefficients, rotation and translation vectors to correct for the lens distortion of the images (OpenCV-Python undistort function). The nodes of the mesh samples provided a distinguishable geometry for image registration. A template matching algorithm (OpenCV-Python matchTemplate function) returned the locations of the nodes in the images from the two cameras. A perspective projection algorithm (OpenCV-Python reprojectImageTo3D function) transformed the disparity map into the nodal heights in units of pixels. An additional side camera provided ground-truth measurements of the displacement of the discernible nodes and a linear-model prediction of the 3D-reconstructed nodal displacement (Supplementary Fig. 30b,c and Supplementary Note 14 ). Data availability All data are contained within the manuscript. Raw data are available from the corresponding authors upon reasonable request. Code availability The codes that support the findings of this study are available from the corresponding authors upon reasonable request.
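To illustrate how the pieces described in the Methods fit together, the sketch below wraps a measurement step inside the SciPy optimizer named in the ‘Optimization algorithm’ section (sequential least squares programming with an in-loop three-point Jacobian), minimizing f(V) = Σ e_i²(V) under a maximum-current constraint. The measurement and current functions are hypothetical stubs standing in for the PWM/stereo-imaging hardware, and the hidden linear response and resistance value are invented so the example runs end-to-end; this is a sketch under those assumptions, not the authors' implementation.

```python
import numpy as np
from scipy.optimize import minimize

L = 18.0                     # sample size (mm) of the 4 x 4 mesh
N_NODES, N_PORTS = 16, 16    # N*M nodes, 2(N + M) peripheral ports
I_MAX = 27.5e-3              # maximum-current constraint (A)

rng = np.random.default_rng(1)
_C_true = rng.normal(scale=0.15, size=(N_NODES, N_PORTS))  # hidden physics

def measure_displacements(V):
    """Stub for 'apply V via the PWM driver, then read the nodal heights
    from the stereo cameras'; real measurements add imaging noise, which
    the paper notes can require global optimization solvers."""
    return _C_true @ V

def port_currents(V):
    """Stub: crude Ohm's-law estimate of the port currents (R is invented)."""
    return V / 300.0

u_target = _C_true @ rng.uniform(0.0, 2.0, size=N_PORTS)  # a reachable shape

def loss(V):
    e = (measure_displacements(V) - u_target) / L   # e_i = (u_i - u_i*) / L
    return float(np.sum(e ** 2))                    # f(V)

res = minimize(
    loss,
    x0=np.zeros(N_PORTS),
    method="SLSQP",                 # sequential least squares programming
    jac="3-point",                  # finite-difference Jacobian in the loop
    bounds=[(0.0, 6.0)] * N_PORTS,  # 0-6 V PWM output range
    constraints={"type": "ineq",
                 "fun": lambda V: I_MAX - np.abs(port_currents(V))},
    options={"maxiter": 15, "ftol": 1e-6},
)
print(f"final loss {res.fun:.2e} after {res.nit} iterations")
```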
Engineers at Duke University have developed a scalable soft surface that can continuously reshape itself to mimic objects in nature. Relying on electromagnetic actuation, mechanical modeling and machine learning to form new configurations, the surface can even learn to adapt to hindrances such as broken elements, unexpected constraints or changing environments. The research appears online September 21 in the journal Nature. "We're motivated by the idea of controlling material properties or mechanical behaviors of an engineered object on the fly, which could be useful for applications like soft robotics, augmented reality, biomimetic materials, and subject-specific wearables," said Xiaoyue Ni, assistant professor of mechanical engineering and materials science at Duke. "We are focusing on engineering the shape of matter that hasn't been predetermined, which is a pretty tall task to achieve, especially for soft materials." Watch this thin, flexible material teach itself to mimic ocean waves and flexing palms in real time. Credit: Veronique Koch, Duke University Previous work on morphing matter, according to Ni, hasn't typically been programmable; it's been programmed instead. That is, soft surfaces equipped with designed active elements can shift between a few predetermined shapes, like a piece of origami, in response to light, heat or other stimuli. In contrast, Ni and her laboratory wanted to create something much more controllable that could morph and reconfigure as often as desired into nearly any physically possible shape. To create such a surface, the researchers started by laying out a grid of snake-like beams made of a thin layer of gold encapsulated by a thin polymer layer. The individual beams are just eight micrometers thick, about the thickness of a cotton fiber, and less than a millimeter wide. The lightness of the beams allows magnetic forces to easily and rapidly deform them. To generate local forces, the surface is put into a low-level static magnetic field. Voltage changes create a complex but easily predictable electrical current along the golden grid, driving the out-of-plane displacement of the grid. "This is the first artificial soft surface that is fast enough to accurately mimic a continuous shape-shifting process in nature," Ni said. "One key advance is the structural design that enables an unusual linear relationship between the electrical inputs and the resulting shape, which makes it easy to figure out how to apply voltages to achieve a wide variety of target shapes." The new "metasurface" shows off a wide array of morphing and mimicking skills. It creates bulges that rise and move around the surface like a cat trying to find its way out from under a blanket, oscillating wave patterns, and a convincing replication of a liquid drop dripping and plopping onto a solid surface. And it produces these shapes and behaviors at any speed or acceleration desired, meaning it can reimagine that trapped cat or dripped droplet in slow motion or fast forward. With cameras monitoring the morphing surface, the contortionist surface can also learn to recreate shapes and patterns on its own. 
By slowly adjusting the applied voltages, a learning algorithm takes in 3D imaging feedback and figures out what effects the different inputs have on the metasurface's shape. In the paper, a human palm spotted with 16 black dots slowly shifts under a camera, and the surface mirrors the movements perfectly. "The control doesn't have to know anything about the physics of the materials, it just takes small steps and watches to see if it's getting closer to the target or not," Ni said. "It currently takes about two minutes to achieve a new shape, but we hope to eventually improve the feedback system and learning algorithm to the point that it's nearly real-time." Because the surface teaches itself to move through trial and error, it can also adapt to damage, unexpected physical constraints or environmental change. In one experiment, it quickly learns to mimic a bulging mound despite one of its beams being cut. In another, it manages to mimic a similar shape despite a weight being attached to one of the grid's nodes. There are many immediate opportunities to extend the scale and configuration of the soft surface. For example, an array of surfaces could scale the system up to the size of a touchscreen, while higher-precision fabrication techniques could scale it down to one millimeter, making it more suitable for biomedical applications. Moving forward, Ni wants to create robotic metasurfaces with integrated shape-sensing functions to perform real-time shape mimicking of complex, dynamic surfaces in nature, such as water ripples, fish fins or the human face. The lab may also look into embedding more components into the prototype, such as on-board power sources, sensors, computational resources or wireless communication capabilities. "Along with the pursuit of creating programmable and robotic materials, we envision future materials will be able to alter themselves to serve functions dynamically and interactively," said Ni. "Such materials can sense and perceive requirements or information from the users, and transform and adapt according to the real-time needs of their specific performance, just like the microbots in Big Hero 6. The soft surface may find applications as a teleoperated robot, dynamic 3D display, camouflage, exoskeleton or other smart, functional surfaces that can work in harsh, unpredictable environments."
10.1038/s41586-022-05061-w
Physics
Deep learning improves image reconstruction in optical coherence tomography using less data
Yijie Zhang et al, Neural network-based image reconstruction in swept-source optical coherence tomography using undersampled spectral data, Light: Science & Applications (2021). DOI: 10.1038/s41377-021-00594-7 Journal information: Light: Science & Applications
http://dx.doi.org/10.1038/s41377-021-00594-7
https://phys.org/news/2021-07-deep-image-reconstruction-optical-coherence.html
Abstract Optical coherence tomography (OCT) is a widely used non-invasive biomedical imaging modality that can rapidly provide volumetric images of samples. Here, we present a deep learning-based image reconstruction framework that can generate swept-source OCT (SS-OCT) images using undersampled spectral data, without any spatial aliasing artifacts. This neural network-based image reconstruction does not require any hardware changes to the optical setup and can be easily integrated with existing swept-source or spectral-domain OCT systems to reduce the amount of raw spectral data to be acquired. To show the efficacy of this framework, we trained and blindly tested a deep neural network using mouse embryo samples imaged by an SS-OCT system. Using 2-fold undersampled spectral data (i.e., 640 spectral points per A-line), the trained neural network can blindly reconstruct 512 A-lines in 0.59 ms using multiple graphics-processing units (GPUs), removing spatial aliasing artifacts due to spectral undersampling, also presenting a very good match to the images of the same samples, reconstructed using the full spectral OCT data (i.e., 1280 spectral points per A-line). We also successfully demonstrate that this framework can be further extended to process 3× undersampled spectral data per A-line, with some performance degradation in the reconstructed image quality compared to 2× spectral undersampling. Furthermore, an A-line-optimized undersampling method is presented by jointly optimizing the spectral sampling locations and the corresponding image reconstruction network, which improved the overall imaging performance using less spectral data points per A-line compared to 2× or 3× spectral undersampling results. This deep learning-enabled image reconstruction approach can be broadly used in various forms of spectral-domain OCT systems, helping to increase their imaging speed without sacrificing image resolution and signal-to-noise ratio. Introduction Optical coherence tomography (OCT) is a non-invasive imaging modality that can provide three-dimensional (3D) information of optical scattering properties of biological samples. The first generation of OCT systems were based on time-domain (TD) imaging 1 , using mechanical path-length scanning. However, the relatively slow data acquisition speed of the early TDOCT systems partially limited their applicability for in vivo imaging applications. The introduction of the Fourier Domain (FD) OCT techniques 2 , 3 with higher sensitivity 4 , 5 has contributed to a dramatic increase in imaging speed and quality 6 . Modern FDOCT systems can routinely achieve line rates of 50–400 kHz 7 , 8 , 9 , 10 , 11 , 12 and there have been recent research efforts to further improve the speed of A-scans to tens of MHz 13 , 14 . Some of these advances employed hardware modifications to the optical set-up to improve OCT imaging speed and quality, and focused on, e.g., improving the OCT system design, including improvements in high-speed sources 13 , 15 , 16 , also opening up new applications such as single-shot elastography 17 and others 18 , 19 , 20 . Recently, we have experienced the emergence of deep-learning-based image reconstruction and enhancement methods 21 , 22 , 23 to advance optical microscopy techniques, performing e.g., image super resolution 23 , 24 , 25 , 26 , 27 , 28 , autofocusing 29 , 30 , 31 , depth of field enhancement 32 , 33 , 34 , holographic image reconstruction, and phase recovery 35 , 36 , 37 , 38 , among many others 39 , 40 , 41 , 42 . 
Inspired by these applications of deep learning and neural networks in optical microscopy, here we demonstrate the use of deep learning to reconstruct swept-source OCT (SS-OCT) images using undersampled spectral data points. Without the need to perform any hardware modifications to an existing SS-OCT system, we show that a trained neural network can rapidly process undersampled spectral data and match, at its output, the image quality of standard SS-OCT reconstructions of the same samples that used 2-fold more spectral data per A-line. A major challenge in reducing the number of spectral data points in an OCT system without sacrificing resolution is the aliasing artifacts introduced by undersampling. According to the Nyquist sampling theorem, the maximum axial depth within the tissue that can be imaged without spatial aliasing is proportional to 43 : $$z_{\max } \propto \left| {\frac{\pi }{{2 \cdot \delta _{\mathrm{s}}k}}} \right| = \left| {\frac{{\lambda _0^2}}{{4 \cdot \delta _{\mathrm{s}}\lambda }}} \right|$$ (1) where δ s k is the spectral sampling interval in k space, δ s λ is the wavelength sampling interval, and λ 0 is the central wavelength. When the spectral sampling interval increases, it reduces the maximum depth that can be imaged without spatial aliasing artifacts. In our approach, we first reconstructed each A-line with 2× less spectral data (eliminating every other spectral sample), which resulted in severe spatial aliasing artifacts. We then trained a deep neural network to remove these aliasing artifacts that are introduced by spectral undersampling, matching the image reconstruction results that used all the available spectral data points. To demonstrate the success of this deep learning-based OCT image reconstruction approach, we used an SS-OCT 3 system to image murine embryo samples. The trained neural network successfully generalized, and removed the spatial aliasing artifacts in the reconstructed images of new embryo samples that were never seen by the network before. We further extended this framework to process 3× undersampled spectral data per A-line, and showed that it can be used to remove even more severe aliasing artifacts that are introduced by 3× spectral undersampling, although at the cost of some degradation in the reconstructed image quality compared to 2× spectral undersampling results. As an alternative approach, we also introduced an A-line-optimized spectral sampling framework to further reduce the acquired spectral data per A-line. The spectral sampling locations and the corresponding OCT image reconstruction network were jointly optimized during the training process, allowing this method to use less spectral data, while achieving better image reconstruction performance compared to 2× or 3× spectral undersampling results. In addition to overcoming spectral undersampling related image artifacts, the inference time of the deep neural network is also optimized, achieving an average image reconstruction time of 6.73 ms for 512 A-lines, processed all in parallel using a desktop computer; this inference time is further improved to 0.59 ms by simplifying the neural network architecture and using multiple GPUs. We believe that this deep learning-based OCT image reconstruction method has the potential to be integrated with various swept-source or spectral-domain OCT systems, and can potentially improve the 3D imaging speed without a sacrifice in resolution or signal-to-noise of the reconstructed images. 
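To make the undersampling and aliasing concrete, here is a minimal numpy sketch of the input-generation step on a synthetic one-reflector fringe (illustrative only, not real OCT data). The parity kept here (even-indexed samples) is chosen so that a clean FFT identity holds; the paper instead removes the even elements, which only changes a phase factor. The final assertion is the identity behind the faster 'squeeze-and-tile' pre-processing variant reported later in the Discussion.

```python
import numpy as np

n_spec = 1280                                  # spectral points per A-line
k = np.arange(n_spec)
fringe = np.cos(2 * np.pi * 0.31 * k) * np.hanning(n_spec)  # toy reflector

a_full = np.fft.fft(fringe)                    # full A-line (complex)

# 2x undersampling + zero interpolation: zero out every other sample.
fringe_zi = fringe.copy()
fringe_zi[1::2] = 0.0                          # keep 640 of 1280 samples

a_alias = np.fft.fft(fringe_zi)                # aliased A-line (network input)

# The toy reflector near bin 397 exceeds the halved unambiguous depth
# (320 bins), so it wraps back into the shallow half of the aliased A-line.
print(np.abs(a_full[:640]).argmax(), np.abs(a_alias[:320]).argmax())

# Identity behind the faster 'squeeze-and-tile' pre-processing variant
# discussed later: the FFT of the zero-interleaved fringe equals the FFT
# of the squeezed (length-640) fringe tiled twice.
assert np.allclose(a_alias, np.tile(np.fft.fft(fringe[0::2]), 2))
```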
Results To demonstrate the efficacy of this deep learning-based OCT image reconstruction framework, which we term DL-OCT, we trained and tested a deep neural network (see “Materials and methods” section) using SS-OCT images acquired on mouse embryo samples. Our 3D image data set consisted of eight different embryo samples, of which five were used for training and the other three for blind testing. For each one of these embryo samples, 1000 B-scans (where each B-scan consists of 5000 A-lines, and each A-line has 1280 spectral data points) were collected by the SS-OCT system shown in Fig. 1a ; see “Materials and methods” section for more details. During the network training phase, the original OCT fringes per A-line were first reconstructed using a Fourier transform-based image reconstruction algorithm to form the network’s target (i.e., ground truth) images. Then, the same spectral fringes were 2× down-sampled (by eliminating every other spectral data point), zero interpolated, and reconstructed using the same Fourier transform-based image reconstruction algorithm to form the input images of the network, each of which showed severe aliasing artifacts due to the spectral undersampling (Figs. 1 and 2 ). Both the real and imaginary parts of these aliased OCT images were used as the network input, while only the amplitude channel of the ground truth was used as the target image during the training phase. After the network training process, which is a one-time effort taking, e.g., ~18 h using a desktop computer (see “Materials and methods” section), the trained neural network successfully generalized and could reconstruct the images of unknown, new samples that were never seen by the network before, removing the aliasing related artifacts as shown in Fig. 1 . Figure 2 further reports a detailed comparison of the network’s input, output, and ground truth images corresponding to different fields of view of mouse embryos, also quantifying the absolute values of the spatial errors made. Fig. 1: Schematic of the DL-OCT image reconstruction framework. a Training phase of DL-OCT. Raw OCT fringes were captured by an SS-OCT system. The network target (ground truth) was generated by direct reconstruction of the original OCT fringes as detailed in the “Materials and methods” section. The network input was generated by a 2-fold down-sampling of the spectral data for each A-line, zero interpolation, and reconstruction of the resulting fringes. b Testing phase of the DL-OCT. We passed the 2× undersampled OCT image (real and imaginary parts) through a trained network model to create an aliasing-free OCT image, matching the ground-truth reconstruction that used all the spectral data points (see “Materials and methods” section for details). Full size image Fig. 2: Blind testing performance of the DL-OCT framework. The network input, output, and ground-truth images of three mouse embryo samples (never seen by the network before) at different fields-of-view are shown in the first three columns. The error maps of the input and output with respect to the corresponding ground-truth image are provided in the 4th and 5th columns, respectively. The peak signal-to-noise ratio (PSNR) and structural similarity index (SSIM) values were also computed and displayed for each one of these sample fields-of-view. Full size image The reconstruction results reported in Figs. 1 and 2 clearly reveal that the trained network does not simply keep the connected upper part of the input image as the output. 
For example, in Fig. 2 g, the signal in the ground truth image crosses both the upper and the lower parts of the field-of-view, and in the red circled region, there is an abrupt change, breaking the horizontal connectivity of the image. The DL-OCT network learned to reconstruct the output images by utilizing a combination of the vertical morphological information exhibited in the target images and the special corrugated patterns caused by aliasing. In an OCT system, the illumination beam naturally forms an axially decaying pattern, where the surfaces or structural discontinuities usually have a stronger signal than the internal structure of the sample 43 . This characteristic information was effectively captured by the neural network inference, as shown, for example, in Fig. 2g . This also explains the occasional weak artifacts observed at the network output (see e.g., the yellow circled region in Fig. 2g ) for features that lack detectable morphological information along the vertical axis. In general, the trained neural network uses both the vertical and horizontal information in the input image (within its receptive field) to remove various challenging forms of aliasing artifacts such as those emphasized with red color in Fig. 2d . Next, to quantify the performance of DL-OCT image reconstructions, two quantitative metrics were calculated for 13,131 different test image patches: peak signal-to-noise ratio (PSNR) and the structural similarity index (SSIM) (see “Materials and methods” section for details). PSNR is a non-normalized metric that provides an estimate of the human perception of the image reconstruction quality. For images with pixels ranging from 0 to 1 with double-precision (such as the test images in our framework), a 20–30 dB PSNR value is generally acceptable for noisy target images 44 . The SSIM, on the other hand, is a normalized metric that focuses more on the structural similarity between two images. This metric can take a value between 0 and 1 (where 1 represents an image that is identical to the target) 44 . Overall, compared to the target (ground truth) images that used all the spectral data points, the spectrally undersampled input images with aliasing artifacts achieved a PSNR and an SSIM of 18.3320 dB and 0.2279, respectively, averaged over 13,131 test image patches. Both of these metrics were significantly improved at the network’s output images, achieving 24.6580 dB and 0.4391, respectively, also averaged over 13,131 test image patches. Some examples of these image comparisons with the resulting PSNR and SSIM values are also reported in Fig. 2 . To further test the robustness of the DL-OCT approach, it was also tested on other types of samples (i.e., human finger, human nail, human palm, human wrist, the limbus of the human eye, anterior chamber of the human eye, and mouse eye). In total, 7 different samples for each type of tissue were imaged (except for mouse eye, where only 4 samples were imaged) by another SS-OCT imaging system (see Supplementary Methods for details). A single image reconstruction network was trained with all these types of tissue, with one sample of each type reserved for blind testing. During the testing phase, the network consistently achieved high-quality image reconstructions (Supplementary Fig. S4 ) and obtained an average PSNR of 28.7683 dB and an SSIM of 0.7239 on all the testing image patches (see Supplementary Methods for details). 
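For reference, here are minimal numpy implementations of the two metrics as defined later in the Materials and methods (equations (4)–(6)), computed on synthetic stand-in images. The C1 and C2 values follow the common (0.01·MAX)² and (0.03·MAX)² convention, which is an assumption: the paper only describes them as constants that avoid division by zero, and it additionally thresholds real OCT images above the 70 dB noise floor before computing the metrics.

```python
import numpy as np

def psnr(target, pred, max_val=1.0):
    """PSNR per equations (4)-(5): 10 * log10(MAX^2 / MSE)."""
    mse = np.mean((target - pred) ** 2)
    return 10.0 * np.log10(max_val ** 2 / mse)

def ssim_global(a, b, max_val=1.0):
    """Single-window SSIM per equation (6). The C1, C2 values below follow
    the common (0.01*MAX)^2 / (0.03*MAX)^2 convention, an assumption here:
    the paper only says they are constants that avoid division by zero."""
    c1, c2 = (0.01 * max_val) ** 2, (0.03 * max_val) ** 2
    mu_a, mu_b = a.mean(), b.mean()
    var_a, var_b = a.var(), b.var()
    cov_ab = np.mean((a - mu_a) * (b - mu_b))
    return ((2 * mu_a * mu_b + c1) * (2 * cov_ab + c2)) / (
        (mu_a**2 + mu_b**2 + c1) * (var_a + var_b + c2))

rng = np.random.default_rng(0)
gt = rng.random((640, 640))                       # stand-in ground truth
out = np.clip(gt + rng.normal(scale=0.05, size=gt.shape), 0.0, 1.0)
print(f"PSNR {psnr(gt, out):.2f} dB, SSIM {ssim_global(gt, out):.4f}")
```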
We also used spatial frequency analysis to further quantify our network inference results against the ground truth images. To perform this comparison, we converted the network input, output, and ground truth images into the spatial frequency domain by performing a 1D Fourier transform along the vertical axis (for each A-line). The results of this spatial frequency comparison for each A-line are shown in Fig. 3d–f , which further reveal the success of the network’s output inference, closely matching the spatial frequencies of the corresponding ground truth image. The quantitative comparison in Fig. 3g–i also demonstrates that the network output very well matches the ground truth images for both the low and high-frequency parts of a sample. Fig. 3: Frequency spectrum analysis of DL-OCT. a – c SS-OCT images of a sample field-of-view, forming the network input, network output and ground truth, respectively. d – f Log-scaled spatial frequency spectra of a – c represented in spectral-spatial domain using 1D Fourier transform along the A-line direction of each image. g , h Averaged intensity of the spectral profile over two specific spatial regions ( ➀ and ➁ shown in f ). i is the same as in g , h , except that it is averaged over the entire spatial axis, shown in d – f . Full size image Discussion In our results reported so far, we used zero interpolation to pre-process the 2× undersampled spectral data per A-line, before generating the network’s input image with severe spatial aliasing. Alternatively, zero-padding is another method that can be used to pre-process the undersampled spectral data for each axial line. However, other spectral interpolation methods such as the nearest neighbor, linear, or cubic interpolation may result in various additional artifacts due to the non-smooth structure of each spectral line. We performed a comparison of these different interpolation methods used to pre-process the same undersampled spectral data, the results of which are summarized in Fig. 4 ; in these results, each DL-OCT network was separately trained using the same undersampled spectral data, pre-processed using a different interpolation method. Among these interpolation methods, cubic interpolation was found to generate the most severe spatial artifacts at the network output. Both zero padding and zero interpolation methods shown in Fig. 4 consistently resulted in successful image reconstructions at the network output, removing aliasing artifacts observed at the input images, providing a decent match to the ground truth. On the contrary, other interpolation methods, such as cubic interpolation, introduced additional artifacts at the network output image (see, e.g., the red circled region in Fig. 4c ) due to the inconsistent interpolation of missing spectral data points at the input. To further quantify this comparison, we also calculated the SSIM and PSNR values between the network output images and the corresponding ground truth SS-OCT images for five different pre-processing methods (Table 1 ). This quantitative analysis reported in Table 1 reveals that the zero interpolation method (presented in the “Results” section) achieves the highest PSNR and SSIM values for reconstructing SS-OCT images using a 2-fold undersampled spectrum per A-line. 
It is also worth noting that the zero interpolation and zero padding methods achieve very close quantitative results, and significantly outperform the other spectral interpolation methods, including cubic, linear and nearest-neighbor interpolation, as summarized in Table 1 . Fig. 4: Comparison of different pre-processing methods for DL-OCT. a Raw SS-OCT spectral fringes and the corresponding reconstructed OCT image (ground truth), where f c indicates the cut-off frequency of the spectral data. b 2-fold undersampled OCT fringes. c Undersampled OCT fringes that are pre-processed using different interpolation methods. Three separate neural networks were trained, one for each of the pre-processing methods, to generate the network outputs. PSNR and SSIM values are also displayed for each one of these fields-of-view. Full size image Table 1 Comparison of PSNR and SSIM values between the network output images and the corresponding ground truth SS-OCT images for five different pre-processing methods (also see Fig. 4 ). Full size table However, all these interpolation/padding methods require a similar amount of time to generate the network input images compared to reconstructing the conventional OCT images without undersampling, which might partially limit the adaptability of DL-OCT to high-speed imaging applications. An alternative pre-processing method that requires approximately m- fold less reconstruction time for m × spectral undersampling is reported in Supplementary Information. This method squeezes the spectral data by m- fold compared to its original size after undersampling, and applies a Fast Fourier Transform (FFT) directly onto the squeezed spectral data. Then, through simple copy/flip and concatenation processes, a network input that is equivalent to the zero interpolation method can be obtained (Supplementary Methods). Visual inspection and quantitative results also suggest that this method can achieve identical performance to the zero interpolation method (Supplementary Fig. S2 and Supplementary Table S1 ). We also analyzed the inference speed of the trained DL-OCT network to reconstruct SS-OCT images with undersampled spectral measurements. For a batch size of 128 B-Scans, where each B-scan consists of 512 A-lines (with 640 spectral data points per A-line), the neural network is able to output a new OCT image in ~6.73 ms per B-scan using a desktop computer (Fig. 5 ). This inference time can be further reduced with some simplifications made in the neural network architecture; for example, a reduction of the number of channels from 48 to 16 at the first layer of the neural network (Fig. 6 ) helped us reduce the average inference time down to ~1.69 ms per B-scan (512 A-lines). Through visual inspection, one can see that the 16-channel network reconstructs decent OCT images, comparable with the 48-channel network results (shown in Fig. 5 ). Quantitatively compared using 13,131 image patches, the average SSIM and PSNR values degraded, due to the reduced number of channels, from 0.4391 to 0.4122 and from 24.6580 dB to 24.2523 dB, respectively. Furthermore, with additional parallelization through the use of a larger number of GPUs, the inference speed per B-scan can be further improved. For example, with the use of 8 NVIDIA Tesla A100 GPUs (Nvidia Corp., Santa Clara, CA, USA) in parallel, the inference time was further reduced to ~1.42 ms and ~0.59 ms per B-scan for 48-channel and 16-channel networks, respectively (shown in Fig. 5 ). 
This can be used to better serve various applications that demand rapid reconstruction of 3D samples. Fig. 5: DL-OCT inference time as a function of the B-Scan batch size for blind testing. a With increasing batch size, the average inference time per B-Scan (512 A-lines) rapidly decreases owing to the parallelizable nature of the neural network computation. The average inference time converged to ~6.73 ms per B-Scan for a batch size of 128. If the number of channels in the neural network’s first layer is reduced from 48 down to 16, the average inference time further improves to ~1.69 ms per B-Scan. Our GPU memory size limited further reduction of the average inference time of DL-OCT. By using 8 NVIDIA Tesla A100 GPUs in parallel, the inference time was further reduced to ~1.42 ms and ~0.59 ms per B-scan for the 48-channel and 16-channel networks, respectively. All inference times were obtained by averaging 1000 independent runs, computed on a desktop computer (see “Materials and methods” section). b Sample fields-of-view are shown for network input, network output (using 48 channels vs. 16 channels in the first layer), and ground truth images. PSNR and SSIM values are also displayed for each one of these fields-of-view. Full size image Fig. 6: Network architecture of the encoder-decoder used in DL-OCT framework. A modified U-net architecture with residual connections was used to eliminate the aliasing-related spatial artifacts due to undersampled spectral data. Full size image Finally, we explored whether DL-OCT can be extended to use an even smaller number of spectral data points ( N spec ) per A-line to perform an image reconstruction. First, we investigated the case for 3× undersampled spectral data per A-line. For this, we used the same neural network architecture as before, which was this time trained with input SS-OCT images that exhibited even more extensive spatial aliasing since, for every spectral data point that was kept, two neighboring wavelengths were dropped, resulting in N spec = 427 spectral data points contributing to an A-line, whereas the ground truth images of the same samples had 1280 spectral measurements per A-line. In addition to this, we implemented an A-line-optimized undersampling method, where the number of spectral data points per A-line was further reduced to N spec = 407 (see “Materials and methods” section). The image reconstruction results for the 3× undersampling method ( N spec = 427) and A-line-optimized undersampling method ( N spec = 407) are reported in Fig. 7 , in comparison with the 2× undersampling method ( N spec = 640). This comparison in Fig. 7 reveals that, while DL-OCT can successfully process 3× undersampled spectral data with decent image reconstructions at its output, it also starts to exhibit some spatial artifacts in its inference when compared with the ground truth images of the same samples (see, e.g., the red marks in Fig. 7 ). Furthermore, we observe that the A-line-optimized undersampling method can visually achieve almost identical performance to the 2× undersampling results. A quantitative comparison of these three methods is reported in Table 2 . 
It is worth mentioning that the A-line-optimized undersampling method achieved the best quantitative reconstruction performance among the three methods (Table 2 ) because this framework can learn and optimize both the A-line spectral undersampling grid and the OCT image reconstruction neural network, which makes it easier for this framework to better fit the target data and imaging task. Fig. 7: Comparison of DL-OCT blind testing results using 3× undersampled, 2× undersampled, and A-line-optimized input spectral data. Three mouse embryo samples (never seen by either of these DL-OCT networks) are imaged for blind testing. PSNR and SSIM values are also displayed for each one of these fields-of-view. Ground truth images used 1280 spectral data points per A-line, whereas 2× and 3× DL-OCT networks used N spec = 640 and N spec = 427 spectral data points per A-line, respectively. The A-line-optimized network used N spec = 407 spectral data points per A-line; also see Supplementary Fig. S5 . Full size image Table 2 Comparison of PSNR and SSIM values between the network output images and the corresponding ground truth SS-OCT images for three different undersampling methods using zero interpolation (also see Fig. 7 ). Full size table In summary, we demonstrated the ability to rapidly reconstruct SS-OCT images using a deep neural network that is fed with undersampled spectral data. This DL-OCT framework, with its rapid and parallelizable inference capability, has the potential to speed up the image acquisition process for various SS-OCT systems without the need for any hardware modifications to the optical setup. Although the efficacy of this presented framework was demonstrated using an SS-OCT system, DL-OCT can also be used in various spectral-domain OCT systems that acquire spectral interferometry data for 3D imaging of samples. Materials and methods Data acquisition All the animal handling and related procedures were approved by the Baylor College of Medicine (University of Houston, USA) Institutional Animal Care and Use Committee and adhered to its animal manipulation policies. The animal protocol for mouse embryo imaging was the University of Houston (UH) 16-026. The mouse eye imaging reported in the Supplementary Information was under animal protocol UH: PROTO202000028. All human skin and human eye samples in the Supplementary Information were obtained under IRB UT Health (University of Texas Health Science Center) HSC-MS-16-0383 and UH: STUDY00001723, respectively. Timed matings of CD-1 mice were set up overnight. The presence of a vaginal plug was considered 0.5 days post coitum (DPC). At 13.5 DPC, embryos ( N = 8) were dissected out of the mother and immediately prepared for OCT imaging. Special care was taken to ensure that the yolk sac was not damaged during dissection. The embryos were immersed in Dulbecco’s Modified Eagle Media (DMEM) in a standard culture dish and imaged with the SS-OCT system (OCS1310V2, Thorlabs Inc., NJ, USA). The OCT system had a central wavelength of ~1300 nm, a sweep range of ~100 nm, and an incident power of ~12 mW. The axial and transverse resolutions of the system have been characterized as ~12 µm and ~10 µm, respectively, in air. More details on the performance of the OCT system can be found in previous work 45 . In this work, a sample area of 12 mm × 12 mm × 6 mm ( X , Y , Z ) was imaged. Each raw A-scan consisted of 1280 spectral data points that were sampled linearly in the wavenumber domain by a k-clock on the OCT system. 
3D imaging was performed by raster scanning the OCT beam across the sample with a pair of galvanometer-mounted mirrors. Each B-scan consisted of 5000 A-scans, and each sample volume consisted of 1000 B-scans. Image processing After the data acquisition, the raw OCT fringes were processed using 2× down-sampling (by eliminating every other spectral data point), followed by zero interpolation to generate the 2× spectrally undersampled SS-OCT reconstruction (which is used as the network input). Reconstruction of the target SS-OCT image (ground truth) from the raw spectral data was performed in multiple steps. First, to decrease the effect of sharp transitions and spectral leakage, each raw A-scan was windowed with a Hanning window. Next, the filtered fringes were processed by an FFT to obtain the complex OCT data. Then, the norm of the complex vector was converted to dB scale, and the complex conjugate was discarded. A background subtraction step was performed by subtracting the mean of all the A-scans in each OCT volume from each A-scan. The resulting B-scans (after the background subtraction and windowing) were used as the network training targets (ground truth). For 2× down-sampling of the measured spectral data points, the even elements of the acquired spectrum for each A-line were removed. For the 3× down-sampling results reported in Fig. 7 , two successive spectral measurements were eliminated, in a repeating manner, for each spectral data point that was kept. Next, zeros were interpolated at the exact positions where the spectral data points had been removed. Then, the mean of the zero-interpolated spectral data was subtracted out before applying the FFT. Both the real and imaginary parts of the down-sampled OCT complex data resulting from the FFT were kept as input data for the network. Each pair of input and ground truth images was normalized to zero mean and unit variance before being fed into the DL-OCT network.
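For concreteness, this pre-processing chain can be sketched in a few lines of numpy. The sketch below is a simplified illustration under stated assumptions (a per-image background subtraction instead of the volume-level mean, an arbitrary kept-point parity for the undersampling grid, and a small epsilon guarding the logarithm), not the authors' released code:

import numpy as np

def make_training_pair(fringes, keep_every=2):
    # fringes: (n_alines, 1280) raw spectral interferograms, k-linear sampling.
    n = fringes.shape[-1]
    # Ground truth: Hanning window -> FFT -> dB magnitude, conjugate half discarded.
    windowed = fringes * np.hanning(n)
    half = np.fft.fft(windowed, axis=-1)[:, : n // 2]
    target = 20.0 * np.log10(np.abs(half) + 1e-12)
    target -= target.mean()                      # background subtraction (simplified)
    # Network input: drop points, zero-interpolate, subtract mean, FFT, keep Re/Im.
    grid = np.zeros(n, dtype=bool)
    grid[::keep_every] = True                    # keep 1 of every `keep_every` points
    sparse = np.where(grid, fringes, 0.0)        # zeros at the removed positions
    sparse = sparse - sparse.mean(axis=-1, keepdims=True)
    spec = np.fft.fft(sparse, axis=-1)[:, : n // 2]
    inp = np.stack([spec.real, spec.imag], axis=-1)
    # Normalize each image to zero mean and unit variance before training.
    inp = (inp - inp.mean()) / inp.std()
    target = (target - target.mean()) / target.std()
    return inp, target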
DL-OCT network architecture, training, and validation For DL-OCT, we used a modified U-net architecture 46 as shown in Fig. 6 . Following the processing of the down-sampled OCT reconstructions and the regular OCT images (ground truth images, using all the spectral data points), the resulting volumetric images were partitioned into patches of 640×640 pixels, forming training image pairs (B-scans); all blank image pairs (without sample features) were removed from training. The training loss function was defined as: $$l = \mathrm{L}_1\left\{ z_{\mathrm{label}}, \mathrm{G}\left( x_{\mathrm{input}} \right) \right\}$$ (2) where G(·) refers to the output of the neural network, z label denotes the ground truth SS-OCT image without undersampling, and x input represents the network input. The mean absolute error (L 1 norm) was used to regularize the output of the network and ensure its accuracy. The modified U-net architecture shown in Fig. 6 has five down-blocks followed by five up-blocks. Each down-block consists of two convolution layers and their activation functions, which together double the number of channels. A max-pooling layer with a stride and kernel size of two is added after the two convolution layers to downsample the features. The up-blocks first upscale the output of the preceding block by a factor of two using bilinear interpolation; then two convolution layers and their activation functions, which halve the number of channels, are applied after the upscaling. Between each pair of up- and down-sampling blocks at the same level, a skip connection concatenates the output of the down-block with the up-sampled images, enabling the features to be passed directly at each level. After these down- and up-blocks, a convolution layer is used to reduce the number of channels to one, which corresponds to the reconstructed output image, approximating the ground truth OCT image. Throughout the U-net structure, the convolution filter size is set to 3×3; the output of these filters is followed by a Leaky ReLU (Rectified Linear Unit) activation function, defined as: $$\mathrm{LeakyReLU}\left( x \right) = \begin{cases} x & \text{for } x > 0 \\ 0.1x & \text{otherwise} \end{cases}$$ (3) The learnable variables were updated using the adaptive moment estimation (Adam 47 ) optimizer with a learning rate of 10 −4 . The batch size for training was set to 3. Quantitative metrics PSNR is defined as: $$\mathrm{PSNR} = 10 \times \log_{10}\left( \frac{\mathrm{MAX}_{\mathbf{I}}^2}{\mathrm{MSE}} \right)$$ (4) where MAX I is the maximum possible pixel value of the ground truth image and MSE is the mean squared error between the two images being compared, defined as: $$\mathrm{MSE} = \frac{1}{n^2}\sum_{i=0}^{n-1} \sum_{j=0}^{n-1} \left[ \mathbf{I}\left( i,j \right) - \mathbf{K}\left( i,j \right) \right]^2$$ (5) where I is the target image and K is the image that is compared with the target. SSIM is defined as: $$\mathrm{SSIM}\left( a,b \right) = \frac{\left( 2\mu_a\mu_b + C_1 \right)\left( 2\sigma_{a,b} + C_2 \right)}{\left( \mu_a^2 + \mu_b^2 + C_1 \right)\left( \sigma_a^2 + \sigma_b^2 + C_2 \right)}$$ (6) where μ a and μ b are the mean values of the two images being compared, a and b ; σ a and σ b are their standard deviations; σ a,b is the cross-covariance of a and b ; and C 1 and C 2 are constants used to avoid division by zero. Note that both the PSNR and SSIM metrics can be affected by background noise in an OCT image. Therefore, to compute these two metrics we used the network output and target (ground truth) images that are above the noise level (70 dB in our SS-OCT system) and then converted them into grayscale with a range from 0 to 1, using double precision. A-line-optimized spectral undersampling method The workflow of the A-line-optimized undersampling method is shown in Fig. 8 . The 2× undersampling method was used as the baseline, and further optimization/learning was applied on top of it to use even fewer spectral data points for OCT image reconstruction. A continuous trainable vector was first generated and binarized by thresholding (with a threshold of T = 0.5, shown by the red dashed line in Fig. 8 ) to form a binary grid. This binary grid was then applied to the regular 2× undersampling grid to generate the final optimized undersampling grid with a total number of spectral data points below 640. After the optimized undersampling grid was obtained, the same pre-processing and U-net training protocol was adopted as in the regular 2× undersampling method. During the network training process, the continuous trainable vector (for spectral sampling) and the variables of the U-net were jointly optimized by the backpropagated gradient of the training loss.
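This joint optimization requires gradients to flow through the thresholding step, which is not differentiable; a common workaround is a straight-through gradient estimator. Since the exact gradient handling is not spelled out above, the following TensorFlow 1.x sketch (matching the stated implementation framework; the variable names and initializer are illustrative assumptions) shows one plausible realization rather than the authors' exact code:

import tensorflow as tf  # TensorFlow 1.x API, as used for DL-OCT

T = 0.5  # binarization threshold (the red dashed line in Fig. 8)
v = tf.get_variable("trainable_vector", shape=[1280],
                    initializer=tf.random_uniform_initializer(0.0, 1.0))
hard = tf.cast(v > T, tf.float32)        # binary grid used in the forward pass
# Straight-through estimator: the forward pass uses `hard`, while the backward
# pass treats the thresholding as identity so gradients update `v` directly.
mask = v + tf.stop_gradient(hard - v)
grid_2x = tf.constant([1.0, 0.0] * 640)  # regular 2x undersampling grid (640 kept)
optimized_grid = mask * grid_2x          # pointwise product, N_spec < 640
# `optimized_grid` multiplies each raw fringe before zero interpolation and the
# FFT, and is trained jointly with the U-net weights via the same L1 loss.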
Fig. 8: Schematic of the A-line-optimized spectral undersampling method. A continuous, trainable vector was initialized and then binarized by a rounding function with a threshold of T = 0.5. The pointwise multiplication of the binarized vector and a regular 2× undersampling spectral grid forms the A-line-optimized undersampling grid, which was then applied to the raw OCT fringes to generate undersampled fringes with N spec < 640. The converged A-line-optimized undersampling grid after training is shown in Supplementary Fig. S5. Full size image Implementation details The network was implemented using Python version 3.6.0 with the TensorFlow framework version 1.11.0. Network training was performed using a single NVIDIA GeForce RTX 2080Ti GPU (Nvidia Corp., Santa Clara, CA, USA), and testing was performed on a desktop computer with 4 GPUs (NVIDIA GeForce RTX 2080Ti). The dataset used for training contained ~20,000 image pairs (640 A-lines in each image), which were split into training and validation sets at a ratio of 9:1. The training process took about 18 h for 22 epochs. DL-OCT inference times as a function of the batch size are reported in Fig. 5 . Data availability The deep-learning models reported in this work used standard libraries and scripts that are publicly available in TensorFlow. All the data and methods needed to evaluate the conclusions of this work are present in the main text. Additional data can be requested from the corresponding author (A.O.).
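For readers who wish to reproduce the batch-size timing study of Fig. 5, the average per-B-scan inference time can be measured with a small benchmarking harness. The sketch below is framework-agnostic Python; the infer_fn callable, input shape, and warm-up step are our assumptions, and any trained DL-OCT model can be substituted:

import time
import numpy as np

def per_bscan_inference_time(infer_fn, batch_size, n_runs=1000,
                             n_alines=512, depth=640):
    # Random stand-in for undersampled B-scans with real/imaginary channels.
    batch = np.random.randn(batch_size, depth, n_alines, 2).astype(np.float32)
    infer_fn(batch)                          # warm-up (graph build / GPU transfer)
    t0 = time.perf_counter()
    for _ in range(n_runs):
        infer_fn(batch)
    avg_batch_time = (time.perf_counter() - t0) / n_runs
    return avg_batch_time / batch_size       # average inference time per B-scan

# Example with a dummy model; replace with the trained DL-OCT network.
dummy = lambda x: x
for b in (1, 8, 32, 128):
    print(b, per_bscan_inference_time(dummy, b))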
Optical coherence tomography (OCT) is a non-invasive imaging method that can provide 3D information of biological samples. The first generation of OCT systems was based on time-domain imaging, using a mechanical scanning set-up. However, the relatively slow data acquisition speed of these earlier time-domain OCT systems partially limited their use for imaging live specimens. The introduction of spectral-domain OCT techniques with higher sensitivity has contributed to a dramatic increase in imaging speed and quality. OCT is now widely used in diagnostic medicine, for example in ophthalmology, to noninvasively obtain detailed 3D images of the retina and underlying tissue structure. In a new paper published in Light: Science & Applications, a team of UCLA and University of Houston (UH) scientists has developed a deep learning-based OCT image reconstruction method that can successfully generate 3D images of tissue specimens using significantly less spectral data than normally required. Using standard image reconstruction methods employed in OCT, undersampled spectral data, where some of the spectral measurements are omitted, would result in severe spatial artifacts in the reconstructed images, obscuring 3D information and structural details of the sample to be visualized. In their new approach, UCLA and UH researchers trained a neural network using deep learning to rapidly reconstruct 3D images of tissue samples with much less spectral data than normally acquired in a typical OCT system, successfully removing the spatial artifacts observed in standard image reconstruction methods. The efficacy and robustness of this new method were demonstrated by imaging various human and mouse tissue samples using 3-fold less spectral data captured by a state-of-the-art swept-source OCT system. Running on graphics processing units (GPUs), the neural network successfully eliminated severe spatial artifacts due to undersampling and omission of most spectral data points in less than one-thousandth of a second for an OCT image that is composed of 512 depth scans (A-lines). "These results highlight the transformative potential of this neural network-based OCT image reconstruction framework, which can be easily integrated with various spectral domain OCT systems, to improve their 3D imaging speed without sacrificing resolution or signal-to-noise of the reconstructed images," said Dr. Aydogan Ozcan, the Chancellor's Professor of Electrical and Computer Engineering at UCLA and an associate director of the California NanoSystems Institute, who is the senior corresponding author of the work. This research was led by Dr. Ozcan, in collaboration with Dr. Kirill Larin, a Professor of Biomedical Engineering at University of Houston. The other authors of this work are Yijie Zhang, Tairan Liu, Manmohan Singh, Ege Çetintaş, and Yair Rivenson. Dr. Ozcan also has UCLA faculty appointments in bioengineering and surgery, and is an HHMI Professor.
10.1038/s41377-021-00594-7
Physics
NCNR neutrons highlight possible battery candidate
X. Li, X. Ma, D. Su, L. Liu, R. Chisnell, S.P. Ong, H. Chen, A. Toumar, J-C. Idrobo, Y. Lei, J. Bai, F. Wang, J.W. Lynn, Y.S. Lee and G. Ceder. "Direct Visualization of the Jahn-Teller Effect Coupled to Na Ordering in Na5/8MnO2." Nature Materials, DOI: 10.1038/nmat3964, May 18, 2014. Journal information: Nature Materials
http://dx.doi.org/10.1038/nmat3964
https://phys.org/news/2014-05-ncnr-neutrons-highlight-battery-candidate.html
Abstract The cooperative Jahn–Teller effect (CJTE) refers to the correlation of distortions arising from individual Jahn–Teller centres in complex compounds 1 , 2 . The effect usually induces strong coupling between the static or dynamic charge, orbital and magnetic ordering, which has been related to many important phenomena such as colossal magnetoresistance 1 , 3 and superconductivity 1 , 4 . Here we report a Na 5/8 MnO 2 superstructure with a pronounced static CJTE that is coupled to an unusual Na vacancy ordering. We visualize this coupled distortion and Na ordering down to the atomic scale. The Mn planes are periodically distorted by a charge modulation on the Mn stripes, which in turn drives an unusually large displacement of some Na ions through long-ranged Na–O–Mn 3+ –O–Na interactions into a highly distorted octahedral site. At lower temperatures, magnetic order appears, in which Mn atomic stripes with different magnetic couplings are interwoven with each other. Our work demonstrates the strong interaction between alkali ordering, displacement, and electronic and magnetic structure, and underlines the important role that structural details play in determining electronic behaviour. Main NaTMO 2 (TM = 3 d transition metal ions) compounds with alternating Na and TM layers have been studied extensively for their potential application in rechargeable batteries 5 , 6 , 7 or as the parent materials of the superconductive cobaltate 5 , 8 , 9 , 10 . Na can be electrochemically and reversibly removed from these materials, creating Na x TMO 2 (0 < x < 1) compounds in a process called de-intercalation. Superstructures due to Na-vacancy (V Na ) ordering have been observed and identified to be dominated by the electrostatic interactions in Na x VO 2 (ref. 6 ) and Na x CoO 2 (refs 5 , 8 ). However, Na x MnO 2 is expected to be more complicated as it mixes Mn 3+ ions, which exhibit one of the largest Jahn–Teller distortions in transition metal compounds and form antiferromagnetic (AF) Mn 3+ atomic stripes 11 , 12 , 13 , with Mn 4+ ions, which are not Jahn–Teller active and can form ferromagnetic or AF nearest-neighbour couplings, depending on the competition between different direct and indirect exchange mechanisms 14 . As such, Na x MnO 2 is well suited to study the interplay between the V Na ordering, CJTE and the magnetic properties. There have been continuous efforts to directly visualize the CJTE using scanning/transmission electron microscopy 3 , 15 , 16 , 17 (S/TEM). Here, electron diffraction, synchrotron X-ray diffraction (XRD), density functional theory (DFT) and aberration-corrected atomic-resolution STEM imaging are used to determine the superstructure of electrochemically formed Na 5/8 MnO 2 , and to visualize in sodium intercalation compounds the cooperative distortion of the Mn Jahn–Teller centres and their coupling to Na ordering. Rather than the superstructure being dominated by electrostatic interactions, we show here direct experimental evidence from STEM imaging that the superstructure in Na 5/8 MnO 2 is mainly controlled by Jahn–Teller distortions, which induce specific long-ranged Na–V Na interactions through Mn charge and d -orbital orderings. We use neutron powder diffraction, magnetic susceptibility measurements and DFT computations to demonstrate that a ‘magnetic stripe sandwich’ structure is formed at low temperatures, which causes a pronounced change of the magnetic properties.
Electrochemical Na removal from NaMnO 2 is known to occur initially through a two-phase reaction, forming a new phase Na x MnO 2 (ref. 18 ). Figure 1a,b shows the structure of conventional monoclinic NaMnO 2 , which is used to index the electron diffraction patterns of Na x MnO 2 shown in Fig. 1c–e . The formation of a superstructure is clear from the (200), (1–22), (12–2) diffraction spots. In the Z -contrast image shown in Fig. 2b , each dot corresponds to either a Na or Mn atomic stripe projected along the [010] or b direction. The periodic intensity modulation of one bright and three dark dots in the Na plane is proportional to the Na concentration in these stripes. The superstructure hkl peaks and STEM Z -contrast information tightly constrain the possible Na orderings in this compound. To determine a unique order, we performed an exhaustive search of the possible superstructures in the supercells up to 32 formula units for several x values in Na x MnO 2 . The only superstructure that matches all of the electron diffraction patterns and STEM images occurs at x = 5/8. The synchrotron XRD refinement ( Supplementary Fig. 1 and Tables 2 and 3 ) was performed on this particular superstructure model starting with DFT relaxed ion coordinates and shows a good fit. Furthermore, DFT calculations show that this superstructure has the lowest energy among the 300 different Na arrangements that we calculated for x = 5/8, with its energy below the tie line connecting the two lowest energy structures at neighbouring Na compositions, indicating that it is thermodynamically stable ( Supplementary Fig. 2 ). On the basis of the experimental and computational data, we conclude that this Na 5/8 MnO 2 superstructure is the new phase formed in the first voltage plateau when Na is de-intercalated from NaMnO 2 (ref. 18 ). It is worth noting that the Na-vacancy arrangement with the lowest electrostatic energy (labelled as Ewald_0 in Supplementary Fig. 2 ) at x = 5/8 is significantly higher in DFT energy, indicating that the electrostatic interactions do not dominate in this structure. Figure 1: Superstructure hkl spots in electron diffraction patterns show the ordering of Na + in Na x MnO 2 . a , The monoclinic structure (C2/m) of pristine NaMnO 2 with the conventional definition of the lattice parameters. The direction of the Jahn–Teller distortion is marked by the white arrows. The grey arrow shows the [011] direction, and the (200) plane is in green. b , Structure looking along [011]. The green, blue and red lines are the [011] projections of (200), (1–22) and (12–2) planes, respectively. c – e , The experimental electron diffraction patterns along [011], [001] and [010] consistently show the 4-period superstructure diffraction spots corresponding to the (200) planes. The [011] diffraction pattern in c shows additional 2-period (1–22) and (12–2) superstructure diffraction spots. Full size image Figure 2: Atomic-resolution STEM image visualizes CJTE and STEM-EELS shows Mn charge ordering. a , The Mn EELS L 3 /L 2 peak ratio at each Mn [010] atomic column site along the STEM-EELS scanning direction of [100]. The dashed lines show the ratios corresponding to the standard samples of Mn 2 O 3 and MnO 2 . The error bars are determined from the errors introduced in background subtraction and data fluctuation among spectra. b , STEM image along [010] shows the periodic distortion of the Mn ab plane and the intensity modulation of the Na plane that agrees quantitatively with the superstructure model.
c , The 17 Mn L 2,3 -edge EEL spectra after background subtraction that give the L 3 /L 2 ratios shown in a . Full size image We now describe the structure and its magnetic ordering in more detail. The Na layer of Na 5/8 MnO 2 is formed by one full Na atomic stripe parallel to three half-full Na stripes in which Na and V Na alternate, as shown in Figs 2b , 3a and 4a,b from three different zone axes, respectively. We observe Mn charge ordering in both the STEM-EELS (electron energy loss spectroscopy) measurement in Fig. 2 , where the L 3 /L 2 peak ratio is inversely proportional to the Mn charge state, and the DFT calculations in Fig. 3 . There are three types of Mn atomic stripe: a pure Mn 3+ stripe, a pure Mn 4+ stripe and a stripe of alternating Mn 3+ and Mn 4+ ions. As the Mn 3+ O 6 octahedron is Jahn–Teller distorted and the Mn 4+ O 6 is not, the periodic arrangement of these different stripes gives a CJTE, which can be directly visualized by the rippling of the Mn layers in Fig. 2b . It is worth noting that in the DFT-calculated structure Na ions in one of the three half-full Na stripes are displaced along the stripe direction [010] by about 1.4 Å from their normal octahedral site into a new site, consistent with our synchrotron XRD refinement. This stripe is labelled as Na Disp in Figs 2 and 3 and as stripe ‘e’ in Fig. 4 . The local environment of the displaced Na ions (Na Disp ), shown in Fig. 3b , is a highly distorted octahedral site face-sharing with both Mn 4+ O 6 octahedra. Although such face-sharing sites occur in the O1 structure of CoO 2 or NiO 2 when they are fully delithiated 19 , it is unusual to see such a site occupied by an alkali ion, as we find in Na 5/8 MnO 2 . The other two types of Na site in Na 5/8 MnO 2 , Na 2 and Na 3 , are normal slightly distorted octahedral sites, which edge-share with both TM layers. Figure 3c clearly shows that, notwithstanding the unusual site occupation for Na Disp , the stacking is still O3 type 20 . Figure 3: Na 5/8 MnO 2 superstructure shows V Na ordering, Mn charge and magnetic stripe orderings. a , The Na ordering in the ab layer includes Na ions (yellow circles), displaced Na ions (orange circles labelled with Na Disp ) and Na vacancies (open squares). The TM layer charge ordering includes pure Mn 3+ O 6 (purple hexagon) stripes, pure Mn 4+ O 6 (green hexagon) stripes and the mixed-valence Mn stripes. The magnetic spin stripe ordering includes ferrimagnetic stripes (red arrows) and AF stripes (blue arrows). The exchange parameters J ij between different Mn sites provided in Supplementary Table 1 are defined. The different sites labelled by Mn( α – δ ), Na(Disp, 2, 3) and O(1–6) correspond to Supplementary Tables 2 and 3 . b , The local environment of two displaced Na ions face-sharing with the Mn 4+ O 6 octahedra. The two triangles that share faces with one Na site are labelled with thick lines. c , The structure viewed from the b direction shows oxygen O3 stacking. d , The basic unit of the spd hybridization interaction that drives the Na displacement to the distorted octahedral site. The spd hybridized bonds are in red connecting Na, O and Mn 3+ ions. Full size image Figure 4: The Na ordering in Na 5/8 MnO 2 is visualized by STEM ABF and ADF images. a , b , Simultaneously taken STEM-ABF ( a ) and ADF ( b ) images along the [011] zone axis. The region corresponding to the ADF image is marked by the white rectangle in the ABF image.
The ADF image is taken at the optimum defocus condition, which is a defocus value for the ABF image with inverted contrast relative to the optimum focus condition (details in Methods ). c – e , STEM-ABF line scans along the directions marked by the white arrows in a . The label and model of Mn and/or Na show the projected atomic columns at each intensity peak position, corresponding to the superposed model in a , where the white circles are for the oxygen columns. Full size image The Na Disp is directly observed in the STEM measurement in Fig. 4 . The annular bright field (ABF) image is sensitive to light elements such as Na, and can visualize the Na Disp ions between the neighbouring Mn columns, as shown along the stripe labelled ‘e’ in Fig. 4a , as well as in its line scan in Fig. 4e . The Na intensity modulations along the other stripes labelled as ‘c, d’ in the ABF image and their corresponding line scans in Fig. 4c,d , respectively, are also consistent with the x = 5/8 superstructure model. The type of each atomic column in the ABF image was identified by direct comparison with the annular dark field (ADF) image taken simultaneously ( Fig. 4b ) with consistent contrast information. To investigate the role of the Jahn–Teller distortion in the displacement of the Na Disp ions, we replaced all of the Mn ions by non-Jahn–Teller-active Cr ions in the DFT calculation, which reduced the Na displacement to less than 0.5 Å, indicating that the Na shift is largely driven by the Jahn–Teller distortion. The Na and V Na ordering is in fact connected by the cooperative Jahn–Teller effect to the Mn 3+ and Mn 4+ in a remarkable way. In Na 5/8 MnO 2 , Na sites lie at the extensions of all 180° –O–Mn–O– triplets. It has previously been argued in the Li x NiO 2 system that the Jahn–Teller activity of the transition metal creates attractive interactions between the alkali ions sitting at the extensions of these 180° oxygen–metal–oxygen bonds due to the spd hybridization of the alkali- s , O- p and metal- d z 2 orbitals 21 . The inset of Fig. 2b (enlarged in Supplementary Fig. 3a ) shows that in Na 5/8 MnO 2 all of the non-displaced Na ions are in this configuration. In the (202) planes, these 180° Na–O–Mn 3+ –O–Na fragments alternate with non-Jahn–Teller-distorted V Na –O–Mn 4+ –O–V Na configurations, as shown in Supplementary Fig. 3b . The origin of the displacement of the Na Disp ions in the remaining half-full stripe is now clear (see Fig. 3d and Supplementary Fig. 3c ): there are not enough Na ions in this stripe to create only 180° Na–O–Mn 3+ –O–Na configurations. Hence, rather than create V Na –O–Mn 3+ –O–Na configurations, the Na ions relax to the highly distorted octahedral site where they share the symmetric attraction of the two neighbouring Jahn–Teller-distorted –O–Mn 3+ –O–Na configurations. In other words, the Jahn–Teller-distorted 180° configuration of V Na –O–Mn 3+ –O–Na is unstable, producing a long-ranged V Na –Na repulsion through the Jahn–Teller centre. We have also re-examined the previously predicted superstructures in Li x NiO 2 (refs 21 , 22 ), and found no 180° V Li –O–Ni 3+ –O–Li configurations. As many new battery compounds are based on the Mn 4+ /Mn 3+ redox couple, the argument of Jahn–Teller-mediated orbital interactions has general implications for both Na and Li batteries 23 , 24 . One of the interesting consequences of the electronic and alkali ordering in Na 5/8 MnO 2 is that it leads to a new magnetic ordering.
On the basis of neutron powder diffraction, magnetic susceptibility measurements and DFT generalized gradient approximation (GGA) + U calculations, we propose the magnetic-stripe-sandwich structure at low temperature in Fig. 3a . The DFT total energies for different collinear magnetic spin orderings in Na 5/8 MnO 2 in a supercell containing up to 80 formula units were calculated, and mapped onto a spin Hamiltonian, −∑ i < j J ij S i ⋅ S j , to extract the nearest-neighbour and next-nearest-neighbour spin exchange parameters J ij between site i and j , as defined in Fig. 3a . S i and S j are the spin angular momentum operators; details of this method can be found in the literature 12 . The resulting exchange parameters and ground-state magnetic structures in Supplementary Table 1 predict the Mn 3+ stripes to be AF, Mn 4+ stripes to be ferrimagnetic, and mixed valence Mn 3+ /Mn 4+ stripes to be ferrimagnetic with AF nearest-neighbour coupling, at U = 2.47 eV, a U value close to the previously determined one by comparison of the GGA + U DFT simulation of the pristine NaMnO 2 (ref. 12 ) with neutron powder diffraction 11 , 13 . The neutron powder diffraction data of Na 5/8 MnO 2 shown in Fig. 5 confirm the presence of long-range magnetic order at low temperatures. The diffraction pattern taken at T = 2.5 K reveals additional magnetic Bragg peaks compared with the diffraction pattern at T = 100 K (consisting of nuclear Bragg peaks from the crystal structure). The inset in Fig. 5 shows the temperature dependence of the integrated intensity of the lowest angle magnetic Bragg peak, indicating an ordering temperature of around T ~ 60 K that is consistent with the temperature of the upturn in the magnetic susceptibility. The DFT-calculated coupling constant of −61.5 K (negative value corresponds to AF interaction) for the Mn 3+ AF stripe at U = 2.47 eV ( Supplementary Table 1 ) is in reasonable agreement with the temperature scale of the observed ordering. Figure 5: Neutron diffraction and magnetic susceptibility measurement indicates magnetic stripe ordering. Magnetic Bragg peaks labelled by red arrows are seen in the 2.5 K spectrum compared with the 100 K spectrum. The inset shows the integrated intensity of the strongest magnetic peak in the neutron spectrum (left scale) and also the inverse magnetic susceptibility (right scale) versus temperature. The intensity of the neutron scattering calculation scales with | F ( hkl )| 2 /sin( θ )/sin(2 θ ), where F is the structure factor and 2 θ is the scattering angle. Full size image The intensities of the magnetic Bragg peaks can be described by a pattern of magnetic stripes of ordered moments on the Mn sites. The simplest model of ordered moments consistent with the data consists of AF stripes of Mn 3+ with an AF coupling between stripes, as indicated by the blue arrows in Fig. 3a . When only these moments are taken into account, the fits yield an ordered moment of 3.2(4) μ B per Mn 3+ . Other magnetic ordering patterns that have the same periodicity as these AF Mn 3+ stripes would also be consistent with the data. For example, AF stripes of Mn 4+ and mixed Mn 3+ /Mn 4+ coexisting with the AF stripes of Mn 3+ would yield a similar magnetic diffraction pattern, but the average moment would have a smaller value of 2.2(3) μ B per Mn. Further details of the magnetic structure will probably require measurements on single crystals. 
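The mapping of DFT total energies onto the spin Hamiltonian −∑ i<j J ij S i ⋅ S j reduces to a linear least-squares problem: each collinear spin configuration supplies one equation in the unknown reference energy and exchange parameters. A minimal numpy sketch of this fitting step is shown below (the energies and spin-product sums are placeholders, not values from this work):

import numpy as np

# Each row: [1, sum of S_i.S_j over J1 bonds, sum over J2 bonds] for one
# collinear spin configuration; S_i.S_j = +S^2 for parallel, -S^2 for antiparallel.
A = np.array([
    [1.0, -4.0,  2.0],   # placeholder spin-product sums per configuration
    [1.0,  4.0,  2.0],
    [1.0,  0.0, -2.0],
    [1.0, -4.0, -2.0],
])
E = np.array([-10.2, -9.8, -9.9, -10.1])  # placeholder DFT total energies (eV)

# Solve E = E0 - sum_k J_k * P_k in the least-squares sense.
coeff, *_ = np.linalg.lstsq(A, E, rcond=None)
E0, J = coeff[0], -coeff[1:]              # minus sign from the Hamiltonian convention
print(E0, J)                              # negative J corresponds to AF coupling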
Interestingly, the divergence of the field-cooled and zero-field-cooled susceptibility curves around T = 12 K in Supplementary Fig. 4 may indicate the presence of weakly coupled ferrimagnetic components and agrees quantitatively with the calculated nearest-neighbour coupling constant of 11.9 K for the Mn 4+ ferrimagnetic stripes at U = 2.47 eV ( Supplementary Table 1 ). Overall, the general pattern predicted by DFT of AF Mn 3+ stripes interwoven with the Mn 4+ and Mn 4+ /Mn 3+ stripes is fully supported by the experiments. This new magnetic-stripe-sandwich structure may have potential application in magnetic storage or spin electronics 25 as the one-dimensional analogue of the sandwich structure of two-dimensional magnetic thin films, from which abundant magnetic phenomena have been engineered 25 . It is also worth noting that the dynamic version of the hole-segregated magnetic stripes has been proposed to be important for high-temperature cuprate superconductors 26 . We have found Na 5/8 MnO 2 to be a model system for visualizing the complex interactions between Na ion ordering, charge ordering, magnetic ordering and cooperative Jahn–Teller distortions. We find that, in contrast to other alkali-vacancy systems, the Na ordering in Na x MnO 2 is controlled by the underlying combination of electrostatic and electronic structure interactions through the Jahn–Teller effect, which enables some Na to occupy the highly distorted octahedral site. This leads to Na and Mn charge-ordered stripes, which in turn allows a fascinating low-temperature magnetic ordering to develop. DFT and experimental observations are in excellent agreement, providing confidence in the proposed ground state and in the explanation for the physical origin of its stability. The understanding of the CJTE here may have general implications for understanding complex compounds where the Jahn–Teller effect is prominent, including intercalation energy storage materials and high-temperature superconductors, and illustrates the fascinating physics of mixed Mn valence systems. Methods Synthesis. The pristine NaMnO 2 powder was synthesized by solid-state reaction, and the electrochemical cells were configured on the basis of our previous publication 18 . The Na 5/8 MnO 2 cathode films were obtained by charging to the end of the first electrochemical plateau, either by C/200 galvanostatic charge in a home-made in situ XRD cell to monitor the depletion of the pristine phase, or by potentiostatic intermittent titration technique charge with 10 mV steps up to 2.685 V in a Swagelok cell. The charged batteries were disassembled in a glove box, and the cathode films were dried for further characterization. The Na 5/8 MnO 2 powder for the neutron diffraction was obtained by chemical de-intercalation of pristine NaMnO 2 powder in an iodine acetonitrile solution. TEM. TEM samples were made by sonication of the charged cathode films in anhydrous dimethyl carbonate inside a glove box, and sealed in airtight bottles before immediate transfer into the TEM column. The electron diffraction patterns were taken on the JEOL 2010F at MIT. The STEM-ABF/ADF and EELS line scan were taken on the Cs-corrected cold field-emission Hitachi HD 2700C at 200 kV at Brookhaven National Laboratory (BNL). The STEM images were obtained using a 1 Å scanning probe with a 28 mrad semi-convergence angle, with semi-collection angles of 10–22 mrad and 53–280 mrad for the ABF and ADF detectors, respectively.
The STEM ABF and ADF images were taken simultaneously at the optimal defocus value of the ADF imaging condition, which was more defocused than the optimal ABF imaging condition on this instrument. Thus, the contrast in the ABF image is reversed, with the bright area corresponding to the atomic positions 27 . The line-scanned EELS were collected with a 1.4 Å probe at 60 pA probe current using the Gatan Enfina ER spectrometer with a semi-collection angle of 20 mrad at 0.3 eV per channel, 1.4 Å scanning interval and 1.2 s collection time per spectrum. The Mn L 2,3 edges were fitted by the Gaussian model after a power-law background subtraction. XRD. The charged cathode film was sealed with silicone tape for the collection of the synchrotron XRD pattern on beam line X14A at NSLS at BNL with a wavelength of 0.7788 Å. The XRD refinement was done with the GSAS software, with the background estimated by a shifted Chebyshev function and the peak profile described by the Finger, Cox and Jephcoat function. The preferential orientation was set for the first (004) peak. The Na 5/8 MnO 2 powder was sealed in the capillary for the synchrotron XRD measurement on X14A at NSLS, which confirmed the same superstructure phase as the electrochemically de-intercalated cathode film. Neutron diffraction and magnetic susceptibility. Neutron diffraction measurements were performed on 5 g of chemically de-intercalated Na 5/8 MnO 2 powder using the triple-axis spectrometer BT-7 at the NIST Center for Neutron Research 28 . Measurements were taken in two-axis mode with a fixed initial neutron energy of 14.7 meV (wavelength 2.359 Å), collimator configuration open-80’-sample-80’ and a radial-position sensitive detector. Magnetic Bragg peaks were observed when the sample was cooled below T ~ 60 K. The intensities of the observed magnetic peaks are consistent with a pattern of AF stripes, as discussed in the text. The magnetic structure factors were calculated assuming the spins were collinear and pointing in the direction of the Jahn–Teller-distorted axis of the oxygen octahedra. The form factor was assumed to be that of the free Mn 3+ ion 29 . The size of the ordered moment can be obtained by comparing the structure factors for the four intense structural peaks in the range 48° < 2Θ < 63° to the integrated intensities of the measured peaks. The structure factor for the structural peaks was calculated assuming the structure obtained from the XRD refinement. The magnetic susceptibility measurements were obtained using a Quantum Design MPMS-XL SQUID. DFT. All DFT calculations in this work were performed using the Vienna Ab initio Simulation Package within the projector augmented-wave approach using the Perdew–Burke–Ernzerhof GGA functional and the GGA+ U extension to it. A plane-wave energy cutoff of 520 eV and a k -point density of at least 1,000/(number of atoms in the unit cell) were used to ensure that all calculations were converged to within 1 meV atom −1 . All calculations were spin-polarized and started from a high-spin configuration. A U value of 3.9 eV was used for Mn for the structure relaxation, in line with previous literature 30 . The phase diagram construction and analysis in Supplementary Fig. 2 were performed using the Python Materials Genomics (pymatgen) library 30 . For the extraction of spin exchange parameters, the magnetic structures were calculated in supercells with up to 80 formula units, with different U values ranging from 0 eV to 3.9 eV, and were converged to 0.01 meV per supercell.
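For readers wishing to set up comparable calculations, inputs consistent with the stated settings can be assembled with pymatgen. In the sketch below the structure file name is hypothetical and the INCAR tags are an illustrative subset (the Dudarev LDAUTYPE = 2 scheme and the species ordering are our assumptions, not details confirmed by the text):

from pymatgen.core import Structure
from pymatgen.io.vasp.inputs import Incar, Kpoints

structure = Structure.from_file("Na0.625MnO2_superstructure.cif")  # hypothetical file

# k-point mesh with >= 1,000 k-points per reciprocal atom, as stated above.
kpoints = Kpoints.automatic_density(structure, kppa=1000)

incar = Incar({
    "ENCUT": 520,          # plane-wave cutoff (eV)
    "ISPIN": 2,            # spin-polarized, started from a high-spin configuration
    "LDAU": True,          # GGA+U
    "LDAUTYPE": 2,         # Dudarev effective-U scheme; an assumption here
    "LDAUL": [-1, 2, -1],  # apply U to Mn d states (species order: Na, Mn, O)
    "LDAUU": [0, 3.9, 0],  # U = 3.9 eV on Mn for the structure relaxation
    "LDAUJ": [0, 0, 0],
})
kpoints.write_file("KPOINTS")
incar.write_file("INCAR")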
Analysis of a manganese-based crystal by scientists at the National Institute of Standards and Technology (NIST) and the Massachusetts Institute of Technology (MIT) has produced the first clear picture of its molecular structure. The findings could help explain the magnetic and electronic behavior of the whole family of crystals, many of which have potential for use in batteries. The family of crystals it belongs to has no formal name, but it has three branches, each of which is built around manganese, cobalt or iron—transition metals that can have different magnetic and charge properties. But regardless of family branch, its members share a common characteristic: They all store chemical energy in the form of sodium, atoms of which can easily flow into and out of the layers of the crystal when electric current is applied, a talent potentially useful in rechargeable batteries. Other members of this family can do a lot of things in addition to energy storage that interest manufacturers: Some are low-temperature superconductors, while others can convert heat into electricity. The trouble is that all of them are, on the molecular level, messy. Their structures are so convoluted that scientists can't easily figure out why they do what they do, making it hard for a manufacturer to improve their performance. Fortunately, this particular manganese crystal is an exception. "It's the one stable compound we know of in the manganese branch that has a perfect crystal lattice structure," says Jeff Lynn of the NIST Center for Neutron Research (NCNR). "That perfection means we can isolate all its internal electronic and magnetic interactions and see them clearly. So now, we can start exploring how to make those sodium atoms more movable." Team members from MIT made the material and performed analysis using state-of-the-art lab techniques such as electron microscopy, but they needed help from the NCNR's neutron beams to tease out the interactions between its individual atoms. The effort showed that the crystal was unusual for reasons beyond its structural perfection. Its layers absorb sodium in a fashion rarely seen in nature: In each layer, one "stripe" of atoms fills up completely with sodium, then the next three stripes fill up only halfway before another full stripe appears. Lynn says the pattern is caused by different charges and magnetic moments that manganese atoms possess in different parts of the crystal, a feature revealed by analysis of the NCNR data. "This particular crystal is probably not the one you'd use in a battery or some other application, it just permits us to understand what's happening with its internal structure and magnetism for the first time," Lynn says. "Now we have a basis for tailoring the properties of these materials by changing up the transition metals and changing the sodium content. We no longer have to hunt around in the dark and hope."
10.1038/nmat3964
Medicine
Scientists investigate a powerful protein behind antibody development
Bcl-6 is the nexus transcription factor of T follicular helper cells via repressor-of-repressor circuits, Nature Immunology (2020). DOI: 10.1038/s41590-020-0706-5 , www.nature.com/articles/s41590-020-0706-5 Journal information: Nature Immunology
http://dx.doi.org/10.1038/s41590-020-0706-5
https://medicalxpress.com/news/2020-06-scientists-powerful-protein-antibody.html
Abstract T follicular helper (T FH ) cells are a distinct type of CD4 + T cells that are essential for most antibody and B lymphocyte responses. T FH cell regulation and dysregulation are involved in a range of diseases. Bcl-6 is the lineage-defining transcription factor of T FH cells and its activity is essential for T FH cell differentiation and function. However, how Bcl-6 controls T FH biology has largely remained unclear, at least in part due to the intrinsic challenges of connecting repressors to gene upregulation in complex cell types with multiple possible differentiation fates. Here, multiple competing models were tested by a series of experimental approaches, which determined that Bcl-6 exhibits negative autoregulation and controls pleiotropic attributes of T FH differentiation and function, including migration, costimulation, inhibitory receptors and cytokines, via multiple repressor-of-repressor gene circuits. Main The formation of germinal centers (GCs) is essential for the development of high-affinity memory B cells and antibody-secreting long-lived plasma cells in response to pathogen infections or vaccinations 1 . Follicular helper T cells (T FH ) provide key signals to antigen-specific B cells for the development of germinal center B (B GC ) cells 1 , 2 . CD4 + T cells receiving T FH inductive signals upregulate Bcl-6, the lineage-defining transcription factor (TF) of T FH cells 3 , 4 , 5 . Upregulation of Bcl-6 is associated with expression of the chemokine receptor CXCR5 and reduction of CCR7 and PSGL1, among other molecules, allowing for migration to the T cell-B cell (T-B) border and GCs 1 , the sites at which T FH and then GC-T FH cells interact with antigen-specific B cells. T FH and GC-T FH cells express many surface and secreted molecules that serve as positive markers and contribute to the differentiation (ICOS, IL-6Rα and PD-1), migration (CXCR5 and CD69), and function (IL-21, IL-4, CXCL13, SAP, ICOS, PD-1, CD200 and CD40L) of T FH and GC-T FH cells. GC-T FH cells provide IL-21, IL-4 and CD40L that are required for B GC cell survival, proliferation and somatic hypermutation 1 , 2 , 6 . Bcl-6 function is critical in T FH differentiation 3 , 4 , 5 . Multiple TFs in addition to Bcl-6 have been identified that regulate T FH differentiation 2 , 7 , 8 , 9 , 10 , 11 , 12 , 13 , 14 , 15 , 16 , 17 , 18 . Inhibition of Blimp-1 (encoded by Prdm1 ) by Bcl-6 is required for T FH differentiation 3 . Tcf-1 and Lef-1 are involved in early induction of Bcl-6 and repression of Blimp-1 (refs. 19 , 20 , 21 ). Downregulation of Id2 leads to the release of E protein TFs such as E2A and Ascl2 (refs. 22 , 23 ), which is important for CXCR5 expression. Whereas the importance of Bcl-6 in T FH cell development is clear, the way in which Bcl-6 controls T FH cell biology is still unclear. Two studies using Bcl-6 chromatin immunoprecipitation with sequencing (ChIP–seq) in human GC-T FH cells and murine T FH cells provided insights into Bcl-6-bound genes 24 , 25 , but functional roles have remained largely untested, and there is no consensus on a mechanistic model of how Bcl-6 regulates T FH cell biology. Bcl-6 and Blimp-1 are reciprocally antagonistic regulators of each other's genetic loci 3 . That interaction provides a powerful mechanism for a genetic switch in cell differentiation, as coexpression of Bcl-6 and Blimp-1 is a metastable state 26 .
However, from an experimentalist perspective, their mutual antagonism confounds experimental designs to probe Bcl-6 (and Blimp-1) functions in CD4 + T cells. Additionally, the putative nature of Bcl-6 as a repressor in CD4 + T cells adds an extra layer of complexity to understanding gene regulation, as many signature T FH genes are upregulated in the presence of Bcl-6. In this study, we took a first-principles-based approach to define and test hypothetical models of how Bcl-6 may control T FH biology. Results T FH differentiation is not a default pathway One proposed model of T FH differentiation is that T FH differentiation is the default pathway for naive CD4 + T cells activated by antigen presenting cells. In this model, the primary role of Bcl-6 would be to inhibit Blimp-1 to allow activated CD4 + T cells to undergo a default T FH differentiation pathway 27 (Extended Data Fig. 1a,b ). We tested this model by using Bcl6 f/f Prdm1 f/f Cre CD4 mice. If T FH differentiation is a default setting in activated CD4 + T cells, then when Blimp-1 is absent Bcl-6 would not be required. CD45.1 + SMARTA cells from Bcl6 f/f Prdm1 f/f Cre CD4 mice were transferred into C57BL/6J (B6) mice, as were SMARTA cells from wild-type (WT), Bcl6 f/f Cre CD4 , or Prdm1 f/f Cre CD4 mice. Host mice were immunized with keyhole limpet hemocyanin (KLH) conjugated with lymphocytic choriomeningitis virus (LCMV) glycoprotein 61–80 peptide (KLH–gp 61 ) in alum + cGAMP adjuvant (Fig. 1a and Extended Data Fig. 1c,d ). WT SMARTA cells differentiated into non-T FH (CXCR5 lo SLAM hi ), T FH (CXCR5 + SLAM lo or CXCR5 + PSGL1 int/lo ) and GC-T FH cells (CXCR5 hi PSGL1 lo or CXCR5 hi PD-1 hi ) after KLH–gp 61 immunization (Fig. 1b ). Prdm1 f/f Cre CD4 SMARTA cells predominantly differentiated into T FH and GC-T FH cells. Bcl6 f/f Cre CD4 CD4 + T cells did not differentiate into T FH cells 3 , 28 . Notably, Bcl6 f/f Prdm1 f/f Cre CD4 CD4 + T cells failed to differentiate into T FH and GC-T FH cells. Similar results were observed in the context of an acute viral infection ( Supplementary Note ; Extended Data Fig. 1e–g ). Adoptive transfer of Bcl6 f/f Prdm1 f/f Cre CD4 SMARTA cells demonstrated that the T FH differentiation defect was antigen-specific and CD4 + T cell-intrinsic 29 . Signature T FH surface markers were examined to determine whether Bcl6 f/f Prdm1 f/f Cre CD4 CD4 + T cells became bona fide T FH cells in response to acute LCMV infection. Expression of T FH signature surface proteins was dysregulated (Fig. 1c ), which indicates that Bcl-6 has important functions in gene regulation beyond repression of Blimp-1 that are necessary for T FH differentiation in both immunization and viral infection contexts. Fig. 1: T FH differentiation is not the default pathway. a , Schematic of the SMARTA cell transfer system used for KLH–gp 61 immunization. SMARTA CD4 + T cells from WT, Bcl6 f/f Cre CD4 , Prdm1 f/f Cre CD4 and Bcl6 f/f Prdm1 f/f Cre CD4 mice were transferred to B6 host mice, which were then immunized with KLH–gp 61 in alum + cGAMP adjuvant, and analyzed 8 d later. (Results are shown in b and in Extended Data Fig. 1c,d ). b , Representative flow cytometry of GC-T FH , T FH and non-T FH SMARTA cell subsets from draining LNs (dLNs) of KLH–gp 61 -immunized mice. Numbers in flow cytometry plots indicate percent cells throughout. Three independent experiments were performed; each dot represents one mouse ( n = 4). Data are shown as the mean ± s.d. and were analyzed by an unpaired two-tailed Student’s t -test. 
c , SMARTA CD4 + T cells from WT, Bcl6 f/f Cre CD4 , Prdm1 f/f Cre CD4 and Bcl6 f/f Prdm1 f/f Cre CD4 mice were transferred to B6 host mice, which were infected with LCMV Arm , and analyzed 7 d later (only WT and Bcl6 f/f Cre CD4 results are shown). The relative protein expression of GC-T FH core signature markers was gated on CXCR5 + T FH cells. The geometric mean fluorescence intensity (gMFI) value of each gene was normalized to that of the WT cells. Three independent experiments were performed; each dot represents one mouse ( n = 4). Data are shown as the mean ± s.d. and were analyzed by an unpaired two-tailed Student’s t -test (Extended Data Fig. 1e ). d , Schematic of the SMARTA cell transfer system used for LCMV Arm infection or KLH–gp 61 immunization. SMARTA CD4 + T cells from WT, Bcl6 f/f Cre CD4 , Prdm1 f/f Cre CD4 and Bcl6 f/f Prdm1 f/f Cre CD4 mice were transferred to Bcl6 f/f Prdm1 f/f Cre CD4 host mice, which were infected with LCMV Arm or immunized with KLH–gp 61 in alum + cGAMP adjuvant, and analyzed 8 d later. (Results are shown in e–i and in Extended Data Fig. 1h,i ). e , Representative flow cytometry gate of B GC cells from the spleens of LCMV Arm -infected mice. Three independent experiments were performed; each dot represents one mouse ( n = 4). Data are shown as the mean ± s.d. and were analyzed by an unpaired two-tailed Student’s t -test. f , Representative flow cytometry of B GC cells and B PC cells from dLNs of KLH–gp 61 -immunized mice. Two independent experiments were performed; each dot represents one mouse ( n = 4). Data are shown as the mean ± s.d. and were analyzed by an unpaired two-tailed Student’s t -test. g , Histology of dLNs at day 8 after KLH–gp 61 -immunization in f . Magnified images from Extended Data Fig. 1g are shown. Blue, TCRβ; red, GL7; green, IgD; white, CD45.1 SMARTA. SMARTA cells are presented with large dots for clarity. Scale bar, 200 μm. h , Quantification of results shown in i . The total number of counted SMARTA cells is indicated. i , Serum antigen-specific IgG endpoint titers at day 8 after KLH–gp 61 immunization in f . Two independent experiments were performed; each dot represents one mouse ( n = 4). Data are shown as the mean ± s.d. and were analyzed by an unpaired two-tailed Student’s t -test. Full size image To assess the migration and function of Bcl6 f/f Prdm1 f/f Cre CD4 SMARTA cells, we transferred SMARTA cells into Bcl6 f/f Cre CD4 mice, followed by infection of the host mice with LCMV Armstrong strain (LCMV Arm ) (Fig. 1d ). Bcl6 f/f Cre CD4 mice were used as recipient mice to eliminate endogenous T FH help to B cells. Bcl6 f/f Cre CD4 mice that received Bcl6 f/f Cre CD4 SMARTA cells did not generate B GC cells (FAS + PNA + ) in response to LCMV Arm infection. Notably, mice that received Bcl6 f/f Prdm1 f/f Cre CD4 SMARTA cells failed to generate B GC cells, in contrast to mice that received WT or Prdm1 f/f Cre CD4 SMARTA cells (Fig. 1e ). In a similar manner, B GC and plasma cell (B PC ; IgD lo CD138 hi ) responses were negligible in mice that received either Bcl6 f/f Prdm1 f/f Cre CD4 or Bcl6 f/f Cre CD4 SMARTA cells in response to KLH–gp 61 immunization (Fig. 1d,f ). Approximately 50% of WT SMARTA cells migrated into B cell follicles and GCs (Fig. 1g-h and Extended Data Fig. 1h,i ). Bcl6 f/f Cre CD4 SMARTA cells were mostly excluded from the B cell follicle 3 . 
Bcl6 f/f Prdm1 f/f Cre CD4 SMARTA cells did not generate any histologically observable GCs and exhibited a migration pattern indistinguishable from that of Bcl6 f/f Cre CD4 SMARTA cells. IgG titers were significantly decreased in mice that received Bcl6 f/f Prdm1 f/f Cre CD4 or Bcl6 f/f Cre CD4 SMARTA cells (Fig. 1i ) in comparison to mice that received WT cells. Altogether, we conclude that differentiation into T FH cells is not the default pathway of activated CD4 + T cells, and that Bcl-6 has important activities beyond the inhibition of Prdm1 for instructing functional GC-T FH and GC development. Bcl-6 is an autoregulatory repressor in CD4 + T cells In B cells, Bcl-6 is generally considered to be an obligate repressor of transcription, but Bcl-6 mechanisms of action have been controversial in CD4 + T cells. Whereas Bcl-6 expression positively correlates with expression of many genes in T FH cells, including genes with Bcl-6 binding sites 24 , 25 , a mechanistic connection between Bcl-6 binding and gene regulation has been lacking. One example target gene of interest is Bcl6 itself. Bcl-6 binds to its own promoter in human and mouse GC-T FH cells 24 , 25 . This Bcl-6 binding-site ( Bcl6 promoter site 1; BPS1) sequence is conserved among mammals (Extended Data Fig. 2a ). Given that Bcl-6 expression positively correlates with T FH differentiation, Bcl-6 has been considered to be a plausible candidate for positive regulation by Bcl-6. By contrast, there is evidence in B cell tumor lines that BCL-6 exhibits negative autoregulation 30 . To test whether Bcl-6 acts as a repressor or an activator of its own expression in CD4 + T cells, we first used a self-inactivating (SIN) retroviral vector (RV) to measure Bcl6 promoter activity (Fig. 2a and Extended Data Fig. 2b ). SMARTA cells were transduced with WT Thy1.1-RV (an RV construct containing the proximal Bcl6 promoter upstream of a Thy1.1 reporter) or with ΔBPS1 Thy1.1-RV (a mutated Bcl6 promoter construct with an 8-nt deletion mutation) and transferred to recipient mice, and Bcl6 promoter activity was analyzed in T FH and T H 1 cells after acute LCMV infection (Extended Data Fig. 2c,d ). WT Bcl6 promoter activity (measured by Thy1.1 expression) was reduced in T FH cells compared to T H 1 cells. ΔBPS1 Bcl6 promoter activity was increased in T FH cells in comparison to that of the WT Bcl6 promoter (Fig. 2b ). Thus, Bcl-6 appears to repress Bcl6 promoter activity in T FH cells by binding to the BPS1 locus. Fig. 2: Bcl-6 exhibits direct negative autoregulatory feedback. a , Schematic diagram of the Bcl6 promoter RV plasmid. The WT Thy1.1-RV or ΔBPS1 Thy1.1-RV Bcl6 promoter constructs were generated based on the pQdT SIN plasmid vector. b , Representative flow cytometry and quantification of flow cytometry gate of Thy1.1-reporter-positive cells, gated on CXCR5 + T FH or CXCR5 lo T H 1 cells from the spleens of B6 host mice that were given SMARTA CD4 + T cells transduced with WT Thy1.1-RV or ΔBPS1 Thy1.1-RV, then infected with LCMV Arm and analyzed 7 d after infection. Numbers in flow cytometry plots indicate percent cells throughout. Three independent experiments were performed; each dot represents one mouse ( n = 4). Data are shown as the mean ± s.d. and were analyzed by an unpaired two-tailed Student’s t -test (Extended Data Fig. 2c,d ). c , d , Phenotyping of WT and ΔBPS1 SMARTA cells from B6 host mice that were given WT or ΔBPS1 SMARTA CD4 + T cells, then infected with LCMV Arm , and analyzed 7 d after infection. 
Representative flow cytometry of T FH and GC-T FH SMARTA cell subsets from the spleens of LCMV Arm -infected mice. Two independent experiments were performed; each dot represents one mouse ( n = 4). Data are shown as the mean ± s.d. and were analyzed by an unpaired two-tailed Student’s t -test. See Extended Data Fig. 2g,h for experimental scheme and quantification of gene expression level, respectively. e , SMARTA CD4 + T cells transduced with sh Cd8 -RV or sh Ncor1 -RV were transferred to B6 host mice, which were then infected with LCMV Arm and analyzed 7 d after infection. Representative flow cytometry of GC-T FH RV + SMARTA cell subsets from the spleens of LCMV Arm -infected mice. Two independent experiments were performed; each dot represents one mouse ( n = 4). Data are shown as the mean ± s.d. and were analyzed by an unpaired two-tailed Student’s t -test. Full size image To test whether Bcl-6 binding to the endogenous Bcl6 promoter affects Bcl-6 expression in vivo, we generated a new CRISPR mouse line possessing the 8-nt ΔBPS1 mutation. ΔBPS1 or littermate control (WT) SMARTA cells were transferred into B6 mice and then host mice were infected with LCMV Arm (Extended Data Fig. 2e,f ). ΔBPS1 SMARTA cells had highly increased Bcl-6 expression only in T FH cells and not in T H 1 cells, which indicates that BPS1 acts as a cis -regulatory element of Bcl6 expression only when cells express elevated Bcl-6. Deletion of BPS1 increased the frequency of T FH cells (CXCR5 hi SLAM lo ) and GC-T FH cells (CXCR5 hi PSGL1 lo or CXCR5 hi Bcl-6 hi ; Fig. 2c,d ). Changes in T FH -associated proteins were observed specifically in T FH cells ( Supplementary Note and Extended Data Fig. 2g ). Ncor1 is a Bcl-6 corepressor 31 . BCL-6 binding at the BCL6 promoter locus overlapped with NCOR binding in a human B cell line (Extended Data Fig. 2h ). To determine whether the Bcl-6 autoregulation in CD4 + T cells involved Ncor1, we transferred SMARTA cells expressing a retroviral microRNA-adapted short hairpin RNA targeting Ncor1 (sh Ncor1 -RV) or a negative control (sh Cd8 ) into B6 mice, which were then exposed to an acute LCMV Arm infection. sh Ncor1 + SMARTA cells exhibited enhanced GC-T FH cell development and Bcl-6 expression (Fig. 2e ). In summary, these data indicate that Bcl-6 represses its own expression in CD4 + T cells, mediated in conjunction with corepressor Ncor1, in a negative autoregulatory feedback loop at the Bcl6 promoter, dampening T FH and GC-T FH cell accumulation. Simple circuitry repressor-of-repressors model of Bcl-6 The findings above excluded the simplest model of Bcl-6 function, namely that T FH differentiation is a default pathway enabled solely by repression of Blimp-1. A logical model for how Bcl-6 functions as the lineage-defining TF of T FH biology is that Bcl-6 instructs positive T FH gene expression by a repressor-of-repressors mechanism. A simple circuitry model can be proposed (Fig. 3a ) wherein Bcl-6 inhibits a set of repressor TFs (‘Bcl6-r’ TFs, directly inhibited by Bcl-6) that in turn repress genes that are positively associated with T FH biology (‘Bcl6-rr’ genes, inhibited by repressor TFs targeted by Bcl-6). Alternative cell fates (that is, non-T FH or T H 1/T H 2/T H 17/iT REG ) and genes downregulated as part of T FH cell migration or function (for example, Selplg , which encodes PSGL1) may be downregulated by Bcl-6 directly in this simple gene circuitry model. It is difficult to test this model because of the mutually antagonistic relationship of Bcl-6 and Blimp-1.
We reasoned that CD4 + T cells that are deficient in both Bcl-6 and Blimp-1 would be needed to gain insights into the TFs that are directly regulated by Bcl-6. Fig. 3: A testable simple circuitry model of T FH differentiation. a , A hypothetical model of the regulation of non-T FH and T FH genes by Bcl-6. Bcl6-r, genes repressed by Bcl-6; Bcl6-rr, genes repressed by repressors that are repressed by Bcl-6. b , Schematic of the SMARTA cell transfer system used for RNA-seq analysis. T H 1 (CXCR5 lo SLAM hi ) populations from WT and Bcl6 f/f Cre CD4 SMARTA cells, T FH (CXCR5 hi SLAM lo ) populations from WT and Prdm1 f/f Cre CD4 SMARTA cells, and T H 1-like (CXCR5 lo SLAM int ) and T FH -like (CXCR5 + SLAM int ) populations from Bcl6 f/f Prdm1 f/f Cre CD4 SMARTA cells were sorted from the spleens of B6 host mice, which were given SMARTA CD4 + T cells from WT, Bcl6 f/f Cre CD4 , Prdm1 f/f Cre CD4 and Bcl6 f/f Prdm1 f/f Cre CD4 mice, then infected with LCMV Arm , and analyzed 7 d later. Naive SMARTA cells were isolated as CD44 lo CD62L hi CD45.1 + from uninfected mice. Representative flow cytometry of T FH , T H 1, T FH -like and T H 1-like subsets from three independent experiments. c , Upper: scatter plot of genes upregulated (red) or downregulated (blue) in T FH cells relative to their expression in T H 1 cells (1.4-fold cut off, adjusted P < 0.05). Lower: volcano plots of gene expression changes between WT T FH cells and Bcl6 f/f Prdm1 f/f T FH -like cells (horizontal axis) against adjusted P value (vertical axis). Numbers indicate the total number and percentage of genes upregulated in WT T FH cells (top left) or Bcl6 f/f Prdm1 f/f T FH -like cells (top right). T H 1 cell–associated genes, upregulated in WT T H 1 cells versus WT T FH cells (WT T H 1 > WT T FH ); T FH cell–associated genes, upregulated in WT T FH cells versus WT T H 1 cells (WT T FH > WT T H 1). Adjusted P values for multiple test correction were determined using the Benjamini–Hochberg algorithm. d , Gene expression changes were clustered by MAP-DP analysis. Scale, row z -score. Double knockout (DKO) represents Bcl6 f/f Prdm1 f/f Cre CD4 . e , Left: a hypothetical model of Bcl-6 regulation of Bcl6-rr T FH + genes in cluster 4 by inhibition of Bcl6-r TFs in cluster 1. Right: a hypothetical model of Blimp-1 regulation of Blimp-1-rr T H 1 + genes in cluster 2 by inhibition of Blimp-1-r TFs in cluster 3. f , Multiple GSEA for the identification and comparison of gene signatures in CD4 + T cells between subpopulations. Blue indicates a negative association and red indicates a positive association. Circle size is proportional to NES (scale, 1.5–3.0). Tint indicates adjusted P values that were corrected using the Benjamini–Hochberg algorithm. g , GSEA of BCL-6-bound genes from human tonsillar GC-T FH cells 24 compared to cluster 1 genes (left) or cluster 4 genes (right) differentially expressed between Bcl6 f/f Prdm1 f/f Cre CD4 T FH -like cells and Prdm1 f/f Cre CD4 T FH cells. The ticks below the line correspond to the rank of each gene, which is defined by the P value of the gene expression change between Bcl6 f/f Prdm1 f/f Cre CD4 T FH -like cells and Prdm1 f/f Cre CD4 T FH cells. NES, normalized enrichment score; FDR, false discovery rate. h , GSEA of Blimp-1-bound genes from activated CD8 T cells 42 in comparison to cluster 3 genes that were differentially expressed between Bcl6 f/f Prdm1 f/f Cre CD4 T H 1-like cells and Bcl6 f/f Cre CD4 T H 1 cells.
Full size image To identify the putative set of Bcl6-r TFs, we conducted RNA sequencing (RNA-seq) gene expression profiling of Bcl6 f/f Prdm1 f/f Cre CD4 , Bcl6 f/f Cre CD4 , Prdm1 f/f Cre CD4 , and WT SMARTA T FH and T H 1 cells generated in response to acute LCMV Arm infection or KLH–gp 61 immunization (Fig. 3b and Extended Data Fig. 3a ). As a first analysis of the effect of Bcl6 / Prdm1 double deficiency on T FH biology, we assessed the expression of a broadly curated set 19 , 22 , 25 of T FH -associated genes across all samples from the RNA-seq gene expression profiling. Bcl6 f/f Prdm1 f/f Cre CD4 T FH -like cells lost expression of positively T FH -associated genes in comparison to WT T FH cells or Prdm1 f/f Cre CD4 T FH cells. Conversely, Bcl6 f/f Prdm1 f/f Cre CD4 T H 1-like cells had a gene expression profile different from that of WT T H 1 or Bcl6 f/f Cre CD4 T H 1 cells (Extended Data Fig. 3b ). Principal component analysis provided similar findings (Extended Data Fig. 3c ), which supports the overall hypothesis that T FH is not a default differentiation pathway of CD4 + T cells and that Bcl-6 has important activities beyond the inhibition of Prdm1 . We next characterized the effect of Bcl6 / Prdm1 double deficiency on the expression of all T FH - and T H 1-associated genes (Extended Data Fig. 3b,c ). Bcl6 f/f Prdm1 f/f Cre CD4 T FH -like cells had reduced expression of ~88% of genes that were upregulated in WT T FH cells (Fig. 3c ). These data indicate that the vast majority of genes that were upregulated in T FH cells required Bcl-6 for proper induction. Furthermore, Bcl6 f/f Prdm1 f/f Cre CD4 T FH -like cells had increased expression of ~90% of genes that were upregulated in WT T H 1 cells, which suggests that Bcl-6 was required to properly repress most T H 1-associated genes. Taken together, these results suggest that Bcl-6 is broadly important for both the induction of T FH genes and the repression of non-T FH genes, consistent with, and expanding upon, previous observations 3 , 4 , 5 , 24 , 29 . To identify genes that were likely to be directly repressed by Bcl-6, we analyzed patterns of gene expression changes between the six different T FH and T H 1 populations and naive CD4 + T cells. Both k -means analysis ( k = 10) and hierarchical clustering bioinformatic approaches readily separated four distinct major gene expression patterns (Extended Data Fig. 3d,e ). To obtain gene lists associated with those four major cluster patterns, maximum a posteriori Dirichlet process mixtures (MAP-DP) clustering was performed (Fig. 3d ). We then attempted to apply our simple circuitry model of Bcl-6 function to the MAP-DP clustering outcomes. Given that the data sets also include modulation of Blimp-1 expression, and that Blimp-1 has major effects on T H 1 versus T FH differentiation, we posited that a similar circuitry model of Blimp-1-mediated gene regulation may need to be included (Fig. 3e ). We therefore assessed whether the four major gene expression patterns that are regulated in T FH and T H 1 cells could be largely accounted for by this simple circuitry model of Bcl-6 and Blimp-1 functioning as repressors. The model predicts that T FH upregulated genes (Bcl6-rr) are upregulated via a Bcl-6 repressor-of-repressors mechanism (Fig. 3e ). If the model were accurate, these Bcl6-rr genes would correspond to cluster 4, as the expression of such genes would be reduced in Bcl6 f/f Prdm1 f/f Cre CD4 T FH -like cells compared to Prdm1 f/f Cre CD4 T FH cells (Fig. 3d,e ).
Indeed, many genes that were upregulated in WT T FH cells, and are important for the differentiation and function of T FH cells, were observed in cluster 4, including Cxcr5 , Icos , Cd200 , Pdcd1 , Sh2d1a , Tcf7 , Lef1 , Tox , Tox2 , Il6ra , Il4 and Il21 . The model predicts that genes that are directly repressed by Bcl-6 (Bcl6-r) would fall into cluster 1 (Fig. 3e ). Genes associated with alternative cell fates that are directly repressed by Bcl-6 would also group in cluster 1. Important genes associated with non-T FH fates do indeed group in cluster 1 (ref. 23 ) (Fig. 3d ). Bcl6 f/f Prdm1 f/f Cre CD4 T FH -like cells had increased expression of T H 1, T H 2, and T REG signature genes (Fig. 3f ). Thus, multiple analytical approaches identified major gene networks and expression changes consistent with the proposed Bcl-6 repressor-of-repressors model. If the Bcl-6 repressor-of-repressors model is accurate, a substantial proportion of cluster 1 genes should represent genes that are directly repressed by Bcl-6 binding (Bcl6-r), whereas cluster 4 genes should largely represent genes that are not directly bound by Bcl-6 (Bcl6-rr). To test this, we used the gene set of BCL-6-bound genes in human GC-T FH cells that was identified by BCL-6 ChIP–seq 24 and performed gene set enrichment analysis (GSEA) against clusters 1 and 4 (see Supplementary Note ). BCL-6-bound genes were highly enriched in cluster 1 (Fig. 3g ). By contrast, BCL-6-bound genes were not enriched in cluster 4 (Fig. 3g ). Similarly, BCL-6-bound genes were also not enriched in clusters 2 and 3 (Extended Data Fig. 3f ). Our model predicted that clusters 2 and 3 would contain genes that are regulated by Blimp-1 (Blimp1-rr and Blimp1-r), with cluster 3 genes being those that are directly targeted by Blimp-1 (Fig. 3e ). GSEA using Blimp-1-bound gene sets showed that cluster 3 was highly enriched for Blimp-1-bound genes (Fig. 3h and Extended Data Fig. 3g ), consistent with the model proposed. Although the analyses do not exclude the possibility of some activity of Bcl-6 as an activator (see Supplementary Note ), taken together, the data support the proposed model that Bcl-6 primarily acts as a repressor in regulating T FH biology. Bcl-6, Blimp-1, and Id2 relationships regulate Cxcr5 Among genes that were directly repressed by Bcl-6, TFs were of particular interest. Id2 was identified in the clustering analysis as a Bcl6-r TF (Fig. 4a ). We previously demonstrated that Id2 is an important regulator of T FH differentiation 22 . Bcl-6 directly represses Id2 and Id2 inhibits CXCR5 expression by complexing with E proteins 22 , 23 . GSEA showed that E2A-bound genes were enriched in cluster 4 (Fig. 4b ). Therefore, a minimalist T FH differentiation model would be that Bcl-6 initiates and controls T FH biology primarily via repression of two inhibitory TFs: Blimp-1 and Id2 (Fig. 4c ). To test the model, we generated Bcl6 f/f Prdm1 f/f Id2 f/f Cre CD4 SMARTA mice. Bcl6 f/f Prdm1 f/f Id2 f/f Cre CD4 CD4 + T cells exhibited substantial increases in CXCR5 expression in comparison to Bcl6 f/f Prdm1 f/f Cre CD4 CD4 + T cells (Fig. 4d,e ). However, GC-T FH differentiation by Bcl6 f/f Prdm1 f/f Id2 f/f Cre CD4 CD4 + T cells remained extremely defective (Fig. 4f,g and Extended Data Fig. 4a–g ). These data indicate that Bcl-6 probably represses multiple TFs in addition to Blimp-1 and Id2 to control T FH biology. Fig. 4: Bcl-6 drives CXCR5 expression via repression of the Id2–E2A pathway.
a , Gene expression of Id2 from RNA-seq data of LCMV Arm -infected mice or KLH–gp 61 -immunized mice. Each data point was collected from three (LCMV Arm ) or four (KLH–gp 61 ) independent experiments. TPM, transcripts per million. b , GSEA of E2A-bound genes from thymocytes 22 , 43 in comparison to cluster 4 genes between Bcl6 f/f Prdm1 f/f Cre CD4 T FH -like cells and Prdm1 f/f Cre CD4 T FH cells. c , A hypothetical model of Bcl-6 regulation of T FH genes primarily via inhibition of Id2 and Prdm1 /Blimp-1. T FH UP genes, genes upregulated in T FH cells. d , Schematic of the SMARTA cell transfer system used for KLH–gp 61 immunization. SMARTA CD4 + T cells from WT, Bcl6 f/f Prdm1 f/f Cre CD4 or Bcl6 f/f Prdm1 f/f Id2 f/f Cre CD4 mice were transferred to B6 host mice, which were then immunized with KLH–gp 61 in alum + cGAMP and analyzed 8 d later ( e , f and Extended Data Fig. 4a ). e , f , Representative flow cytometry of CXCR5 hi T FH , CXCR5 hi PD-1 hi and CXCR5 hi PSGL1 lo GC-T FH SMARTA cell subsets from dLNs of KLH–gp 61 -immunized mice in d . Numbers in flow cytometry plots indicate percent cells throughout. Two independent experiments were performed; each dot represents one mouse ( n = 4). Data are shown as the mean ± s.d. and were analyzed by an unpaired two-tailed Student's t -test. g , Quantification of CXCR5 hi SLAM lo T FH and CXCR5 hi PSGL1 lo GC-T FH cells, gated on SMARTA cells from the spleens of LCMV Arm -infected mice. Two independent experiments were performed; each dot represents one mouse ( n = 4). Data are shown as the mean ± s.d. and were analyzed by an unpaired two-tailed Student's t -test (Extended Data Fig. 4b–g ). Full size image Identification of Bcl-6 target TF candidates We sought to identify TFs, in addition to Blimp-1 and Id2, that are key repressors downstream of Bcl-6 and that control genes upregulated in T FH cells. The simple circuitry repressor-of-repressors model predicts that such TFs should be present in cluster 1 (Fig. 3e ). We identified 307 TFs in cluster 1 (Fig. 5a ). We developed an analytical approach to identify candidate TFs by integrated analysis of the composite RNA-seq data with both BCL-6 ChIP–seq data from human tonsillar GC-T FH cells 24 and an assay for transposase-accessible chromatin using sequencing (ATAC–seq) of T FH and non-T FH cells from multiple genetically modified mice. Among the cluster 1 TFs, 119 TFs represented BCL-6-bound gene loci in human GC-T FH cells, which confirmed that these 119 TFs were direct targets of BCL-6 (Fig. 5b ). Fig. 5: Integrated analysis of multiple genetic backgrounds and data types. a , Volcano plot of gene expression changes of cluster 1 TFs between Bcl6 f/f Prdm1 f/f Cre CD4 T FH -like cells and Prdm1 f/f Cre CD4 T FH cells (LCMV Arm infection) or between Bcl6 f/f Prdm1 f/f Cre CD4 non-T FH cells and Prdm1 f/f Cre CD4 T FH cells (KLH–gp 61 immunization) (1.4-fold cut off; adjusted P < 0.05). Numbers represent 76 TFs from LCMV Arm infection, 287 TFs from KLH–gp 61 immunization and 307 combined TFs. Adjusted P values for multiple test correction were determined using the Benjamini–Hochberg algorithm. Each data point was collected from independent experiments. Selected genes of interest are labeled. b , Schematic of the integrated analytical approach. The composite RNA-seq data were analyzed together with both BCL-6 ChIP–seq data from human tonsillar GC-T FH cells and ATAC–seq data of T FH and T H 1 cells from WT, Bcl6 f/f Cre CD4 , Prdm1 f/f Cre CD4 and Bcl6 f/f Prdm1 f/f Cre CD4 SMARTA cells.
Among the cluster 1 TFs in a , 119 TFs represented BCL-6-bound gene loci in human GC-T FH cells. ATAC–seq and TF motif scanning were used to filter the top Bcl6-r TF candidates. c , Schematic of the experimental plan to generate ATAC–seq data. T H 1 (CXCR5 lo SLAM hi ) populations from WT and Bcl6 f/f Cre CD4 SMARTA cells, T FH (CXCR5 hi SLAM lo ) populations from WT and Prdm1 f/f Cre CD4 SMARTA cells, and T H 1-like (CXCR5 lo SLAM int ) and T FH -like (CXCR5 + SLAM int ) populations from Bcl6 f/f Prdm1 f/f Cre CD4 SMARTA cells were sorted from the spleens of B6 host mice that were given WT, Bcl6 f/f Cre CD4 , Prdm1 f/f Cre CD4 and Bcl6 f/f Prdm1 f/f Cre CD4 SMARTA cells, then infected with LCMV Arm , and analyzed 7 d later. Representative flow cytometry of T FH , T H 1, T FH -like and T H 1-like subsets from three independent experiments. d , Genome browser tracks depict ATAC–seq chromatin accessibility and TF occupancy. Peak calls are indicated below each track. A Bcl-6 liftover peak from the human to the mouse reference genome is indicated. * and ** indicate DESeq2 raw P values of ≤0.05 and ≤0.01, respectively, in comparisons between WT T FH and T H 1 cells. Gene expression from RNA-seq data of LCMV Arm -infected mice was plotted. Each data point was collected from three independent experiments. DKO represents Bcl6 f/f Prdm1 f/f Cre CD4 . e , ChIP–qPCR analysis of Bcl-6 at Selplg E1 or a negative control region (Neg) among chromatin prepared from CXCR5 hi T FH cells from the spleens of B6 host mice given SMARTA CD4 + T cells transduced with myctagN– Bcl6 -RV, then infected with LCMV Arm and analyzed 7 d after infection. ChIP was performed using an anti-Myc IgG or control IgG. Three independent experiments were performed. Each data point is from an independent experiment ( n = 3) and is presented as a percentage of input. Data are shown as the mean ± s.d. and were analyzed by an unpaired two-tailed Student's t -test. Full size image The most functionally important Bcl6-r TFs would repress Bcl6-rr genes, on the basis of the repressor-of-repressors Bcl-6 model (Fig. 3a ). We reasoned that candidate Bcl6-r TFs could be functionally connected to the Bcl-6-dependent transcriptional regulation of genes upregulated in WT T FH cells by testing for enrichment of candidate TF DNA-binding motifs in differentially accessible chromatin regulatory regions of T FH -associated genes. Comparisons between Prdm1 f/f Cre CD4 T FH and Bcl6 f/f Prdm1 f/f Cre CD4 T FH -like cells were of particular interest, as those chromatin changes would be dependent on Bcl-6 expression. We therefore conducted ATAC–seq of T FH or T H 1 populations of Bcl6 f/f Prdm1 f/f Cre CD4 , Bcl6 f/f Cre CD4 , Prdm1 f/f Cre CD4 and WT SMARTA cells in the context of acute LCMV Arm infection (Fig. 5c ). Several genes of interest were examined as a first test. An E2A binding motif was identified in a differential T FH ATAC–seq peak in a downstream enhancer of Cxcr5 , consistent with Bcl-6 control of CXCR5 expression via inhibition of Id2 (Extended Data Fig. 5a,b ). Selplg , Tbx21 , and Gata3 are known Bcl-6-bound genes 24 . Substantial changes in chromatin accessibility were observed for each of these genes. BCL-6 binding sites of these genes in human GC-T FH cells were conserved in the mouse genome by syntenic analysis (Fig. 5d and Extended Data Fig. 5c,d ). A differential T FH ATAC–seq peak in an intron of Selplg overlapped with a large human GC-T FH BCL-6 ChIP–seq peak 24 that was centered on a BCL-6 DNA-binding motif ( SELPLG E1) (Fig.
5d and Extended Data Fig. 5d ). To evaluate whether SELPLG E1 BCL-6 binding is conserved in mouse T FH cells, we performed Bcl-6 ChIP with Myc-tagged Bcl-6-expressing (myctagN– Bcl6 -RV + ) Bcl6 f/f Cre CD4 T FH cells. Human BCL-6-bound SELPLG E1 in GC-T FH cells was indeed a site bound by Bcl-6 in mouse T FH cells (Fig. 5e ). Together, these results indicated that the ATAC–seq data were of high quality and could be used for broader TF motif scanning. We then applied the differential chromatin accessibility plus TF motif analysis to all genes, across all ATAC–seq data sets, to identify TF motifs that were enriched within regions that underwent differential chromatin remodeling. Chromatin accessibility in the seven cell populations was distinct (Fig. 6a ). We scanned for 566 known TF binding motifs within each differentially accessible ATAC peak in the genome. Motifs recognized by the Runx, Ets, T-box and Klf TF families were most highly enriched in chromatin regions with increased accessibility in Bcl6 f/f Prdm1 f/f Cre CD4 T FH -like cells compared with Prdm1 f/f Cre CD4 T FH cells (each P < 1.0 × 10 -16 ) and WT T FH cells (Fig. 6b ). By contrast, in the same cell-type comparisons, these TF motifs were depleted from chromatin regions with reduced accessibility (Extended Data Fig. 6a ). Changes in abundance of Runx DNA-binding motifs were particularly significant ( P < 9 × 10 -75 ). Runx and T-bet footprints were present (Fig. 6c and Extended Data Fig. 6b ), which indicates that Bcl-6 expression prevents Runx and T-box family TFs from binding these sites, most probably by direct Bcl-6 transcriptional repression of Runx and T-box family genes. Runx DNA-binding sites were observed in enhancer regions of T FH -associated genes, including Pdcd1 and Icos (Fig. 6d and Extended Data Fig. 6c ). All three Runx TFs are expressed in CD4 + T cells (Fig. 6e ), and each Runx TF is known to be competent for binding consensus Runx motifs 32 . Runx2 and Runx3 were grouped in cluster 1 in the gene expression MAP-DP clustering analysis (Fig. 3d ), making them the more likely Runx candidates for repression by Bcl-6. BCL-6 bound robustly to RUNX2 and RUNX3 (ref. 24 ) enhancers in human GC-T FH cells (Fig. 6f ). Bcl-6 bound Runx2 E1, Runx2 E2, Runx2 E3 and Runx3 E1 in mouse T FH cells, which confirms the conservation of Bcl-6 binding to Runx2 and Runx3 loci (Fig. 6g ). Fig. 6: Identification of candidate TFs. a , t -distributed stochastic neighbor embedding (tSNE) analysis of ATAC–seq chromatin accessibility. b , Heatmap plots representing the frequencies of the most enriched TF motifs in regions of increased accessibility (relatively more open in the first group than the second group, DESeq2 raw P < 0.05). Scale, motif frequencies (%). c , TF footprints derived from ATAC–seq reads over representative TF motifs within accessible ATAC–seq regions. d , Genome browser tracks depict ATAC–seq chromatin accessibility and TF occupancy. Peak calls are indicated below each track. ** indicates DESeq2 raw P ≤ 0.01 in comparison between Bcl6 f/f Prdm1 f/f Cre CD4 T FH -like and Prdm1 f/f Cre CD4 T FH cells. e , Gene expression of Runx1, Runx2 and Runx3 from RNA-seq data of LCMV Arm -infected mice or KLH–gp 61 -immunized mice. Each data point was collected from three (LCMV Arm ) or four (KLH–gp 61 ) independent experiments. f , Genome browser tracks show BCL-6 ChIP–seq peaks at RUNX2 , RUNX3 and KLF2 loci. Peak calls are indicated below each track. g , i ,
ChIP–qPCR analysis of Bcl-6 at Runx2 E1, E2 and E3, Runx3 E1, or Klf2 P1 and DE among chromatin prepared from CXCR5 hi T FH cells as shown in Fig. 5e . Three independent experiments were performed. Each data point is from an independent experiment ( n = 3) and is presented as a percentage of input. Data are shown as the mean ± s.d. and were analyzed by an unpaired two-tailed Student's t -test. h , Gene expression of Klf2 from RNA-seq data of LCMV Arm -infected mice or KLH–gp 61 -immunized mice. Each data point was collected from three (LCMV Arm ) or four (KLH–gp 61 ) independent experiments. Full size image Enrichment of Klf DNA-binding motifs in Bcl6 f/f Prdm1 f/f Cre CD4 T FH -like cells was examined further. Several Klf-family TFs were expressed in CD4 + T cells, with Klf2 representing the dominant member (Extended Data Fig. 6d ). Klf2 exhibited a cluster 1-type gene expression pattern in T FH cells in the context of KLH–gp 61 immunization, whereas it exhibited a more complex gene expression pattern in acute LCMV infection (Fig. 6h ). The TF footprints for several Klf motifs were enriched in WT T H 1 cells compared to WT T FH cells (Extended Data Fig. 6b,e ). Bcl-6 bound the Klf2 promoter (P1) and a putative distal enhancer (DE), which confirms that Bcl-6 binding sites at the Klf2 locus are conserved between humans and mice (Fig. 6f,i ). Klf binding sites were observed in open chromatin regions of multiple signature GC-T FH genes (Fig. 6d and Extended Data Fig. 6c ). These results identified Klf2 as a Bcl6-r TF candidate. GATA-3 is constitutively expressed in CD4 + T cells. Gata3 was defined as a cluster 1 gene, with reduction of GATA-3 expression in T FH cells compared to T H 1 cells (Fig. 3d and Extended Data Fig. 5c ). Taken together, integrated bioinformatic analyses revealed Runx2, Runx3, GATA-3 and Klf2 as strong potential Bcl6-r TF candidates that repress T FH genes. Bcl-6 repressor-of-repressors circuits To test the in vivo roles of candidate Bcl6-r TFs, we optimized a system for direct transfection of SMARTA cells with target-gene CRISPR RNA (crRNA)–Cas9 ribonucleoprotein (RNP) complexes to disrupt genes of interest (Extended Data Fig. 7a,b ). RNP + SMARTA cells were adoptively transferred into host mice that were subsequently infected with LCMV Arm . Bcl6 and Prdm1 were first tested as positive control genes (Fig. 7a,b and Extended Data Fig. 7c ). crRNA-mediated deletion of TFs was efficient, providing a suitable experimental system for exploring the genetics of T FH biology in vivo. We then disrupted Gata3 , Runx2 , Runx3 and Klf2 as candidate Bcl6-r T FH repressors, with crRNA RNPs in Bcl6 f/f Prdm1 f/f Cre CD4 SMARTA cells, and examined their differentiation in the context of acute LCMV infection. cr Cd8 + WT SMARTA cells were used as controls. cr Gata3 + Bcl6 f/f Prdm1 f/f Cre CD4 SMARTA cells exhibited increased CXCR5 and reduced PSGL1 expression in comparison to cr Cd8 + Bcl6 f/f Prdm1 f/f Cre CD4 SMARTA cells (Fig. 7c and Extended Data Fig. 7 ). These results suggest that GATA-3 is a Bcl6-r TF that is a repressor of CXCR5 and a positive regulator of PSGL1. Fig. 7: Identification of Runx2, Runx3 and GATA-3 as repressors of T FH genes, acting downstream of Bcl-6. a , Schematic of CRISPR–Cas9-mediated gene knockdown in the SMARTA cell system used for testing Bcl6 and Prdm1 in LCMV Arm infection. SMARTA CD4 + T cells transfected with cr Cd8 , cr Bcl6 or cr Prdm1 were transferred to B6 host mice, which were then infected with LCMV Arm , and analyzed 6 d later.
b , Representative flow cytometry of GC-T FH SMARTA cell subsets from the spleens of LCMV Arm -infected mice in a . Two independent experiments were performed; each dot represents one mouse ( n = 4). Data are shown as the mean ± s.d. and were analyzed by an unpaired two-tailed Student's t -test. c , Quantification of results from cr Cd8 + and cr Gata3 + SMARTA cells from the spleens of LCMV Arm -infected mice. Three independent experiments were performed; each dot represents one mouse ( n = 4). Data are shown as the mean ± s.d. and were analyzed by an unpaired two-tailed Student's t -test (Extended Data Fig. 7h,i,l,m ). d , Schematic of CRISPR–Cas9-mediated gene knockdown in the SMARTA cell system used for testing Runx2 and Runx3 in LCMV Arm infection. WT or Bcl6 f/f Prdm1 f/f Cre CD4 SMARTA CD4 + T cells transfected with cr Cd8 , cr Runx2 , or cr Runx3 were transferred to B6 host mice, which were then infected with LCMV Arm , and analyzed 6–7 d later (Fig. 7e–h and Extended Data Fig. 7j,k ). e , g , Representative flow cytometry of T FH SMARTA cell subsets from the spleens of LCMV Arm -infected mice in d . Three independent experiments were performed; each dot represents one mouse ( n = 5 ( e ); n = 4 ( g )). Data are shown as the mean ± s.d. and were analyzed by an unpaired two-tailed Student's t -test. f , h , Quantification of GC-T FH core signature markers, gated on SMARTA cells in d . CD44 lo naive CD4 + T cells were used as a negative control. i , j , Representative flow cytometry of T H 1, T FH and GC-T FH RV + SMARTA cell subsets from the spleens of B6 host mice that were given SMARTA CD4 + T cells transduced with pMIG (GFP-RV + ), pMIG-Runx3myc ( Runx3 -RV + (Low)) or pMIG-Runx2myc ( Runx2 -RV + (Med)), then infected with LCMV Arm and analyzed 7 d after infection. Two independent experiments were performed; each dot represents one mouse ( n = 5). Data are shown as the mean ± s.d. and were analyzed by an unpaired two-tailed Student's t -test. Full size image The roles of Runx family TFs in T FH biology are largely unknown. We first examined the impact of Runx2 disruption on T FH gene regulation (Fig. 7d–f and Extended Data Fig. 7 ). We observed significantly greater frequencies of CXCR5 + T FH -like cells among cr Runx2 + Bcl6 f/f Prdm1 f/f Cre CD4 CD4 + T cells compared to cr Cd8 + control cells. Expression of ICOS and CD200 was upregulated in the absence of Runx2 (Fig. 7e,f ). Runx3 disruption also increased the development of CXCR5 + T FH -like cells on the Bcl6 f/f Prdm1 f/f Cre CD4 background, as well as the expression of ICOS and CD200 (Fig. 7g,h ), which parallels the T FH gene regulation observed with Runx2 disruption. Next, we investigated whether Runx2 and Runx3 act predominantly upstream or downstream of Bcl-6 in T FH differentiation. The proposed Bcl-6 repressor-of-repressors model predicted that Runx2 and Runx3 would act downstream of Bcl-6. To test this, we transduced WT SMARTA cells with RVs expressing green fluorescent protein (GFP) alone (GFP-RV + ), Runx3 ( Runx3 -RV + ) or Runx2 ( Runx2 -RV + ), transferred the cells into B6 mice and analyzed T FH differentiation 7 d after acute LCMV Arm infection (Extended Data Fig. 8a ). Enforced Runx2 expression resulted in reduced T FH and GC-T FH differentiation relative to control cells. Expression of ICOS and CD200 was also reduced in Runx2 -RV + T FH cells, which is consistent with the Runx2 gene-disruption data. Constitutive Runx3 expression caused more severe disruption of T FH differentiation than did Runx2 (Fig.
7i and Extended Data Fig. 8 ). Most notably, Bcl-6 expression was not affected by enforced Runx2 or Runx3 expression, which indicates that Bcl-6 is indeed upstream of Runx2 and Runx3 in T FH cells (Fig. 7j ). Disruption of Runx2 expression resulted in a gain of T FH gene expression similar to that of Runx3 gene disruption (Fig. 7e–h ), which indicates that both Runx2 and Runx3 are relevant targets of Bcl-6 in vivo for T FH development. Klf2 has been connected to T FH differentiation in both mice and humans, downstream of ICOS signaling 13 , 14 . Klf2 represses CXCR5 expression and Klf2 binds the Prdm1 locus, but different models have been proposed for how Klf2 influences T FH differentiation 13 , 14 . Therefore, we investigated the effect of Klf2 on T FH gene expression using the crRNA RNP SMARTA system to test the model that Klf2 may act downstream of Bcl-6 as a Bcl6-r TF in a repressor-of-repressors circuit (Fig. 8 and Extended Data Fig. 9a–d ). In the Bcl6 f/f Prdm1 f/f Cre CD4 background, expression of PD-1, ICOS, CD200 and IL-6Rα was significantly upregulated in cr Klf2 + CD4 + T cells versus cr Cd8 + cells (Fig. 8a–c ). More surprisingly, expression of the T FH cytokine IL-21 was significantly increased (Fig. 8d and Extended Data Fig. 9b ). Given that result, we examined the expression of IL-4, the other major cytokine expressed by T FH cells. IL-4 expression was substantially increased in antigen-stimulated cr Klf2 + versus cr Cd8 + CD4 + T cells after KLH–gp 61 immunization (Fig. 8e and Extended Data Fig. 9d ). This occurred even though GATA-3 expression was not changed in the absence of Klf2 13 , 14 . Disruption of Klf2 did not affect the expression of Maf, a TF known to have a role in Il21 and Il4 expression 33 , 34 (Fig. 8f and Extended Data Fig. 9c ). Fig. 8: Identification of Klf2 as a repressor acting downstream of Bcl-6 regulating major T FH genes. a , Schematic of CRISPR–Cas9-mediated gene knockdown in the SMARTA cell system used for testing Klf2 in LCMV Arm infection. WT or Bcl6 f/f Prdm1 f/f Cre CD4 SMARTA CD4 + T cells transfected with cr Cd8 or cr Klf2 were transferred to B6 host mice, which were then infected with LCMV Arm , and analyzed 6–7 d later. Results are shown in b , c and Extended Data Fig. 8a,b . b , Representative flow cytometry of T FH and GC-T FH cells, gated on SMARTA cells from the spleens of LCMV Arm -infected mice in a . Three independent experiments were performed; each dot represents one mouse ( n = 4). Data are shown as the mean ± s.d. and were analyzed by an unpaired two-tailed Student's t -test. c , Quantification of GC-T FH core signature markers, gated on SMARTA cells from the spleens of LCMV Arm -infected mice in a . CD44 lo naive CD4 + T cells were used as a negative control. d , e , WT or Bcl6 f/f Prdm1 f/f Cre CD4 SMARTA CD4 + T cells transfected with cr Cd8 or cr Klf2 were transferred to B6 host mice, which were then immunized with KLH–gp 61 and alum + cGAMP, and analyzed 8 d later. Representative flow cytometry and quantification of gp 66 -restimulated IL-21 + and IL-4 + SMARTA cells from the dLNs of KLH–gp 61 -immunized mice. Two independent experiments were performed; each dot represents one mouse ( n = 5). Data are shown as the mean ± s.d. and were analyzed by an unpaired two-tailed Student's t -test. See Extended Data Fig. 9d for experimental design. f , Quantification of expression of Tcf-1, GATA-3 and Maf, gated on SMARTA cells from the dLNs of KLH–gp 61 -immunized mice in Extended Data Fig. 8d .
g , A circuitry model of the regulation of T FH genes upregulated by Bcl-6 through repression of repressor TFs. Full size image These observations indicated that Klf2 is a negative regulator of the expression of PD-1, ICOS, CD200, IL-6Rα, IL-21 and IL-4. The regulation of IL-6Rα and ICOS by Klf2 was reminiscent of the role of Tcf-1 (the product of Tcf7 ), but opposite in direction 19 , 20 , 21 . Multiple Klf motifs were observed in open chromatin regions of the Tcf7 gene locus in T FH cells, which suggests that Tcf7 may be a Klf2-targeted TF (Extended Data Fig. 9e ). Therefore, we examined Tcf-1 expression in cr Klf2 + Bcl6 f/f Prdm1 f/f Cre CD4 SMARTA cells. Expression of Tcf-1 protein was substantially increased in the absence of Klf2 (Fig. 8f and Extended Data Fig. 9c ). In mouse T FH cells, Tcf-1 binding was observed at the promoter and enhancers of multiple GC-T FH signature genes including Pdcd1 , Il6ra (Extended Data Fig. 9f ), Icos and Il21 , all of which are upregulated in cr Klf2 + Bcl6 f/f Prdm1 f/f Cre CD4 SMARTA cells. These data support a Bcl-6 − Klf2 − Tcf7 inhibitory pathway (Bcl-6 inhibits Klf2 , and Klf2 inhibits Tcf7 ; therefore, Bcl-6 upregulates Tcf7 ) for upregulation of T FH genes (Fig. 8g ). Klf and Tcf-1 binding motifs were observed concomitantly in open chromatin of multiple Bcl6-rr genes including Pdcd1 , Cd200 and Il21 (Fig. 6d and Extended Data Figs. 6c and 9g ), which suggests that Klf2 represses Bcl6-rr genes through a combinatorial mechanism via direct binding and by repression of Tcf7 . Taken together, these results enable us to conclude that Bcl-6 is a nexus for control of positive T FH gene expression by repression of multiple repressors (Fig. 8g ). Discussion T FH differentiation is a multistage, multifactorial process 1 . The way in which Bcl-6, the lineage-defining TF of T FH cells, accomplishes control of T FH differentiation and function has remained unclear, at least in part because of the complexity of the biology, the antagonistic relationship between Bcl-6 and Blimp-1, and the intrinsic challenges of studying repressors. This study advances our mechanistic understanding of how Bcl-6 controls T FH differentiation and function. We resolved several obstacles through a series of logical approaches, integrating analyses of multiple data sources from multiple genotypes and genetically modified cells. Our observations excluded the two simplest models of Bcl-6 regulation of T FH differentiation, via repression of Blimp-1 alone or of Blimp-1 and Id2. Bioinformatic analyses identified numerous candidate Bcl6-r TFs, and experiments demonstrated that Runx2, Runx3, GATA-3 and Klf2 are Bcl-6 target TFs that regulate T FH differentiation and function. We conclude that Bcl-6 is a nexus for control of positive T FH gene expression by repression of multiple repressors. Bcl-6 is an obligate repressor in B cells 30 , 35 , 36 . We find that Bcl-6 regulates its own expression by a negative autoregulatory loop in T FH cells. On the basis of these and other data, we proposed that Bcl-6 drives upregulation of canonical genes of T FH differentiation and function via repressor-of-repressors mechanisms. In addition, Bcl-6 represses alternative, non-T FH cell fates. Bcl-6 clearly inhibits non-T FH differentiation fates via inhibition of Prdm1 (refs. 2 , 3 , 9 ).
BCL-6 can also block T H 1/T H 2/T H 17 differentiation by repression of the lineage-defining TFs TBX21, GATA3 and RORA 4 , 5 , 6 , 24 , 37 , as well as by repressing genes that are central to those cell types (for example, Il17a , Il17f , Ifng , Il2ra and Ifngr1 (refs. 4 , 5 , 24 )). By contrast, testing the repressor-of-repressors model of Bcl-6 function required identification of Bcl6-r TFs that repress positive features of T FH biology, downstream of Bcl-6. Our analytical approach to identify candidate Bcl6-r repressor TFs integrated RNA-seq, ChIP–seq and ATAC–seq. Id2 is one Bcl6-r TF that is clearly important for regulation of Cxcr5 . We also identified Runx2, Runx3, GATA-3 and Klf2 as Bcl6-r TFs that repress important T FH genes including Pdcd1 , Icos , Cd200 , Il6ra , Il21 and Il4 . These observations demonstrate that there are multiple repressors downstream of Bcl-6 that control T FH genes. Although the results do not exclude additional TFs, or additional mechanisms of action including potential direct activator activity of Bcl-6 (see Supplementary Note ), the overall structure of the repressor-of-repressors Bcl-6 gene regulatory network can explain why T FH differentiation is fully dependent on Bcl-6. In the absence of Bcl-6, no other TF appears to substitute. One might consider whether Bcl-6 acting as an obligate repressor in CD4 + T cells is a rare case for a lineage-defining TF. Bcl-6 was originally identified as the master regulator of B GC differentiation, and it has been well characterized as an obligate repressor in B cells 30 , 35 , 36 , though efforts have predominantly focused on genes that are downregulated in B GC cells. Foxp3 is a second lineage-defining TF that mainly acts as a repressor 38 . RORγT appears to act substantially through repressor activity in T H 17 cells, in concert with Maf 39 . The primary actions of T-bet in CD4 + T cells may also be predominantly repressive 40 . As such, it has been proposed that a general primary function of lineage-defining TFs may be to limit, by direct repression or other mechanisms, the number of genes that are induced broadly by T cell antigen receptor (TCR) and cytokine signaling 41 . Bcl-6 is a clear example of this direct repression model, but the work here adds to that by demonstrating how genes that are positively associated with a cell type can be upregulated downstream of a lineage-defining TF via repressor-of-repressors mechanisms. Our results thus establish an overall structure of T FH differentiation and gene regulation in a parsimonious model in which Bcl-6 serves as the apex of a repressor-of-repressors network. This may also provide future insights into the biology of dysregulated T FH or T FH -like cells that are present in a range of biomedically relevant diseases such as atherosclerosis and autoantibody-mediated autoimmune diseases 1 . Methods Mice C57BL/6J (B6), Cre CD4 and CD45.1 + mice were obtained from The Jackson Laboratory. Mouse strains described below were bred and housed in specific-pathogen-free conditions in accordance with the Institutional Animal Care and Use Guidelines of the La Jolla Institute. SMARTA mice (TCR transgenic for I-A b -restricted LCMV glycoprotein 66–77 peptide (gp 66 )) 44 , Bcl6 f/f 45 , Prdm1 f/f 46 and CD45.1 + congenic mice were on a full B6 background. Bcl6 f/f or Prdm1 f/f mice were crossed to the SMARTA, Cre CD4 and CD45.1 + strains to generate Bcl6 f/f Cre CD4 CD45.1 + SMARTA and Prdm1 f/f Cre CD4 CD45.1 + SMARTA mice.
Bcl6 f/f Prdm1 f/f Cre CD4 CD45.1 + SMARTA mice were generated by crossing the Bcl6 f/f Cre CD4 CD45.1 + SMARTA and Prdm1 f/f Cre CD4 CD45.1 + SMARTA strains. Blimp-1-YFP 47 or ΔBPS1 (described below) mice were crossed to the CD45.1 + SMARTA strain to generate Blimp-1-YFP CD45.1 + SMARTA or ΔBPS1 CD45.1 + SMARTA mice. Both male and female mice (6–15 weeks of age) were used throughout the study, with sex- and age-matched T cell donors and recipients. All animal experiments were performed under protocols approved by the Institutional Animal Use and Care Committees of the La Jolla Institute for Immunology. Adoptive cell transfer, infection and immunization Adoptive transfer of congenically marked cells (CD45.1 + ) into recipient mice (CD45.2 + ) was performed by intravenous injection via the retroorbital sinus. For LCMV Arm infection, 10 × 10 3 naive, transduced RV + or crRNA + CD4 + T cells were transferred into each mouse. Recipient mice were injected intraperitoneally with 2 × 10 5 plaque-forming units of LCMV Arm in plain Dulbecco's modified Eagle's medium. For protein immunization, 50 × 10 3 naive or crRNA + CD4 + T cells were transferred into each mouse. A total of 10 μg of KLH–gp 61 was prepared in alum (Alhydrogel) only, alum + LPS (1 μg), alum + Poly (I:C) (10 μg) or alum + cyclic [G(3′,5′)pA(3′,5′)p] (3′3′-cGAMP; 10 μg, InvivoGen) adjuvants in a total volume of 20 μl and injected into each footpad of recipient mice. Alum + 3′3′-cGAMP was chosen as the adjuvant combination because alum alone is a poor inducer of Blimp-1 (Extended Data Fig. 1a ). Transferred cells were allowed to rest in host mice for 1 d (naive cells) or 3–4 d (RV + or crRNA + cells) before infection or immunization. Plasmids and retroviral transduction The pMIG (contains an IRES-GFP), pMIG-Bcl6, pMIG-Runx2myc and pMIG-Runx3myc retroviral (RV) plasmids, and the pQCXIP (contains Thy1.1 and pGK-GFP) SIN RV plasmid, were described previously 28 , 48 , 49 . The pMIG-myctagN-Bcl6 plasmid was generated by insertion of sequences encoding the Myc tag into those encoding the N-terminus of Bcl-6, with short linker sequences (5′-GATCTGAATTCGGAATCTACC-3′). The pQCXIP plasmid was modified further by deletion of the IRES-Puro R cassette and the CCAAT and TATA boxes of the 3′ LTR, to reduce background reporter expression (pQdT; Extended Data Fig. 2b ). Constructs containing either the WT sequence or an 8-nt deletion that removes the Bcl-6 binding motif (+18; TCTAGGAA) in the proximal Bcl6 promoter region (−709 to +272 from the transcription start site) were cloned into the pQdT plasmid upstream of the Thy1.1 reporter to generate WT Thy1.1-RV or ΔBPS1 Thy1.1-RV, respectively (Fig. 2a ). Virions were produced by transfection of the Plat-E cell line. Culture supernatants were collected 24 h and 48 h after transfection, filtered through a 0.45 μm syringe filter and stored at 4 °C until transduction. CD4 + T cells were isolated from whole splenocytes by negative selection (Stemcell Technologies) and resuspended in R10 (RPMI1640 + 10% fetal bovine serum, supplemented with 2 mM GlutaMAX, 100 U ml −1 penicillin/streptomycin and non-essential amino acids (Gibco)) with 2 ng ml −1 recombinant human IL-7 (PeproTech) and 50 μM β-mercaptoethanol (2-ME). Then, 0.5 × 10 6 cells were stimulated in 24-well plates pre-coated with 8 μg ml −1 anti-CD3 (17A2; BioXcell) and anti-CD28 (37.51; BioXcell).
At 40 h and 48 h after stimulation, cells were transduced by adding RV supernatants supplemented with 50 μM 2-ME and 8 μg ml −1 polybrene (Millipore), followed by centrifugation for 90 min at 524 g at 37 °C. Following each transduction, the RV-containing medium was replaced with R10 + 50 μM 2-ME + 10 ng ml −1 human IL-2. After 72 h of in vitro stimulation, CD4 + T cells were transferred into six-well plates in R10 + 50 μM 2-ME + 10 ng ml −1 human IL-2, followed by incubation for 2 d. One day before transfer, the culture medium was replaced with R10 + 50 μM 2-ME + 2 ng ml −1 human IL-7. Transduced cells were sorted on the basis of GFP expression (FACSAria; BD Biosciences). Enforced Runx3 (Runx3-RV + (High)) or Runx2 (Runx2-RV + (High)) expression disrupted T FH differentiation relative to control cells (GFP-RV + ) (Extended Data Fig. 8b,d ). Because overexpression of Runx3 at a level much higher than the physiological level had a negative effect on CD4 + T cell accumulation (Extended Data Fig. 8b,c ), we performed similar experiments with lower constitutive Runx3 expression by sorting the bottom 10% of GFP + cells (Runx3-RV + (Low)) instead of total GFP + SMARTA cells (Runx3-RV + (High)) for adoptive transfer (Extended Data Fig. 8e,f ). With lower enforced expression of Runx3, SMARTA cell proliferation was enhanced to levels similar to those of Runx2-RV + (Med) cells (Extended Data Fig. 8g ). Nevertheless, Runx3-RV + (Low) SMARTA cells still showed more severely impaired T FH and GC-T FH development than Runx2-RV + (Med) SMARTA cells (Fig. 7i and Extended Data Fig. 8h ). Generation of ΔBPS1 mice using CRISPR–Cas9 gene editing The in vitro molecular and cellular biology work was performed by Ingenious Targeting Laboratory. Guide RNAs (gRNAs) were selected using optimized CRISPR design by CHOP-CHOP ( ) 50 . The gRNAs (gRNA1, 5′-caccgTCTAGGAAAGGCCGGACACC-3′ and 3′-cAGATCCTTTCCGGCCTGTGGcaaa-5′; gRNA2, 5′-caccgTGGTGATGCAAGAAGTTTCT-3′ and 3′-cACCACTACGTTCTTCAAAGAcaaa-5′) were cloned into the px459-Cas9-puromycin plasmid. Lipofectamine transfection of each gRNA with a control gRNA was performed on Neuro2A cells (a duplicate set per gRNA). The control gRNA was designed upstream of the gRNA. After 24 h, cells were selected for puromycin resistance for 3–5 days and then lysed. PCR analysis was performed to verify the cleavage efficiency of each gRNA, which was 10–15%. An injection mix of 30 ng μl −1 Cas9 protein, 0.6 μM gRNA and 20 ng μl −1 oligonucleotide (template DNA for the repair to delete the 8-nt Bcl-6 recognition motif) was injected into 150–250 fertilized eggs from B6 mice by the University of California San Diego Stem Cell Core. These eggs were implanted into B6 surrogate mothers, and pups were genotyped by DNA sequencing. DNA sequences were analyzed and diagrammed using MacVector (Extended Data Fig. 2e ). The ΔBPS1 mice were healthy, and immune cell development appeared to be grossly normal. CRISPR–Cas9-mediated gene deletion of murine CD4 + T cells High-ranked guide sequences with the best on-target and off-target scores were selected by CHOP-CHOP. crRNA and ATTO-550-conjugated trans -activating CRISPR RNA (tracrRNA) were purchased from Integrated DNA Technologies. Purified Streptococcus pyogenes Cas9-NLS protein was purchased from the QB3 Macrolab of the University of California, Berkeley. crRNA and tracrRNA were duplexed by heating at 95 °C for 5 min.
RNP complexes were generated by mixing crRNA–tracrRNA duplexes (240 pmol) and Cas9-NLS protein (80 pmol) for 10 min at 24–26 °C. Isolated CD4 + T cells were stimulated in 24-well plates pre-coated with 8 μg ml −1 anti-CD3 (17A2) and anti-CD28 (37.51) for 2 d. The cells were then transfected with an RNP mixture by electroporation using a MaxCyte ATX with the Expanded T cell-4 protocol (MaxCyte). The transfected cells were cultured in R10 + 50 μM 2-ME + 10 ng ml −1 human IL-2 without TCR stimulation for 1 d, followed by culture for an additional day in R10 + 50 μM 2-ME + 2 ng ml −1 human IL-7. Transfection efficiency and cell viability were measured using an LSRII or LSR Fortessa. RNP transfection efficiencies were consistently greater than 90%, with high viability (Extended Data Fig. 7a,b ). crRNA sequences used in the study were as follows: cr Cd8 , 5′-GCAGGTTCAGCGACAGAAAG-3′; cr Bcl6 , 5′-TCAAGATGTCCCGACTCCGG-3′; cr Prdm1 , 5′-TTGGAACTAATGCCGTACGG-3′; cr Runx2 , 5′-ACCATGGTGCGGTTGTCGTG-3′; cr Runx3 , 5′-GCTAAGCGCGCAGGCAACCG-3′; cr Gata3 , 5′-TGTACGAATGGCCGAGGCCC-3′ and cr Klf2 , 5′-CTGGCCGCGAAATGAACCCG-3′. WT or Bcl6 f/f Prdm1 f/f Cre CD4 SMARTA CD4 + T cells transfected with cr Cd8 , cr Gata3 , cr Runx2 , cr Runx3 or cr Klf2 were transferred to B6 host mice, which were then infected with LCMV Arm or immunized with KLH–gp 61 and analyzed 6–7 d later. SMARTA cells were sorted by flow cytometry from the spleens or lymph nodes (LNs) of host mice. Gene knockdown efficiencies were measured by mRNA qPCR (for Runx2 , using the primer set: fwd, 5′-CACGACAACCGCACCAT-3′ and rev, 5′-CACGGAGCACAGGAAGTT-3′), flow cytometry (for Bcl-6 and GATA-3) or immunoblot analysis (for Runx3 and Klf2). sh Runx3 -RV + (gene knockdown) SMARTA cells and pMIG-Klf2-RV + (enforced expression) SMARTA cells were used as positive controls for immunoblot analysis. Flow cytometry and cell sorting Single-cell suspensions of spleens or draining popliteal LNs were prepared by standard gentle mechanical disruption. Surface staining for flow cytometry was done with monoclonal antibodies to CD4 (RM4-5; BV510), CD8 (53-6.7; BV605), SLAM (TC15-12F12.2; PerCP-Cy5.5 or APC), ICOS (C398.4 A; BV785), CD200 (OX-90; PE), CD138 (281-2; APC), CD62L (MEL-14; PE.Cy7) (from BioLegend), B220 (RA3-6B2; AF700), CD45.1 (A20; APC-eFluor780), PD-1 (J43; PE), CD44 (IM7; APC) (from eBioscience), PSGL1 (2PH1; BV650), Fas (Jo2; BV510), IgD (11-26; BV510) (from BD Biosciences), PNA (FL-1071; FITC) (Vector Laboratories) and Fixable Viability Dye eFluor780 (eBioscience). Staining was performed for 30 min at 4 °C in PBS supplemented with 0.5% bovine serum albumin (BSA), unless specified otherwise. CXCR5 staining was done using biotinylated anti-CXCR5 (SPRCL5; eBioscience) for 30 min, followed by BV421- or PE–Cy7-labeled streptavidin (BioLegend) at 4 °C in PBS supplemented with 0.5% BSA. Intracellular staining for TFs was performed with monoclonal antibodies to Bcl-6 (K112-91; AF647, BD Biosciences), Tcf-1 (C63D9; AF647, Cell Signaling), T-bet (4B10; PE.Cy7), GATA-3 (TWAJ; PE.Cy7) and Maf (Sym0F1; PerCP.eF710) (from eBioscience) using the Foxp3/Transcription Factor Staining Buffer Set (eBioscience). For measurement of cytokines, cells from the spleen or LNs were cultured in vitro for 5 h with 10 μg ml −1 gp 66 and brefeldin A.
Intracellular staining for cytokines was performed with a monoclonal antibody to IL-4 (11B11; PE, eBioscience) and recombinant mouse IL-21 receptor Fc (R&D Systems), followed by anti-human IgG (DyLight650, Invitrogen), using the Fixation/Permeabilization buffer kit (BD Biosciences). Stained cells were analyzed using an LSRII, LSRFortessa or FACS Celesta (BD Biosciences) and FlowJo software v.10.6 (FlowJo). For cell sorting, CD45.1 + SMARTA cells were pre-enriched with PE-conjugated anti-mouse CD45.1 Ab (eBioscience) and anti-PE microbeads (Miltenyi Biotec). All sorting was done on a FACSAria or FACSAria Fusion (BD Biosciences). Cellular data are presented as frequencies of cell populations; absolute numbers of cells were also determined. Conclusions based on frequencies and on absolute numbers were equivalent unless stated otherwise. Spleen and LN size were equivalent between samples unless stated otherwise. ELISA Nunc MaxiSorp plates (Thermo Fisher Scientific) were coated overnight at 4 °C with 1 μg ml −1 KLH–gp 61 (GenScript) in PBS. Plates were blocked with PBS + 0.05% Tween-20 + 0.5% BSA (PBST-B) for 90 min at 25 °C. After washing, mouse serum was added in a serial dilution in PBST-B and incubated for 90 min. After washing, horseradish peroxidase (HRP)-conjugated goat anti-mouse IgG (Thermo Fisher Scientific) was added at 1:5,000 in PBST-B for 90 min at 25 °C. Colorimetric detection was performed using a TMB substrate kit (Thermo Fisher Scientific). Color development was stopped after approximately 5–10 min with 2 N H 2 SO 4 , and absorbance was measured at 450 nm. Immunofluorescence staining of lymph nodes Popliteal LNs from mice immunized with KLH–gp 61 were snap-frozen in OCT medium (Sakura Finetek), and 5–8-μm sections were prepared using a cryostat. LN sections were fixed with acetone–methanol and stained with antibodies to TCRβ (BV421), IgD (AlexaFluor488), GL7 (PE) and CD45.1 (AlexaFluor647) to reveal the T cell zone, B cell zone, GCs and SMARTA cells, respectively. Sections were fixed and mounted with ProLong Gold antifade reagent (Invitrogen), and imaged using a Zeiss AxioScan Z1 Slide Scanner. SMARTA cell localization was analyzed with ImageJ software (v2.0.0-rc-69/1.52p, NIH). The T-B border was defined as a ±15-μm region around the boundary line between the T cell zone and the B cell zone (Extended Data Fig. 1h ). RNA sequencing RNA-seq was performed by a method described previously 51 . Spleens or LNs were isolated and pooled from 4–8 mice per group. Between 25,000 and 100,000 CXCR5 + SLAM lo/int T FH or CXCR5 lo SLAM hi T H 1 SMARTA cells (CD45.1 + CD4 + CD8 – B220 – singlets), or naive SMARTA cells (CD4 + CD8 – B220 – CD44 lo CD62L hi ), were sorted using a FACSAria into Trizol LS (Invitrogen). RNA extraction was performed using miRNeasy micro kits (Qiagen) to meet downstream RNA-Smart-Seq2 input requirements 51 . cDNA was purified using AMPureXP beads (1:1 ratio; Beckman Coulter). One nanogram of cDNA was used to prepare a standard Nextera XT sequencing library (Nextera XT DNA sample preparation and index kits; Illumina). Quality control steps were included to determine total RNA quality and quantity, the optimal number of PCR pre-amplification cycles and fragment size selection. All samples passed quality control. Libraries were sequenced using a HiSeq2500 to generate 50-bp single-end reads (TruSeq Rapid Kit; Illumina), generating a median of >13 million (LCMV Arm infection) or >7 million (KLH–gp 61 immunization) mapped reads per sample.
Three (LCMV Arm infection) or four (KLH–gp 61 immunization) biological replicates were generated. RNA sequencing analysis The single-end reads that passed Illumina filters were subsequently filtered for reads aligning to tRNA, rRNA, adapter sequences and spike-in controls. The reads were aligned to the mm10 reference genome using TopHat (v1.4.1, library-type fr-secondstrand) and the RefSeq gene annotation downloaded from the University of California Santa Cruz (UCSC) Genome Bioinformatics site. DUST scores were calculated with PRINSEQ Lite (v0.20.3) and low-complexity reads (DUST > 4) were removed from the BAM files. The alignment results were parsed via the SAMtools package to generate SAM files. Read counts for each genomic feature were obtained with the HTSeq-count program (v0.7.1; -m union -s yes -t exon -i gene_id). After removing absent features (zero counts in all samples), the raw counts were imported into the R/Bioconductor package DESeq2 (v3.1). P values for differential expression were calculated using the Wald test and then adjusted for multiple test correction using the Benjamini–Hochberg algorithm 52 . We considered genes to be differentially expressed between two groups of samples when the DESeq2 analysis resulted in an adjusted P value of <0.05 and the difference in gene expression was at least 1.4-fold. Genes with raw counts below a minimum cutoff of six (the average raw count of Cd8a across all samples; Cd8a is considered not expressed, or only very weakly expressed, in peripheral splenic CD4 + T cells) were excluded from differential expression analysis. Transcripts per million (TPM) values were calculated from the raw count data by dividing the counts for each gene by its exon length in kilobases to obtain reads per kilobase (RPK), dividing the sum of all RPK values in each sample by 1 million to obtain a per-million scaling factor, and then dividing each RPK value by the scaling factor of its sample. Principal component analysis was performed using the 'prcomp' function in R. We first assessed expression of a broad curated set 19 , 22 , 25 of T FH -associated genes across all samples from RNA-seq gene expression profiling. Sequential clustering analyses were performed to analyze the patterns of gene expression changes between the six different T FH and T H 1 populations. k -means analyses were first performed using k = 10 to identify overall patterns of gene expression change, with ExpressCluster v1.3 software from the CBDM Laboratory at Harvard Medical School ( ). All differentially expressed genes (DEGs; fold change > 1.4; adjusted P < 0.05; pre-filtered with the minimum count cutoff) were subjected to the k -means clustering. Four major clusters ( n = ~300 or more genes) of gene expression change were apparent. To obtain gene lists associated with those four major cluster patterns of gene expression, MAP-DP clustering was then performed with predefined cluster centers according to the results of the k -means analysis, using R-ClustMAPDP 53 . We chose MAP-DP instead of k -means clustering because MAP-DP analysis efficiently separates outliers from the data and is statistically rigorous 53 . We used the following criteria for the four major clusters: (1) WT T FH and Prdm1 f/f Cre CD4 T FH cells have lower expression than that of the other populations; (2) WT T H 1 and Bcl6 f/f Cre CD4 T H 1 cells have higher expression than that of the other populations; (3) WT T H 1 and Bcl6 f/f Cre CD4 T H 1 cells have lower expression than that of the other populations and (4) WT T FH and Prdm1 f/f Cre CD4 T FH cells have higher expression than that of the other populations (Fig. 3d ).
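For clarity, the TPM normalization described above can be expressed as a short R sketch (the function name and the toy count matrix are illustrative only and are not part of the published pipeline):

counts_to_tpm <- function(counts, exon_len_kb) {
  # Reads per kilobase (RPK): each gene's counts divided by its exon length in kb
  rpk <- counts / exon_len_kb
  # Per-million scaling factor: sum of RPK values in each sample divided by 1e6
  scaling <- colSums(rpk) / 1e6
  # TPM: each RPK value divided by the scaling factor of its sample
  sweep(rpk, 2, scaling, "/")
}

# Toy example: 3 genes x 2 samples
counts <- matrix(c(10, 200, 30, 12, 180, 25), nrow = 3,
                 dimnames = list(c("geneA", "geneB", "geneC"), c("TFH", "TH1")))
tpm <- counts_to_tpm(counts, exon_len_kb = c(2.1, 3.4, 1.8))
colSums(tpm)  # each sample sums to 1e6 by construction

Unlike RPKM, this ordering of operations guarantees that the values sum to the same total (10 6 ) in every sample, which makes per-gene TPM values directly comparable across samples.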
The same DEG set (fold change > 1.4; adjusted P < 0.05; pre-filtered with the minimum count cutoff) was subjected to the MAP-DP analysis, as was done for the k -means clustering. Hierarchical clustering analysis was performed with the genes upregulated or downregulated in T FH cells relative to their expression in T H 1 cells (Fig. 3c ; 1.4-fold cut off, adjusted P < 0.05) using the hclust function from the stats package in R, and the heatmap was generated using the heatmap.2 function from the gplots package in R. GSEA was run with GSEA v3.0 (Broad Institute) on gene lists pre-ranked by the log-transformed DESeq2 P value multiplied by the sign of the log(fold change) 54 . Bcl-6-bound, Blimp-1-bound, and T FH , T H 1, T H 2, T H 17 and T REG signature gene lists were collected from previous studies 19 , 22 , 24 , 25 , 42 , 55 and from the Ingenuity Pathway Analysis database (Qiagen). The E2A-target-gene list was generated by Shaw et al. 22 using ChIP–seq results from thymocytes obtained by the E2A Bio-ChIP method as described previously 43 . In multiple GSEA (Fig. 3f ), the nominal P values were corrected with the Benjamini–Hochberg algorithm. Assay for transposase-accessible chromatin using sequencing ATAC–seq was performed with a method modified from one described previously 56 . Spleens were isolated and pooled from 3–5 mice per group. Then, 5 × 10 4 CXCR5 + SLAM lo/int T FH or CXCR5 lo SLAM hi T H 1 SMARTA cells (CD45.1 + CD4 + CD8 – B220 – singlets), or naive SMARTA cells (CD4 + CD8 – B220 – CD44 lo CD62L hi ), were sorted using a FACSAria. Cells were pelleted and resuspended in 25 μl lysis buffer, and pelleted again. The nuclear pellet was resuspended in 25 μl transposition reaction mixture containing Tn5 transposase from a Nextera DNA Sample Prep Kit (Illumina) and incubated at 37 °C for 30 min. The transposase-associated DNA was then purified using a MinElute Purification kit (Qiagen). The library was amplified for 12 cycles using a KAPA Real-Time Library Amplification kit (KAPA Biosystems) with Nextera indexing primers. The total amplified DNA was purified using AmPureXP beads. The quantity and size of the amplified DNA were examined by TapeStation to confirm that independent samples exhibited similar fragment distributions. The libraries were sequenced using a HiSeq 4000 with paired-end sequencing (Illumina). Replicates were generated from three independent experiments. Assay for transposase-accessible chromatin using sequencing analysis Fastq reads were aligned to the mm10 reference genome with Bowtie2 (-p 15 -m 1 --best --strata -X 2000 -S --fr --chunkmbs 1024). PCR duplicates were removed by SAMtools. Peaks were called with MACS2 (macs2 callpeak -t inputfile -f BED -g mm -n outputfile --nomodel -q 0.01 --keep-dup all --call-summits -B). Bigwig files for ATAC–seq signal visualization on the UCSC genome browser were generated by converting the MACS2 read pileup output into bigwig format with bedGraphToBigWig from the UCSC tools. Peaks from all samples were merged to create a common reference peak set using Homer (mergePeaks -d 200). Peaks localized within 100 kb upstream of the transcription start site and 100 kb downstream of the transcription end site were annotated to a gene 48 . HTSeq-count was used to calculate the Tn5 insertion site number in peaks 57 . Scikit-learn was used for t -distributed stochastic neighbor embedding analysis of the Tn5 count matrix to visualize sample similarity 58 .
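As a concrete illustration of the GSEA pre-ranking step described above, the metric can be computed from a DESeq2 results table as in the following R sketch (−log10 is assumed as the log transform of the P value, which the text does not specify, and the gene names and values are invented):

# Toy stand-in for a DESeq2 results table; in practice this would be
# as.data.frame(DESeq2::results(dds)) for a given pairwise comparison
res <- data.frame(
  log2FoldChange = c(2.3, -1.8, 0.4),
  pvalue         = c(1e-8, 2e-5, 0.3),
  row.names      = c("Cxcr5", "Selplg", "Actb")
)
# Rank metric: sign of the fold change times -log10(P), so strongly induced
# genes rank at the top of the list and strongly repressed genes at the bottom
metric <- sign(res$log2FoldChange) * -log10(res$pvalue)
ranked <- data.frame(gene = rownames(res), metric = metric)
ranked <- ranked[order(ranked$metric, decreasing = TRUE), ]
# Write a .rnk file for GSEA pre-ranked mode
write.table(ranked, "tfh_vs_th1.rnk", sep = "\t", quote = FALSE,
            row.names = FALSE, col.names = FALSE)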
For differential analysis, DESeq2 was used to compare Tn5 insertion site numbers in peaks for each condition after filtering out low-read peaks (count <5 in all samples in the comparison). Relatively open or closed regions were defined by using DESeq2 to compare the Tn5 insertion site number within ATAC–seq peaks among samples (DESeq2 raw P < 0.05). Motif enrichment was performed by Homer (findMotifsGenome.pl -size given -mis 3 -mask), using a combination of the Homer-curated known motif set and motifs from the MEME and JASPAR databases. Frequencies of the enriched TF motifs were determined in regions of increased or decreased accessibility (DESeq2 raw P < 0.05). Significance for the differences of TF motif enrichment across comparisons was determined by χ 2 tests. Families of transcription factors were characterized with information from the TFClass database 59 . TF footprint analyses were performed with RGT-hint 60 using the JASPAR database. JASPAR motifs are described by a position weight matrix (PWM) that provides information on the frequency of usage of each nucleotide in the motifs derived from experimental data, such as ChIP–seq 61 . Although two related TFs (for example, Runx2 and Runx3) can bind identical consensus sequences, use of the PWMs in JASPAR allows for non-identical assignments if the data support differential PWMs. PageRank analysis was performed with Taiji software as described previously 56 ; this is a separate bioinformatic algorithm, based on differential gene expression and TF-binding motifs, that yields results compatible with the differential ATAC–seq analysis (Extended Data Fig. 6f ), although by its nature the PageRank algorithm is more effective at identifying activator TFs than repressors. Analysis of chromatin immunoprecipitation with sequencing Raw sequencing reads for Blimp-1 ( GSE75724 , GSE79339 ) 55 , 42 , Tcf-1 ( GSE103387 ) 62 , and Bcl-6 and NCOR ( GSE29282 ) 35 were downloaded from the SRA database. Reads were aligned to the UCSC mm9 (Blimp-1), mm10 (Tcf-1) or hg19 genome assemblies (Bcl-6 and NCOR) with Bowtie (v1.1.2) using options (-S --fr -p 3 -m 1 -k 1 --best --strata) and peaks were called using MACS with default settings. The UCSC tracks and peak calls of Bcl-6 ChIP–seq data of human tonsillar GC-T FH cells ( GSE59933 ) were taken from the original paper 24 . Peaks were annotated according to the RefSeq database. Peaks localized ±2 kb of the transcription start site were defined as promoter peaks, peaks localized ±2 kb of the transcription end site were defined as 3′ end peaks, and peaks >2 kb away from genes were defined as intergenic 24 . Syntenic analysis of human Bcl-6-binding sites in the mouse genome was performed using the LiftOver tool of the UCSC browser with default settings. The human Bcl-6-binding peaks were first converted from hg18 to mm9, and then converted from mm9 to mm10 (7,673 of the original 8,523 human peaks were successfully converted to the mouse genome). Bcl-6 binding in the mouse genome was evaluated by chromatin immunoprecipitation coupled with quantitative PCR (ChIP–qPCR) for some genes of interest. Chromatin immunoprecipitation with quantitative PCR To validate whether Bcl-6 binding in human GC-T FH cells is conserved in mouse T FH cells, Bcl-6 ChIP–qPCR of mouse T FH cells was performed using tagged- Bcl6 RV transduction.
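The χ 2 comparison of motif frequencies described above can be illustrated with a short R sketch; all counts below are hypothetical placeholders, not values from the study.

# motif occurrences in regions of increased ('opened') versus
# decreased ('closed') accessibility, for one TF motif
opened_with_motif <- 320; opened_total <- 1200
closed_with_motif <- 110; closed_total <- 1000
tab <- matrix(c(opened_with_motif, opened_total - opened_with_motif,
                closed_with_motif, closed_total - closed_with_motif),
              nrow = 2, byrow = TRUE,
              dimnames = list(c("opened", "closed"), c("motif", "no_motif")))
chisq.test(tab)  # tests whether motif frequency differs between the two region sets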
We confirmed that Bcl-6 protein from pMIG-Bcl6 ( Bcl6 -RV) was expressed at levels similar to that of endogenous Bcl-6 in T FH cells (Extended Data Fig. 5e ). We then validated that an amino-terminal Myc-tagged Bcl-6 fusion protein (myctagN– Bcl6 -RV) was functionally comparable with non-tagged Bcl6 -RV ( Bcl6 -RV), on the basis of tagged- Bcl6 -RV rescue of T FH differentiation and function of Bcl6 f/f Cre CD4 CD4 + T cells in LCMV-infected mice (Extended Data Fig. 5e,f ). To perform myctagN–Bcl6 ChIP–qPCR, Bcl6 f/f Cre CD4 SMARTA cells transduced with myctagN–Bcl6-RV were transferred to B6 mice, which were then infected with LCMV Arm . Seven days later, spleens were isolated and pooled from 30 mice, and pre-enriched CD45.1 + GFP + SMARTA cells were further sorted to obtain CXCR5 + SLAM lo T FH cells. Then, 9 × 10 6 T FH cells were fixed in 1% formaldehyde for 2.5 min and then quenched with 125 mM glycine for 5 min. Cells were lysed using a truChIP Chromatin Shearing Kit and sonicated to generate <500-bp fragments using an E220 Focused-ultrasonicator (Covaris). Fragmented DNA was used as an input control. Magnetic Dynabeads (45 μl) were washed with IP buffer (50 mM NaCl, 5 mM EDTA, 50 mM Tris pH 8.0 and 0.1% NP-40) and then mixed with 7.5 μg anti-Myc tag or goat IgG (Abcam) antibodies in 300 μl IP buffer and rotated for 6 h at 4 °C. The sonicated lysates were diluted in IP buffer at a 1:4 ratio and precleared with Dynabeads for 2 h at 4 °C. The precleared lysates were added to antibody-conjugated Dynabeads and incubated overnight at 4 °C. The beads were washed with IP buffer once, wash buffer I (150 mM NaCl, 0.5% sodium deoxycholate, 1% NP-40, 0.1% SDS, 1 mM EDTA and 50 mM Tris pH 8.0) twice, wash buffer II (500 mM NaCl, 0.5% sodium deoxycholate, 1% NP-40, 0.1% SDS, 1 mM EDTA and 50 mM Tris pH 8.0) twice, wash buffer III (250 mM LiCl, 0.5% sodium deoxycholate, 1% NP-40, 0.1% SDS, 1 mM EDTA and 50 mM Tris pH 8.0) twice, and TE buffer twice for 5 min each. The beads were resuspended in 200 μl elution buffer (100 mM NaHCO 3 and 1% SDS) and were reverse-cross-linked at 65 °C for 30 min and then treated with RNase A for 30 min at 37 °C and proteinase K at 65 °C overnight. DNA was purified using AMPure XP beads and eluted in nuclease-free water. The eluted DNA was further diluted in water and subjected to qPCR using the following primers: Selplg E1 F, 5′-CGCACAAACACACACAACTC-3′; Selplg E1 R, 5′-TCAGACCCTCCAAACTACCT-3′; Runx2 E1 F, 5′-AGATCGCTCACTCGACTCAT-3′; Runx2 E1 R, 5′-CTTCTTCTACTTCCGCCACAC-3′; Runx2 E2 F, 5′-TCCTTGTCTCTTGCTCTCTTTC-3′; Runx2 E2 R, 5′-ACAGGTAGTGGCATAGAGGA-3′; Runx2 E3 F, 5′-GCTGTGTGTTCTTGCTCTTCT-3′; Runx2 E3 R, 5′-CTAATGAGATGCTGTCGCTGAA-3′; Runx3 E1 F, 5′-GAGAGCCTTTGAGGTCTCTTTG-3′; Runx3 E1 R, 5′-CTCAACAGTGCACACCTTCT-3′; Klf2 P1 F, 5′-AGCAAGGTACCAGGCTACA-3′; Klf2 P1 R, 5′-TCCCACAGCCTGAAGTCTAA-3′; Klf2 DE F, 5′-CTATCTCAGGCAACCCAATCA-3′; Klf2 DE R, 5′-ACCGCTGAAGTTTCTAGGTAAA-3′; Neg F, 5′-GCCGCTCTATCATCCGAAAT-3′; Neg R, 5′-CCAGCTGCAAGATTAACACAAC-3′. A negative control region was arbitrarily selected approximately 40 kb upstream of the Selplg E1 site (Fig. 5e ). Immunoblot analysis Equivalent cell numbers were lysed in 2× Laemmli sample buffer and boiled for 10 min. The proteins were resolved on NuPAGE 4–12% Bis-Tris gels and transferred onto PVDF membranes in NuPAGE transfer buffer (Invitrogen).
Membranes were incubated with primary antibodies: anti-Myc tag (Cell Signaling), anti-Klf2 (EMD Millipore), anti-Runx3 (HRP-conjugated, Santa Cruz Biotechnology) and anti-GAPDH (Santa Cruz Biotechnology). After incubation with HRP-conjugated secondary antibody, target proteins were detected using an ECL Prime detection kit (GE Healthcare) and the Odyssey Fc imaging system (LI-COR). Band densities were quantified with Image Studio Lite software (v5.2.5; LI-COR). Statistical analysis All RNA-seq and ATAC–seq experiments were performed independently in 3–4 replicates. All graphs represent the mean and s.d. unless otherwise noted (Prism 8.0, GraphPad). Comparisons between two groups were made with an unpaired or paired Student's t -test with a 95% confidence interval. Statistical details of each experiment can be found in the figure legends and specific methods. Reporting Summary Further information on research design is available in the Nature Research Reporting Summary linked to this article. Data availability RNA-seq and ATAC–seq data were deposited in the Gene Expression Omnibus (GEO) under the GSE140187 SuperSeries. Scripts for analysis are available on GitHub. All other data that support the findings of this study are available from the corresponding author upon request.
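As a hedged illustration of how ChIP–qPCR enrichment such as that described above is commonly summarized, the R sketch below computes percent of input by the ΔCt method and a fold enrichment over the IgG control; all Ct values and the 1% input fraction are hypothetical placeholders, not measurements from this study.

percent_input <- function(ct_ip, ct_input, input_fraction = 0.01) {
  ct_input_adj <- ct_input - log2(1 / input_fraction)  # adjust the input Ct to 100% input
  100 * 2^(ct_input_adj - ct_ip)
}

# toy Ct values for one primer pair (anti-Myc IP, IgG control, 1% input)
pi_myc <- percent_input(ct_ip = 27.1, ct_input = 25.0)
pi_igg <- percent_input(ct_ip = 31.4, ct_input = 25.0)
pi_myc / pi_igg  # fold enrichment of the Myc-tag IP over the IgG control

In practice, enrichment at candidate sites (for example, Selplg E1) would also be compared against the negative control region described above.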
Scientists at the La Jolla Institute for Immunology (LJI) have discovered a potential new way to better fight a range of infectious diseases, cancers and even autoimmune diseases. The new study, published recently in Nature Immunology, shows how a protein works as a "master regulator" in the immune system. The research is an important step toward designing vaccines and therapies that can "switch on" the immune cells that help produce disease-fighting antibodies. Scientists may also be able to "switch off" these cells to counteract immune cell dysfunction in autoimmune diseases. "This cell type (Tfh cells) sometimes does bad things in autoimmune diseases—particularly autoantibody diseases like lupus, rheumatoid arthritis and Sjogren's syndrome," says LJI investigator Shane Crotty, Ph.D., who led the new research. "So, hopefully, our fundamental knowledge about the circuitry of this cell can help us understand how to turn it off in autoimmune diseases." Crotty's laboratory studies key immune system players, such as different kinds of helper T cells. In 2009, his laboratory published work showing that a protein called Bcl6 controls how helper T cells differentiate to do different jobs in the body. They found that Bcl6 prompts helper T cells to become T follicular helper (Tfh) cells, which work with B cells to produce powerful antibodies. This was an important breakthrough, but Crotty's lab still wanted to know: What exactly was Bcl6 doing to Tfh cells? Answering this question could open the door to controlling immune responses. "There is great interest in the use of Tfh-cell-associated biology for enhancement of vaccines," says Crotty. "There is also great interest in targeting Tfh cell-associated biology for therapeutic interventions in human autoimmune diseases, allergies, atherosclerosis, organ transplants and cancer." For the new study, Crotty led a complex effort to test competing theories for how Bcl6 controls Tfh cells. The researchers used mouse models and a range of genetic sequencing tools to determine that Tfh cells actually need Bcl6 to even exist. Looking closer, the researchers found that Bcl6 acts mainly as a repressor in helper T cells, meaning that it blocks the expression of other proteins in these cells through a series of genetic switches, which they mapped. These new maps show that Bcl6 controls a "double negative circuit." Crotty explains, "The protein Bcl6 switches this cell type on, but it is a protein that is only known to switch things off. So, we did a lot of experiments to figure out that it controls cells by a series of double negatives. It turns off genes that turn off other genes." Bcl6 blocks the expression of two proteins that normally stop Tfh cell differentiation. When Bcl6 does its job, helper T cells are free to become Tfh cells when the body needs them. The new research gives scientists a guide to how they could potentially switch Bcl6 on or off to control immune responses, says Crotty. "Increasing emphasis will surely now be placed on how to apply that knowledge to Tfh-related therapeutics," he adds. The body also uses the kinds of genetic circuits controlled by Bcl6 to stay healthy and not produce antibodies that mistakenly attack the body's own cells. "The system needs to self-correct and stop the attack. If an immune response is needed to fight off a pathogen, the body needs to reset itself and return to a steady state," Crotty says. But deficiencies in this Bcl6-Tfh system can lead to autoimmunity or immunodeficiency.
The new research suggests that tweaking immune responses through Bcl6 could also help control autoimmune diseases such as multiple sclerosis and type 1 diabetes. Via Bcl6, Tfh cells could theoretically also be tuned down to treat allergies, reduce rejection of transplanted organs and help prevent atherosclerosis. "Heart disease is now understood to have a large immunological component, as in too much inflammation," Crotty says. Better cancer treatments could also include tweaking Tfh cells to decrease unwanted immune responses to therapy, he adds. Crotty notes that the way Bcl6 operates to control positive Tfh gene expression may represent a model by which to study other puzzling biological switches. "We had to do a lot of genetics to connect the dots, but this double negative circuit may actually be the way many immune system cells get controlled," he says.
10.1038/s41590-020-0706-5
Medicine
Researchers unravel omicron's secrets to better understand COVID-19
Peter J. Halfmann et al., SARS-CoV-2 Omicron virus causes attenuated disease in mice and hamsters, Nature (2022). DOI: 10.1038/s41586-022-04441-6 Laura A. VanBlargan et al., An infectious SARS-CoV-2 B.1.1.529 Omicron virus escapes neutralization by therapeutic monoclonal antibodies, Nature Medicine (2022). DOI: 10.1038/s41591-021-01678-y Journal information: Nature, Nature Medicine
http://dx.doi.org/10.1038/s41586-022-04441-6
https://medicalxpress.com/news/2022-02-unravel-omicron-secrets-covid-.html
Abstract The recent emergence of B.1.1.529, the Omicron variant 1 , 2 , has raised concerns of escape from protection by vaccines and therapeutic antibodies. A key test for potential countermeasures against B.1.1.529 is their activity in preclinical rodent models of respiratory tract disease. Here, using the collaborative network of the SARS-CoV-2 Assessment of Viral Evolution (SAVE) programme of the National Institute of Allergy and Infectious Diseases (NIAID), we evaluated the ability of several B.1.1.529 isolates to cause infection and disease in immunocompetent and human ACE2 (hACE2)-expressing mice and hamsters. Despite modelling data indicating that B.1.1.529 spike can bind more avidly to mouse ACE2 (refs. 3 , 4 ), we observed less infection by B.1.1.529 in 129, C57BL/6, BALB/c and K18-hACE2 transgenic mice than by previous SARS-CoV-2 variants, with limited weight loss and lower viral burden in the upper and lower respiratory tracts. In wild-type and hACE2 transgenic hamsters, lung infection, clinical disease and pathology with B.1.1.529 were also milder than with historical isolates or other SARS-CoV-2 variants of concern. Overall, experiments from the SAVE/NIAID network with several B.1.1.529 isolates demonstrate attenuated lung disease in rodents, which parallels preliminary human clinical data. Main Severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) has caused a pandemic resulting in millions of deaths worldwide. The extensive morbidity and mortality made the development of vaccines, antibody-based countermeasures and antiviral agents a global health priority. As part of this process, several models of SARS-CoV-2 infection and lung pathogenesis were developed in animals for rapid testing 5 . Remarkably, several highly effective vaccines and therapeutics were deployed with billions of doses given worldwide. Although these measures markedly reduced hospitalizations and deaths, their efficacy has been jeopardized by the emergence of SARS-CoV-2 variants with mutations in the spike gene. The SARS-CoV-2 spike protein engages angiotensin-converting enzyme 2 (ACE2) on the surface of human cells to facilitate entry and infection of cells 6 . Upon cell attachment, spike proteins are cleaved by host proteases into S1 and S2 fragments. The S1 protein includes the amino-terminal (NTD) and receptor-binding (RBD) domains. The RBD is the target of many potently neutralizing monoclonal 7 , 8 , 9 , 10 , 11 and serum polyclonal 12 antibodies. Although SARS-CoV-2 spike proteins from strains early in the pandemic bound to ACE2 from several animal species (for example, hamster, ferret and nonhuman primates), they did not bind mouse ACE2, which explained why laboratory strains of mice could not be infected by SARS-CoV-2 (refs. 6 , 13 ); indeed, mice could become susceptible through expression of hACE2 via a transgene 14 , 15 , 16 or viral vector 17 , 18 , or under regulation of the mouse ACE2 promoter 19 , 20 , 21 . Later in the pandemic, several strains acquired a mouse-adapting spike substitution (N501Y), which allowed engagement of mouse ACE2 and productive infection of mice without hACE2 expression 22 , 23 , 24 . In late November 2021, the Omicron (B.1.1.529) variant emerged. This variant has the largest number (>30) of substitutions, deletions or insertions in the spike protein described so far, raising concerns of escape from protection by vaccines and therapeutic monoclonal antibodies. 
B.1.1.529 isolates have many changes in the RBD (G339D, R346K, S371L, S373P, S375F, K417N, N440K, G446S, S477N, T478K, E484A, Q493R, G496S, Q498R, N501Y and Y505H). The N501Y substitution along with changes at sites (K417, E484, Q493, Q498 and N501) associated with mouse adaptation 25 , 26 , 27 , 28 , 29 , 30 indicated that B.1.1.529 should infect mice 3 . One study speculated that the progenitor of B.1.1.529 jumped from humans to mice, and then back into humans 4 . In support of this, B.1.1.529 RBD binds to mouse ACE2 (ref. 31 ). Last, hamsters have been a valuable animal model for assessing countermeasures against SARS-CoV-2 and variants. Hamsters are susceptible to SARS-CoV-2 infection and show similar pathological changes to those seen in lung tissues from COVID-19 patients 5 , 32 , 33 . Here, using data from several laboratories of the SAVE/NIAID consortium (Supplementary Table 1 ), we report on the infectivity of several B.1.1.529 isolates in mice and hamsters, two key rodent models of SARS-CoV-2 infection and pathogenesis. B.1.1.529 infection in mice Because of the presence of several amino acid alterations that are considered mouse adapting, we predicted that B.1.1.529 should infect immunocompetent mice and cause lung disease as seen with other recombinant strains (WA1/2020 N501Y) or variants (for example, B.1.351) containing N501Y substitutions. We first tested B.1.1.529 in 129 mice. Two of our laboratories independently inoculated 6–8-week-old or 10–20-week-old 129 mice with 10 4 , 10 5 or 10 6 infectious units (plaque-forming units (PFU) or focus-forming units (FFU)) of three different B.1.1.529 strains (Supplementary Tables 1 and 2 ). As 129 mice sustain 10 to 15% loss of body weight 3 to 4 days post infection (dpi) yet recover and gain weight beginning at 5 dpi (refs. 22 , 34 ) with SARS-CoV-2 strains encoding N501Y substitutions 34 , we assessed weight change with B.1.1.529 at 3 and 4 dpi. However, after inoculation with B.1.1.529, 129 mice failed to lose weight (Fig. 1a ). Similarly, aged (10- to 14-month-old) C57BL/6 mice also did not lose weight after B.1.1.529 infection, whereas those infected with B.1.351 did (Fig. 1a ). Fig. 1: B.1.1.529 is less pathogenic in mice. a , Left: weight change in mock-infected mice ( n = 4) or mice inoculated with B.1.1.529 + A701V ( n = 5), B.1.1.529 ( n = 3) or B.1.351 ( n = 3). Middle: weight change in mice inoculated with B.1.1.529 or B.1.351 ( n = 5) (** P = 0.0075, *** P = 0.0006, **** P < 0.0001). Right: weight change in mice inoculated with B.1.1.529 ( n = 4), B.1.1.7 ( n = 10) or B.1.351 ( n = 18). Comparison between B.1.351 and B.1.1.529: * P = 0.0151, *** P = 0.0003 (3 dpi) and 0.0006 (4 dpi). Mean ± s.e.m. b , Viral RNA level in mice inoculated with B.1.1.529 or B.1.351 ( n = 5) (** P = 0.0079). c , Infectious virus titre in mice inoculated with B.1.1.529 + A701V, B.1.1.529 or B.1.351 ( n = 3). d , Infectious virus titre in mice inoculated with B.1.1.529 or B.1.351 ( n = 5) (** P = 0.0079). e , Pulmonary function analysis as measured by whole-body plethysmography. Mean ± s.e.m. Comparison between B.1.617.2 and B.1.351: ** P = 0.0095 ( n = 5 each). f , Left, weight change in mice inoculated with WA1/2020 D614G (10 3 FFU; n = 6), B.1.1.529 (10 3 FFU; n = 3), B.1.1.529 (10 4 PFU; n = 6) or B.1.1.529 (10 5 FFU; n = 3). Right, weight change in mice inoculated with 10 4 PFU of B.1.1.529 +A701V ( n = 6) or B.1.351 ( n = 6), or mock-infected, age-matched mice ( n = 4). Mean ± s.e.m. 
g , Infectious virus titre in lungs of mice inoculated with WA1/2020 D614G ( n = 8) or B.1.1.529 ( n = 7) (**** P < 0.0001). h , Infectious virus titre in mice inoculated with B.1.1.529 + A701V or B.1.351 ( n = 3). i , Heat map of concentration of cytokines and chemokines in lungs of infected mice. Results are from one ( a – f , h , i ) or two ( g ) experiments. The dotted line is the limit of detection. Statistical analysis ( a , e : two-way analysis of variance (ANOVA) with multiple comparisons test; b , d , g : two-tailed Mann–Whitney test) was performed on datasets with four or more data points. See Supplementary Table 1 for more information. CCL4, chemokine (C-C motif) ligand 4; IL-18, interleukin-18; CXCL2, chemokine (C-X-C motif) ligand 2; TNF, tumour necrosis factor; GM-CSF, granulocyte–macrophage CSF; IFNγ, interferon-γ. We next compared viral burden in B.1.1.529- and B.1.351-infected 129 mice. At 3 dpi, 129 mice infected with B.1.351 sustained high levels of infection in the nasal wash, nasal turbinates and lungs (Fig. 1b ). The levels of viral RNA in the nasal turbinates and lungs of B.1.1.529-infected mice were 10- to 100-fold lower than those in B.1.351-infected animals (Fig. 1b ). Similar results were seen in a separate cohort of 129 mice at 4 dpi, with 1,000- to 100,000-fold less infectious virus recovered from nasal turbinates and lungs of animals infected with B.1.1.529 compared to B.1.351 (Fig. 1c ). Members of the group also tested B.1.1.529 in BALB/c mice. At 2 dpi, infectious virus levels in the nasal turbinates and lungs were significantly lower (≈1,000-fold, P < 0.001) in BALB/c mice infected with B.1.1.529 compared to B.1.351 (Fig. 1d ). We used whole-body plethysmography 35 to measure pulmonary function in infected mice. At 2 dpi, whereas B.1.351 caused an increase ( P < 0.001) in the lung enhanced pause (Penh), a marker of bronchoconstriction, B.1.1.529 did not (Fig. 1e ). The ratio of peak expiratory flow (Rpef) was also decreased at 2 dpi in BALB/c mice infected with B.1.351 but not B.1.1.529 ( P < 0.001, Fig. 1e ). Two of our groups tested B.1.1.529 infection in K18-hACE2 transgenic mice, which express hACE2 under an epithelial cytokeratin promoter 14 and are more susceptible to SARS-CoV-2 infection 16 . At intranasal doses ranging from 10 3 to 10 5 infectious units of B.1.1.529, weight loss was not observed over the first 5 to 6 days of infection in younger or older K18-hACE2 mice (Fig. 1f ). These data contrast with historical results with WA1/2020 D614G or variant (for example, B.1.351) SARS-CoV-2 strains 16 , 24 , 34 , 36 , which uniformly induce weight loss starting at 4 dpi. The groups separately observed reduced levels of infectious B.1.1.529 compared to WA1/2020 D614G or B.1.351 in the lower respiratory tract at 3 dpi (Fig. 1g, h ). Finally, we assessed inflammatory responses in the lungs of K18-hACE2 mice at 3 dpi. Mice inoculated with B.1.1.529 had lower levels of several pro-inflammatory cytokines and chemokines compared to those inoculated with B.1.351, with many values similar to those of uninfected controls (Fig. 1i and Supplementary Table 3 ). Thus, on the basis of several parameters (weight change, viral burden, respiratory function measurements and cytokine responses), B.1.1.529 seems attenuated in the respiratory tract of several strains of mice.
B.1.1.529 infection in hamsters Four members of our group tested three different B.1.1.529 strains for their ability to infect and cause disease (Supplementary Table 1 ). Whereas intranasal infection with historical or other variant SARS-CoV-2 strains generally resulted in ≈10 to 15% reduction in body weight over the first week, we observed no weight loss in hamsters inoculated with B.1.1.529 (Fig. 2a–d ), although animals did not gain body weight as rapidly as uninfected hamsters. Viral RNA analysis at 4 dpi showed lower levels of B.1.1.529 infection in the lungs (12-fold, P < 0.001) compared to WA1/2020 D614G (Fig. 2e ). A comparison of infectious viral burden in tissues at 3 dpi between B.1.617.2 and B.1.1.529 strains showed virtually no difference in nasal turbinates but substantially less infection of B.1.1.529 in the lungs of most animals (Fig. 2f ). A comparison of viral RNA levels between WA1/2020 and B.1.1.529 in nasal washes at 4 dpi did not show substantial differences in titres (Fig. 2g ). Thus, in hamsters infected with B.1.1.529, infection of the upper, but not the lower, respiratory tract seems relatively intact. Fig. 2: B.1.1.529 is less pathogenic in wild-type and hACE2-transgenic Syrian hamsters. a , Weight change in uninfected age-matched hamsters ( n = 3) or in hamsters inoculated with B.1.1.529 or B.1.617.2 ( n = 4). Mean ± s.e.m. b , Weight change in uninfected age-matched hamsters ( n = 9) or in hamsters inoculated with B.1.1.529 ( n = 10) or WA1/2020 D614G ( n = 6). Mean ± s.e.m. (red, * P = 0.0293; red, ** P = 0.0046 and 0.0014; black, ** P = 0.0021; black, *** P = 0.0001). c , Weight change in hamsters inoculated with 10 3 , 10 4 , 10 5 or 10 6 PFU of B.1.1.529 or 10 3 PFU of B.1.617.2 ( n = 4). Mean ± s.e.m. Comparison between B.1.617.2 and B.1.1.529 (10 3 PFU): * P = 0.0476, ** P = 0.0041, 0.0041, 0.0047 and 0.0019, respectively. d , Weight change in hamsters inoculated with B.1.1.529 ( n = 5) or WA1/2020 ( n = 9). Mean ± s.e.m. (**** P < 0.0001). e , Viral RNA level in hamsters inoculated with WA1/2020 D614G or B.1.1.529 ( n = 15) (* P = 0.015, *** P < 0.0003). f , Infectious virus titre in hamsters inoculated with B.1.617.2 or B.1.1.529 ( n = 4) (* P = 0.0286; NS, not significant). g , Nasal wash viral RNA level in hamsters inoculated with WA1/2020 ( n = 8) or B.1.1.529 ( n = 3). TCID 50 , median tissue culture infectious dose. h , Pulmonary function analysis by whole-body plethysmography. Mean ± s.e.m. (Penh and Rpef, comparison between B.1.617.2 and B.1.1.529: * P = 0.0263 (3 dpi), * P = 0.0186 (5 dpi), *** P = 0.0005 (7 dpi), **** P < 0.0001) ( n = 4). i , Micro-CT images of the lungs of mock-infected ( n = 3) or B.1.617.2- ( n = 4) and B.1.1.529-infected ( n = 4) hamsters at 7 dpi. Multifocal nodules (black arrows), ground-glass opacity (white arrowheads) and pneumomediastinum (white asterisk) are indicated. j , CT score for uninfected hamsters ( n = 3) or those inoculated with B.1.617.2 or B.1.1.529 ( n = 4) (**** P < 0.0001). k , Weight change in hACE2 hamsters inoculated with HP-095 D614G or B.1.1.529 ( n = 4). Error bars indicate s.e.m. l , Survival of hACE2 hamsters after inoculation with HP-095 D614G or B.1.1.529 ( n = 4) (* P = 0.029). m , Infectious virus titre of hACE2 hamsters inoculated with HP-095 D614G or B.1.1.529; n = 3 (3 dpi), n = 4 (5 dpi) (* P = 0.0286). The results are from one ( a , c , d , f – m ) or two to three independent ( b , e ) experiments. Dotted lines represent the limit of detection.
Statistical analysis ( b – d , h : two-way ANOVA with multiple comparisons test; e , j : two-tailed t -test, f , m : two-tailed Mann–Whitney test, l : log-rank test) was performed on datasets with four or more data points. See Supplementary Table 1 for more information. We used whole-body plethysmography to measure pulmonary function in infected Syrian hamsters. Starting at 3 dpi and continuing until 7 dpi, infection with B.1.617.2 caused an increase ( P < 0.05) in the Penh, whereas B.1.1.529 infection did not (Fig. 2h , left). The Rpef was decreased at 5 and 7 dpi in animals infected with B.1.617.2 but not B.1.1.529 ( P < 0.001; Fig. 2h , middle). Finally, hamsters infected with B.1.617.2, but not B.1.1.529, demonstrated a decrease in respiratory rate (frequency) compared to uninfected animals (Fig. 2h , right). On the basis of several functional parameters, lung infection and disease after B.1.1.529 infection were attenuated compared with those after infection with other variant strains. We performed microcomputed tomography (micro-CT) to assess lung abnormalities in hamsters at 7 dpi. Micro-CT analysis revealed lung abnormalities in all B.1.617.2-infected hamsters at 7 dpi that were consistent with commonly reported imaging features of COVID-19 pneumonia 37 . In comparison, analysis of B.1.1.529-infected hamsters at 7 dpi revealed patchy, ill-defined ground-glass opacity consistent with minimal to mild pneumonia. Syrian hamsters infected with B.1.617.2 had a much higher CT disease score 35 than those infected with B.1.1.529 (Fig. 2i, j ). Members of our group also compared lung pathology in Syrian hamsters after infection with B.1.617.2 or B.1.1.529. The lungs obtained from B.1.617.2-infected hamsters showed congestion and/or haemorrhage, which were absent in B.1.1.529-infected animals (Fig. 3a ). Immune cell infiltration and inflammation were present in the peribronchial regions of the lungs at 3 dpi with B.1.617.2. At 6 dpi, extensive infiltration of neutrophils and lymphocytes in the alveolar space was accompanied by pulmonary edema and haemorrhage (Fig. 3b , inset), and regenerative changes in the bronchial epithelia became prominent (Fig. 3b ). By contrast, in B.1.1.529-infected hamsters, small foci of inflammation in the alveoli and peribronchial regions were observed only at 6 dpi (Fig. 3b ). Histopathology scores for viral pneumonia at 6 dpi were worse after B.1.617.2 infection than after B.1.1.529 infection (Fig. 3c ). After B.1.617.2 infection, viral RNA was detected readily in the alveoli and bronchial epithelia at 3 and 6 dpi (Fig. 3d ). After B.1.1.529 infection, fewer bronchial epithelial cells and alveoli were positive for viral RNA at either time point (Fig. 3d ). Thus, B.1.1.529 replicates less efficiently in the lungs of Syrian hamsters, which results in less severe pneumonia compared to that resulting from the B.1.617.2 variant. Fig. 3: Pathological findings in the lungs of SARS-CoV-2-infected Syrian hamsters. Hamsters were inoculated with 10 3 PFU of B.1.617.2 or B.1.1.529 and euthanized at 3 and 6 dpi ( n = 4). a , Macroscopic images of the lungs obtained at 6 dpi. Yellow arrows indicate haemorrhage. b , Lung sections from animals infected with B.1.617.2 or B.1.1.529. Scale bars, 200 µm. Focal alveolar haemorrhage in B.1.617.2-infected animals at 6 dpi is outlined and shown at higher magnification in the inset (scale bar, 100 µm). Black arrow indicates focal inflammation.
c , Histopathological score of pneumonia based on the percentage of alveolitis in a given section using the following scoring: 0, no pathological change; 1, affected area (≤10%); 2, affected area (<50%, >10%); 3, affected area (≥50%); an additional point was added when pulmonary edema and/or alveolar haemorrhage was observed. Data are median score ( n = 4; * P = 0.0286; two-tailed Mann–Whitney test). d , RNA in situ hybridization for SARS-CoV-2 viral RNA. Representative images for the alveoli and bronchi of hamsters infected with B.1.617.2 or B.1.1.529 ( n = 4) virus at 3 or 6 dpi are shown. Scale bars, 20 µm. See Supplementary Table 1 for more information. Although hamster ACE2 can serve as a receptor for the SARS-CoV-2 spike protein, some of the contact residues in hACE2 are not conserved 38 , which could diminish infectivity. To develop a more susceptible hamster model, members of the consortium used transgenic hamsters expressing hACE2 under the epithelial cytokeratin-18 promoter 39 . Whereas intranasal inoculation of 10 3 PFU of HP-095 D614G virus resulted in marked weight loss within the first week (Fig. 2k ) and uniform mortality by 10 dpi (Fig. 2l ), less weight loss and lower mortality ( P < 0.05) were observed after infection with B.1.1.529. Moreover, 1,000- to 10,000-fold lower levels of infectious virus were measured in the lungs of hACE2 transgenic hamsters challenged with B.1.1.529 than in those challenged with HP-095 D614G at 3 and 5 dpi (Fig. 2m ). As seen in wild-type Syrian hamsters, smaller differences in infection were observed in the nasal turbinates. Thus, B.1.1.529 infection in the lung is attenuated in both wild-type and hACE2 transgenic hamsters. Discussion Our experiments indicate that the B.1.1.529 variant is less pathogenic in laboratory mice and hamsters. Although these results are consistent with preliminary data in humans 40 , 41 , the basis for attenuation remains unknown. One study indicates that B.1.1.529 replicates faster in the human bronchus and less efficiently in lung cells, which may explain its greater transmissibility and putative lower disease severity 42 . We observed that B.1.1.529 resulted in a lower level of infection of hamster bronchial cells in vivo and lower viral burden in nasal washes and turbinates in mice compared with other SARS-CoV-2 strains. The attenuation in mice was unexpected given that B.1.1.529 has alterations in the RBD that are sites associated with adaptation for mice 25 , 26 , 27 . The attenuation in hamsters seen by our group and others 43 was also surprising, given that other SARS-CoV-2 variants replicate to high levels in this animal 35 , 44 , 45 . Whereas the >30 changes in the B.1.1.529 spike protein could impact receptor engagement, changes in other proteins could affect replication, temperature sensitivity, cell and tissue tropism, and induction of pro-inflammatory responses in a species-specific manner. Our results showing attenuated B.1.1.529 infection in laboratory mice do not support the suggestion that B.1.1.529 has a mouse origin 4 . However, infection studies in wild mice 46 are needed to fully address this question. Although B.1.1.529 is less pathogenic in mice and hamsters, these animals will still have utility in evaluating vaccines, antibodies or small-molecule inhibitors. The mice and hamsters tested, to varying degrees, showed evidence of viral replication and dissemination to the lower respiratory tract, which could be mitigated by countermeasures.
The most severe B.1.1.529 infection and disease was observed in hACE2-expressing mice and hamsters, which is consistent with findings for other SARS-CoV-2 strains 16 , 24 , 39 , 47 , and is possibly related to the enhanced interactions between hACE2 and B.1.1.529 spike 48 . Indeed, structural analysis of the B.1.1.529 spike protein in complex with hACE2 reveals new interactions formed by mutated residues in the RBD 48 , 49 . These in vivo studies were performed as part of the SAVE/NIAID consortium and reflect a network that communicates weekly to expedite progress on SARS-CoV-2 variants. This format had several advantages: animal experiments were reproduced across laboratories, providing confidence in results; several B.1.1.529 isolates were tested, limiting the possibility of sequence adaptations in a strain from one laboratory that could skew results; several strains of mice and hamsters at different ages were tested, allowing for a comprehensive dataset; and the groups used overlapping metrics to evaluate infection and disease in the different animal models. We note several limitations to our study. First, our experiments reflect data from a consortium that did not use uniform study design and metrics, which created variability in outcomes; despite this, data from several groups indicate that B.1.1.529 is attenuated in rodent models. Second, although attenuation of B.1.1.529 in mice and hamsters correlates with preliminary data in humans, evaluation in nonhuman primates and unvaccinated, previously uninfected humans is needed for corroboration. Third, we used the prevailing B.1.1.529 isolate that lacks an R346K substitution. Approximately 10% of B.1.1.529 sequences in GISAID as of the writing of this paper carry the R346K substitution, and this substitution or others in gene products apart from spike might affect virulence. Although one of the B.1.1.529 isolates we tested contains an additional A701V change in spike near the furin cleavage site, it was still attenuated in mice. Fourth, detailed pathological and immunological analyses were not performed for all of the animal species studied. It remains possible that B.1.1.529 is attenuated clinically (for example, weight loss) because of defects in promoting pathological host responses. In summary, our collective studies rapidly and reproducibly demonstrated attenuated infection in several strains of mice and hamsters. Experiments are ongoing to determine the basis for attenuation in mice and hamsters and to determine how this relates to B.1.1.529 infection in humans. Methods Cells Vero-TMPRSS2 (refs. 35 , 50 , 51 ) and Vero-hACE2-TMPRSS2 (ref. 52 ) cells were cultured at 37 °C in Dulbecco's modified Eagle medium (DMEM) supplemented with 10% fetal bovine serum (FBS), 10 mM HEPES pH 7.3 and 100 U ml −1 penicillin–streptomycin. Vero-TMPRSS2 cells were supplemented with 5 μg ml −1 blasticidin or 1 mg ml −1 geneticin (depending on the cell line) and in some cultures with Plasmocin. Vero-hACE2-TMPRSS2 cells were supplemented with 10 µg ml −1 puromycin. All cells routinely tested negative for mycoplasma using a PCR-based assay. Viruses The WA1/2020 recombinant strains with substitutions (D614G and/or N501Y/D614G) were described previously 53 .
The B.1.1.529 isolates (hCoV-19/USA/WI-WSLH-221686/2021 (GISAID: EPI_ISL_7263803), hCoV-19/Japan/NC928-2N/2021 (NC928) (GISAID: EPI_ISL_7507055), hCoV-19/USA/NY-MSHSPSP-PV44476/2021 (GISAID: EPI_ISL_7908052), hCoV-19/USA/NY-MSHSPSP-PV44488/2021 (GISAID: EPI_ISL_7908059) and hCoV-19/USA/GA-EHC-2811C/2021 (GISAID: EPI_ISL_7171744)) were obtained from nasal swabs and passaged on Vero-TMPRSS2 cells as described previously 33 , 35 , 51 . Sequence differences between B.1.1.529 isolates are depicted in Supplementary Table 2 . Other viruses used included: SARS-CoV-2/UT-HP095-1N/Human/2020/Tokyo (HP-095; D614G), hCoV-19/USA/CA_CDC_5574/2020 (Alpha, B.1.1.7; BEI NR54011), hCoV-19/USA/MD-HP01542/2021 (Beta, B.1.351), 20H/501Y.V2 (Beta, B.1.351), hCoV-19/USA/PHC658/202 (Delta, B.1.617.2) and hCoV-19/USA/WI-UW-5250/2021 (Delta, B.1.617.2; UW-5250) 54 . All viruses were subjected to next-generation sequencing as described previously 55 to confirm the stability of substitutions and avoid introduction of adventitious alterations. All virus experiments were performed in an approved biosafety level 3 facility. Animal experiments and approvals Animal studies were carried out in accordance with the recommendations in the Guide for the Care and Use of Laboratory Animals of the National Institutes of Health. The protocols were approved by the Institutional Animal Care and Use Committee at the Washington University School of Medicine (assurance number A3381–01), University of Wisconsin, Madison (V006426), St Jude Children's Research Hospital (assurance number D16-00043), Emory University, University of Iowa (assurance number A3021-01), Icahn School of Medicine at Mount Sinai (PROTO202100007), BIOQUAL, Inc., and the Animal Experiment Committee of the Institute of Medical Science, the University of Tokyo (approval numbers PA19-72 and PA19-75). Virus inoculations were performed under anaesthesia that was induced and maintained with ketamine hydrochloride and xylazine, and all efforts were made to minimize animal suffering. In vivo studies were not blinded, and animals were randomly assigned to infection groups. No sample-size calculations were performed to power each study. Instead, sample sizes were determined based on previous in vivo virus challenge experiments. Mouse infection experiments Heterozygous K18-hACE2 C57BL/6J mice (strain B6.Cg-Tg(K18-ACE2)2Prlmn/J), 129 mice (strain 129S2/SvPasCrl or 129S1/SvImJ) and C57BL/6 (strain 000664) mice were obtained from The Jackson Laboratory and Charles River Laboratories. BALB/c mice were purchased from Japan SLC Inc. Animals were housed in groups and fed standard chow diets. Infection experiments were performed as follows. In a first set of experiments, 5-month-old female K18-hACE2 mice were inoculated intranasally with 10 3 , 10 4 or 10 5 FFU of SARS-CoV-2. In a second set of experiments, 129S1 male and female mice were used between 10 and 20 weeks of age. Mice were anaesthetized with isoflurane and inoculated intranasally with virus (50 μl, 10 6 PFU per mouse). In a third set of experiments, 6-week-old female BALB/c mice were inoculated intranasally with 10 5 PFU of hCoV-19/Japan/NC928-2N/2021 or hCoV-19/USA/MD-HP01542/2021. In a fourth set of experiments, retired breeder female C57BL/6 mice (10 to 14 months old) were anaesthetized with ketamine–xylazine and inoculated intranasally with SARS-CoV-2 in a total volume of 50 μl DMEM. Animal weight and health were monitored daily.
In a fifth set of experiments, 6–8-week-old female 129S1 mice and 6-month-old female K18-hACE2 mice were inoculated intranasally under light ketamine–xylazine sedation with 10 4 PFU of hCoV-19/USA/NY-MSHSPSP-PV44476/2021 or hCoV-19/USA/NY-MSHSPSP-PV44488/2021 in a total volume of 50 μl. Hamster infection experiments Male 5–6-week-old Syrian golden hamsters were obtained from Charles River Laboratories, Envigo or Japan SLC Inc. The K18-hACE2 transgenic hamster line was developed with a piggyBac-mediated transgenic approach, in which the K18-hACE2 cassette from the pK18-hACE2 plasmid 14 was transferred into a piggyBac vector, pmhyGENIE-3 (ref. 56 ), for pronuclear injection. hACE2 transgenic hamsters will be described in detail elsewhere 39 . Female 12-month-old transgenic animals were used. Infection experiments were performed as follows. In a first set of experiments, animals were challenged intranasally with 10 3 PFU of WA1/2020 D614G or B.1.1.529 variant in 100 µl. In a second set of experiments, under isoflurane anaesthesia, wild-type Syrian hamsters were intranasally inoculated with 10 3 PFU of SARS-CoV-2 strains in 30 µl. Body weight was monitored daily. For virological and pathological examinations, four hamsters per group were euthanized at 3 and 6 dpi, and nasal turbinates and lungs were collected. The virus titres in the nasal turbinates and lungs were determined by plaque assays on Vero-TMPRSS2 cells. Human ACE2 transgenic hamsters were intranasally inoculated with 10 3 PFU of HP-095 D614G or B.1.1.529 (hCoV-19/USA/WI-WSLH-221686/2021) in 50 µl. Body weight and survival were monitored daily, and nasal turbinates and lungs were collected at 3 and 5 dpi for virological analysis. In a third set of experiments, six-week-old male Syrian golden hamsters were randomized into groups of 4 to 6 and inoculated with SARS-CoV-2 via delivery of 100 µl of appropriately diluted virus in PBS equally split between both nostrils. Weight change and clinical observations were collected daily. In a fourth set of experiments, while under isoflurane anaesthesia, male 8–10-week-old hamsters were inoculated intranasally with 10 4 PFU of WA1/2020 or B.1.1.529 in 100 µl volume. Body weight and survival were monitored daily. Nasal washes were taken at 4 dpi for virological analysis. Measurement of viral burden Mouse studies Tissues were weighed and homogenized with zirconia beads in a MagNA Lyser instrument (Roche Life Science) in 1,000 μl DMEM supplemented with 2% heat-inactivated FBS. Tissue homogenates were clarified by centrifugation at 10,000 r.p.m. for 5 min and stored at −80 °C. RNA was extracted using the MagMax mirVana Total RNA isolation kit (Thermo Fisher Scientific) on the Kingfisher Flex extraction robot (Thermo Fisher Scientific). Viral RNA ( N gene) was reverse transcribed and amplified using the TaqMan RNA-to-CT 1-Step Kit (Thermo Fisher Scientific), and data were analysed and normalized as described previously 57 . Infectious virus titres were determined by plaque assay on Vero-hACE2-TMPRSS2 cells as previously published 24 . The viral titres in the nasal turbinates and lungs were determined by plaque assay on Vero-TMPRSS2 cells as previously published 51 . At the indicated day post infection, mice were euthanized with isoflurane overdose and one lobe of lung tissue was collected in an Omni Bead Ruptor tube filled with Tri Reagent (Zymo, number R2050-1-200). Tissue was homogenized using an Omni Bead Ruptor 24 (5.15 m s −1 , 15 s), and then centrifuged to remove debris.
RNA was extracted using a Direct-zol RNA MiniPrep Kit (Zymo, number R2051), and then converted to cDNA using a High-capacity Reverse Transcriptase cDNA Kit (Thermo, number 4368813). SARS-CoV-2 RNA-dependent RNA polymerase and subgenomic RNA were measured as described previously 29 , 58 . The subgenomic SARS-CoV-2 RNA levels were quantified in nasal turbinates and lungs by quantitative PCR with reverse transcription as previously published 29 , 55 . Infectious virus titres in nasal turbinates and lungs were determined by plaque assay on Vero-TMPRSS2 cells as described previously 59 . Hamster studies Lungs were collected at 4 dpi and homogenized in 1.0 ml of DMEM, clarified by centrifugation (1,000 g for 5 min) and stored at −80 °C. Nasal washes were clarified by centrifugation (2,000 g for 10 min) and the supernatant was stored at −80 °C. To quantify viral load in lung tissue homogenates and nasal washes, RNA was extracted from 100 µl samples using E.Z.N.A. Total RNA Kit I (Omega) and eluted with 50 µl of water. Four microlitres of RNA was used for real-time quantitative PCR with reverse transcription to detect and quantify the N gene of SARS-CoV-2 using the TaqMan RNA-to-CT 1-Step Kit (Thermo Fisher Scientific) as described previously 60 . The virus titres in the nasal turbinates and lungs were determined by plaque assay on Vero E6 cells expressing human TMPRSS2 as previously published 61 . RNA was extracted from clarified nasal washes using the Qiagen RNeasy extraction kit (Qiagen) following the manufacturer's instructions. Samples were purified on the included columns and eluted in 50 µl of nuclease-free water. PCR was conducted using 4× TaqMan Fast Virus Master Mix (Thermo Fisher) and an N -gene primer/probe set. Plaque assay Vero-TMPRSS2 or Vero-TMPRSS2-hACE2 cells were seeded at a density of 1 × 10 5 cells per well in 24-well tissue culture plates. The following day, medium was removed and replaced with 200 μl of material to be titrated, diluted serially in DMEM supplemented with 2% FBS. One hour later, 1 ml of methylcellulose overlay was added. Plates were incubated for 72 h, and then fixed with 4% paraformaldehyde (final concentration) in PBS for 20 min. Plates were stained with 0.05% (w/v) crystal violet in 20% methanol and washed twice with distilled, deionized water. Measurement of cytokines and chemokines Superior and middle lobes of the lungs from K18-hACE2 mice (mock-infected or 3 dpi) were collected, homogenized and then stored at −80 °C. After thawing, lung homogenates were centrifuged at 10,000 g for 5 min at 4 °C. Samples were inactivated with ultraviolet light in a clear, U-bottom 96-well plate (Falcon). A mouse 26-plex, bead-based Luminex assay (catalogue number EPXR260-26088-901) was used to profile cytokine and chemokine levels in clarified lung supernatants. The assay was performed according to the manufacturer's instructions, and all incubation steps occurred on an orbital shaker set at 300 r.p.m. Briefly, 50 μl of clarified lung homogenate supernatant was combined with beads in a lidded, black 96-well plate supplied as part of the kit and incubated for 30 min at room temperature, and then overnight at 4 °C. The next day, the plate was allowed to equilibrate to room temperature for 30 min and washed 3 times with 150 μl per well of 1× wash buffer, and then 25 μl per well of 1× detection antibody mixture was added for 30 min at room temperature. The plate was washed 3 times, and then 50 μl per well of 1× Streptavidin–PE solution was added for 30 min at room temperature.
After washing 3 times, 120 μl per well of reading buffer was added, and the plate was incubated for 5 min at room temperature. Data were acquired on a Luminex 100/200 analyser (Millipore) with xPONENT software (version 4.3) and analysed using GraphPad Prism (version 8.0) and R (version 4.0.5). Micro-CT imaging Hamsters were inoculated intranasally with 10 3 PFU (in 30 μl) of B.1.1.529 (strain NC928), B.1.617.2 (UW-5250) or PBS. Lungs of the infected animals were imaged by using an in vivo micro-CT scanner (CosmoScan FX; Rigaku Corporation). Under ketamine–xylazine anaesthesia, the animals were placed in the image chamber and scanned for 2 min at 90 kV, 88 μA, field of view 45 mm and pixel size 90.0 μm. After scanning, the lung images were reconstructed by using the CosmoScan Database software of the micro-CT scanner (Rigaku Corporation) and analysed by using the manufacturer-supplied software. A CT severity score, adapted from a human scoring system, was used to grade the severity of the lung abnormalities 62 . Each lung lobe was analysed for degree of involvement and scored from 0 to 4 depending on the severity: 0 (none, 0%), 1 (minimal, 1%–25%), 2 (mild, 26%–50%), 3 (moderate, 51%–75%) or 4 (severe, 76%–100%). Scores for the five lung lobes were summed to obtain a total severity score of 0–20, reflecting the severity of abnormalities across the three infected groups. Images were anonymized and randomized; the scorer was blinded to the group allocation. Pathology Excised animal tissues were fixed in 4% paraformaldehyde in PBS, and processed for paraffin embedding. The paraffin blocks were cut into 3-µm-thick sections and mounted on silane-coated glass slides. Sections were processed for in situ hybridization using an RNAscope 2.5 HD Red Detection kit (Advanced Cell Diagnostics) with an antisense probe targeting the nucleocapsid gene of SARS-CoV-2 (Advanced Cell Diagnostics). Lung tissue sections were scored for pathology on the basis of the percentage of alveolar inflammation in a given area of a pulmonary section collected from each animal in each group using the following scoring system: 0, no pathological change; 1, affected area (≤10%); 2, affected area (<50%, >10%); 3, affected area (≥50%); an additional point was added when pulmonary edema and/or alveolar haemorrhage was observed. Reagent availability All reagents described in this paper are available through material transfer agreements. Statistical analysis The number of independent experiments and technical replicates used are indicated in the relevant figure legends. Statistical analysis included unpaired t -tests, Mann–Whitney tests and ANOVA with post hoc multiple-comparison tests. Reporting summary Further information on research design is available in the Nature Research Reporting Summary linked to this paper. Data availability All data supporting the findings of this study are available in the paper. There are no restrictions in obtaining access to primary data. Source data are provided with this paper. Code availability No code was used in the course of the data acquisition or analysis.
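The lobe-wise CT severity scoring described in the Methods is simple enough to express as a short R sketch; this is purely illustrative (the authors state that no code was used in their analysis), and the lobe names and involvement percentages below are hypothetical.

lobe_score <- function(pct_involved) {
  # 0: none (0%); 1: minimal (1-25%); 2: mild (26-50%);
  # 3: moderate (51-75%); 4: severe (76-100%)
  as.integer(cut(pct_involved, breaks = c(-1, 0, 25, 50, 75, 100),
                 labels = FALSE)) - 1L
}

involvement <- c(left = 30, cranial = 10, middle = 0, caudal = 60, accessory = 5)
total_ct_score <- sum(lobe_score(involvement))  # 2 + 1 + 0 + 3 + 1 = 7, on the 0-20 scale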
When South African scientists announced in November that they had identified a new variant of the virus that causes COVID-19, they also reported two worrying details: one, that this new variant's genome was strikingly different from that of any previous variant, containing dozens of mutations compared with the original virus that emerged in 2019; and two, that the new variant—dubbed omicron—was spreading like wildfire. The world needed to know quickly how well COVID-19 immunity—either from vaccination or prior infection—and therapies would hold up against this new variant. Researchers at Washington University School of Medicine in St. Louis, led by Michael S. Diamond, MD, Ph.D., the Herbert S. Gasser Professor of Medicine, immediately started investigating the new variant of SARS-CoV-2, the virus that causes COVID-19. Within a few weeks, they had data showing that omicron was a mixed bag: It could resist most antibody-based therapeutics, but it was less able to cause severe lung disease, at least in mice and hamsters. "What omicron demonstrates is that a virus's intrinsic pathogenicity—its ability to cause disease—is just one factor you have to consider in the context of a pandemic," said Diamond, also a professor of molecular microbiology and of pathology & immunology. "The omicron variant is less pathogenic, but it's not not pathogenic. It can still cause severe disease, and it still kills people. When you have huge numbers of people getting infected in a short period of time, even if only a small fraction get seriously ill, it can still be enough to overwhelm the health-care system. Add that to the fact that many of our antibody therapies have lost effectiveness, and you get the crisis we've seen this winter." Diamond worked with Jacco Boon, Ph.D., an associate professor of medicine, of molecular microbiology, and of pathology & immunology, and colleagues at the SARS-CoV-2 Assessment of Viral Evolution (SAVE) Program to investigate omicron's capacity to cause severe disease. The SAVE Program was established by the National Institute of Allergy and Infectious Diseases to rapidly characterize emerging variants and monitor their potential impact on COVID-19 vaccines, therapeutics and diagnostics. The omicron wave peaked first in South Africa. Early reports from the country indicated that the huge wave of infections was followed by a surprisingly small wave of hospitalizations and deaths. This encouraging news suggested that omicron might cause milder disease than previous variants. But the South African and U.S. populations are very different. South Africa is much younger, and has a lower vaccination rate but a higher rate of prior infection, and a different pattern of high-risk health conditions. It was unclear whether the U.S. would follow the same path as South Africa. To separate the role of the virus itself from population factors such as average age and pre-existing immunity, Boon, Diamond and colleagues studied animals infected with the variant. The group tested omicron variants from three people in four strains of mice and two strains of hamsters. For comparison, they infected separate groups of animals with the original strain of SARS-CoV-2 or the beta variant, which emerged in South Africa in fall 2020. Beta caused a large wave of infections in South Africa in 2020 before spreading globally. People infected with beta were more likely to become severely ill and require hospitalization than those infected with other variants. 
Compared with animals infected with the original strain or with the beta variant, animals infected with omicron lost less weight, had less virus in their noses and lungs, had lower levels of inflammation, and lost less respiratory function. "Omicron virus is milder in every rodent model of COVID-19 disease that we tested," Boon said. "This suggests that it may also be less capable of causing severe disease in people, although we can't say for certain because people, obviously, are very different from mice and hamsters. But just because it might be milder doesn't mean it's harmless. People are still being hospitalized and dying every day, so it's important to continue taking precautions against infection." The disease-severity study was published in Nature, with co-corresponding authors Boon, Diamond and Yoshihiro Kawaoka, DVM, Ph.D., a professor of virology at the University of Wisconsin-Madison. Meanwhile, Diamond also began investigating omicron's ability to resist antibody-based therapeutics. The virus that causes COVID-19 uses its spike protein to get inside cells. Because of the critical importance of spike to the virus, all COVID-19 vaccines and antibody-based therapies used in the U.S. target the protein. Omicron has 30 mutations in its spike gene, enough to make scientists worry that some anti-spike antibodies might fail against omicron's very different spike protein. Diamond, along with staff scientist and first author Laura VanBlargan, Ph.D., and colleagues tested all antibodies then authorized by the Food and Drug Administration to treat or prevent COVID-19—including antibodies made by AstraZeneca, Celltrion, Eli Lilly, Regeneron and Vir Biotechnology—for their ability to prevent the omicron variant from infecting cells. The antibodies were tested individually and in the combinations in which they were authorized to be used. Most of the antibodies were much less potent against omicron than against the original virus. Many failed completely. Only Vir's antibody, known as sotrovimab, retained the power to neutralize the omicron variant. These data, published in Nature Medicine in January, contributed to a growing stack of evidence that many antibody-based COVID-19 therapies fail to help people sick with omicron. As omicron became the dominant variant in January, accounting for nearly all COVID-19 cases in the U.S., the FDA withdrew authorization for all antibody-based COVID-19 therapeutics except sotrovimab.
10.1038/s41586-022-04441-6
Biology
Common cold viruses reveal one of their strengths
Alan H. M. Wong et al., Receptor-binding loops in alphacoronavirus adaptation and evolution, Nature Communications (2017). DOI: 10.1038/s41467-017-01706-x Journal information: Nature Communications
http://dx.doi.org/10.1038/s41467-017-01706-x
https://phys.org/news/2017-11-common-cold-viruses-reveal-strengths.html
Abstract RNA viruses are characterized by a high mutation rate, a buffer against environmental change. Nevertheless, the means by which random mutation improves viral fitness is not well characterized. Here we report the X-ray crystal structure of the receptor-binding domain (RBD) of the human coronavirus, HCoV-229E, in complex with the ectodomain of its receptor, aminopeptidase N (APN). Three extended loops are solely responsible for receptor binding and the evolution of HCoV-229E and its close relatives is accompanied by changing loop–receptor interactions. Phylogenetic analysis shows that the natural HCoV-229E receptor-binding loop variation observed defines six RBD classes whose viruses have successively replaced each other in the human population over the past 50 years. These RBD classes differ in their affinity for APN and their ability to bind an HCoV-229E neutralizing antibody. Together, our results provide a model for alphacoronavirus adaptation and evolution based on the use of extended loops for receptor binding. Introduction Coronaviruses are enveloped, positive-stranded RNA viruses that cause a number of respiratory, gastrointestinal, and neurological diseases in birds and mammals 1 , 2 . The coronaviruses all share a common ancestor, and four different genera (alpha, beta, gamma, and delta) have evolved that collectively use at least four different glycoproteins and acetylated sialic acids as host receptors or attachment factors 3 , 4 , 5 . Four coronaviruses, HCoV-229E, HCoV-NL63, HCoV-OC43, and HCoV-HKU1, circulate in the human population and collectively they are responsible for a significant percentage of the common cold as well as more severe respiratory disease in vulnerable populations 6 , 7 . HCoV-229E and HCoV-NL63 are both alphacoronaviruses and although closely related, they have evolved to use two different receptors, aminopeptidase N (APN) and angiotensin converting enzyme 2 (ACE2), respectively 8 , 9 . The more distantly related betacoronaviruses, HCoV-OC43 and HCoV-HKU1, are less well characterized and although HCoV-OC43 uses 9- O -acetylsialic acid as its receptor 10 , the receptor for HCoV-HKU1 has not yet been determined 11 , 12 , 13 . Recent zoonotic transmission of betacoronaviruses from bats is responsible for SARS and MERS, and in these cases infection is associated with much more serious disease and high rates of mortality 14 , 15 , 16 . Like HCoV-NL63, SARS-CoV uses ACE2 17 as its receptor and the observation that MERS-CoV uses dipeptidyl peptidase 4 18 highlights the fact that coronaviruses with new receptor specificities continue to arise. The coronavirus spike protein (S-protein) is a trimeric single-pass membrane protein that mediates receptor binding and fusion of the viral and host cell membranes 19 . It is a type-1 viral fusion protein possessing two regions, the S1 region that contains the receptor-binding domain (RBD) and the S2 region that contains the fusion peptide and heptad repeats involved in membrane fusion 20 , 21 , 22 , 23 , 24 , 25 . The coronavirus S-protein is also a major target of neutralizing antibodies and one outcome of host-induced neutralizing antibodies is the selection of viral variants capable of evading them, a process known to drive variation 26 , 27 , 28 . As shown by both in vivo and in vitro studies, changes in host, host cell type, cross-species transmission, receptor expression levels, serial passage, and tissue culture conditions can also drive viral variation 29 , 30 , 31 , 32 , 33 .
RNA viruses are characterized by a high mutation rate, a property serving as a buffer against environmental change 34 . A host-elicited immune response, the introduction of antiviral drugs, and the transmission to a new species provide important examples of environmental change 35 . Nevertheless, the means by which random mutations lead to viral variants with increased fitness and enhanced survival in the new environment are not well characterized. Given their wide host range, diverse receptor usage and ongoing zoonotic transmission to humans, the coronaviruses provide an important system for studying RNA virus adaptation and evolution. The alphacoronavirus, HCoV-229E, is particularly valuable as it circulates in the human population and a sequence database of natural variants isolated over the past fifty years is available. Moreover, changes in sequence and serology have suggested that HCoV-229E is changing over time in the human population 36 , 37 , 38 . Reported here is the X-ray structure of the HCoV-229E RBD in complex with human APN (hAPN). The structure shows that receptor binding is mediated solely by three extended loops, a feature shared by HCoV-NL63 and the closely related porcine respiratory coronavirus, PRCoV. It also shows that the HCoV-229E RBD binds at a site on hAPN that differs from the site where the PRCoV RBD binds on porcine APN (pAPN), evidence of an ability of the RBD to acquire novel receptor interactions. Remarkably, we find that the natural HCoV-229E sequence variation observed over the past fifty years is highly skewed to the receptor-binding loops. Moreover, we find that the loop variation defines six RBD classes (Classes I–VI) whose viruses have successively replaced each other in the human population. These RBD classes differ in their affinity for hAPN and their ability to be bound by a neutralizing antibody elicited by the HCoV-229E reference strain (Class I). Taken together, our results provide a model for alphacoronavirus adaptation and evolution stemming from the use of extended loops for receptor binding. Results Characterization of the HCoV-229E RBD interaction with hAPN To define the limits of the HCoV-229E RBD, we expressed a series of soluble S-protein fragments and measured their affinity to a soluble fragment (residues 66–967) 39 of hAPN, the HCoV-229E receptor. The smallest S-protein fragment made (residues 293–435) bound hAPN with an affinity ( K d of 0.43 ± 0.1 µM) similar to that of the entire S1 region (residues 17–560) (Table 1 , Supplementary Fig. 1A , B) and this fragment was used in the structure determination. To confirm the importance of the HCoV-229E RBD–hAPN interaction for viral infection, we showed that both the RBD and the hAPN ectodomain inhibited viral infection in a cell-based assay (Fig. 1a, b, c ). Table 1 Analysis of the hAPN ectodomain (residues 66–967, WT and mutants) interaction with fragments of the HCoV-229E S-protein (WT and mutants) using surface plasmon resonance Full size table Fig. 1 Characterization of soluble fragments of the HCoV-229E S-protein and hAPN. a HCoV-229E infection of L-132 cells in the presence of: PBS, the HCoV-229E S1 domain (residues 17–560 at 10 µM), and the HCoV-229E RBD (residues 293–435 at 30 µM). Statistics were obtained from three independent experiments. Statistical significance (ANOVA): *** p < 0.001; error bars correspond to the standard deviation. b Representative images of HCoV-229E infection of L-132 cells in the presence of the hAPN ectodomain at various concentrations. 
Green fluorescence measures the expression of the viral S-protein. Magnification (100×) and scale bar = 20 µm. c Quantitation of the hAPN inhibition experiment. Statistics were obtained from three independent experiments. Statistical significance (ANOVA): *** p < 0.001 Full size image Crystals of the HCoV-229E RBD–hAPN complex were obtained by co-crystallization of the complex after size exclusion chromatography. The crystallographic data collection and refinement statistics are shown in Table 2 . The asymmetric unit contains one hAPN dimer (and associated RBDs) and one hAPN monomer (and associated RBD) that is related to its dimeric mate by a crystallographic two-fold rotation axis. Both dimers (non-crystallographic and crystallographic) are found in the closed conformation and are essentially identical to that which we previously reported 39 for hAPN in its apo form (RMSD over all Cα atoms of 0.34 Å). Each APN monomer is bound to one RBD as shown in Fig. 2a . The HCoV-229E RBD–hAPN interaction buries 510 Å 2 of surface area on the RBD and 490 Å 2 on hAPN. Table 2 X-ray crystallographic data collection and refinement statistics Full size table Fig. 2 HCoV-229E RBD in complex with the ectodomain of hAPN. a The complex between dimeric hAPN (domain I: blue, domain II: green, domain III: brown, and domain IV: yellow) and the HCoV-229E RBD (purple) is depicted in its likely orientation relative to the plasma membrane. The hAPN peptide and zinc ion (red spheres) binding sites are located inside a cavity distant from the virus binding site. Black bars represent the hAPN N-terminal transmembrane region. b Ribbon representation of the HCoV-229E RBD (gray) in complex with hAPN (same coloring as in a ). The three receptor-binding loops are colored, orange (loop 1), cyan (loop 2), and purple (loop 3). N and C label the N- and C-termini of the RBD. c Atomic details of the interaction at the binding interface. Hydrogen bonds and salt bridges are indicated by dashed lines. Red and blue correspond to oxygen and nitrogen atoms, respectively. Loop and hAPN coloring as in b Full size image The HCoV-229E RBD is an elongated six-stranded β-structural domain with three extended loops (loop 1: residues 308–325, loop 2: residues 352–359, loop 3: residues 404–408) at one end that exclusively mediate the interaction with hAPN (Fig. 2b ). Loop 1 is the longest and it contributes ~70% of the RBD surface buried on complex formation (Figs. 2c and 3g ). Within loop 1, residues Cys 317 and Cys 320 form a disulfide bond that makes a stacking interaction with the side chains of hAPN residues Tyr 289 and Glu 291 (Fig. 2c ). The C317S/C320S RBD double mutant showed no binding to hAPN at concentrations up to 15 μM (Table 1 , Supplementary Fig. 1D , and Supplementary Table 1 ), evidence of the importance of the stacking interaction and a likely role for the disulfide bond in defining the conformation of loop 1. Notably, loop 1 contains three tandemly repeated glycine residues (residues 313–315) whose NH groups donate hydrogen bonds to the side chain of Asp 288 and the carbonyl oxygen of Phe 287 of hAPN (Fig. 2c ); mutation of hAPN residue Asp 288 to alanine leads to a ~10-fold reduction in affinity (Table 1 , Supplementary Fig. 2A , and Supplementary Table 1 ). Apolar interactions between RBD residues Cys 317 and Phe 318 and hAPN residues Tyr 289 , Val 290 , Ile 309 , Ala 310 , and Leu 318 are also observed (Fig. 
2c ); mutation of RBD residue Phe 318 leads to a 13-fold reduction in affinity while mutation of hAPN residues Tyr 289 , Val 290 , Ile 309 , and Leu 318 lead to a 10- to 30-fold reduction in affinity (Table 1 , Supplementary Fig. 1C , Supplementary Fig. 2B –E, and Supplementary Table 1 ). Centered in the contact area between the RBD and hAPN is a hydrogen bond between the side chain of RBD residue Asn 319 and the carbonyl oxygen of hAPN residue Glu 291 (Fig. 2c ); mutation of RBD residue Asn 319 to alanine also ablates binding at the highest concentrations achievable (Table 1 , Supplementary Fig. 1E , and Supplementary Table 1 ). The remaining loop 1 residues serve to satisfy most of the hydrogen bond donor/acceptor pairs of the edge β-strand on subdomain 2 of the hAPN molecule. Most prominent of the remaining RBD–hAPN interactions is the salt bridge between loop 2 residue Arg 359 and hAPN residue Asp 315 and the interactions made by loop 3 residues Trp 404 and Ser 407 with hAPN residues Asp 315 and Lys 292 (Fig. 2c ); the importance of Trp 404 of loop 3 is evidenced by the fact that mutating it also ablates binding (Table 1 , Supplementary Fig. 1F , and Supplementary Table 1 ). Fig. 3 Alphacoronavirus receptor-binding domains. a Surface representation of an APN-based overlay of the HCoV-229E RBD–hAPN and PRCoV RBD–pAPN complexes. Human APN (dark gray), porcine APN (light gray), HCoV-229E RBD (green), and PRCoV RBD (yellow). APNs are aligned on domain IV. b Top view of the APN surface buried by HCoV-229E RBD binding (H-site, green) and PRCoV RBD binding (P-site; yellow) mapped onto hAPN. c Sequence alignment of human and porcine APN. Residues in the H-site are highlighted in green and residues in the P-site are highlighted in yellow. The “|“ symbol demarcates every 10 residues in the alignment. The N -glycosylation sequon (Asn residue 286) in porcine APN is shown in red (Glu residue 291 in human). d Ribbon representation of the HCoV-229E RBD (receptor: hAPN), the PRCoV RBD (receptor: pAPN), and the HCoV-NL63 RBD (receptor: hACE2). Loops 1, 2, and 3 are colored in orange, cyan, and purple, respectively. e Sequence alignment of the HCoV-229E, PRCoV, and HCoV-NL63 RBDs. Residues in loops 1, 2, and 3 are enclosed by orange, cyan, and purple boxes, respectively. The cysteine residues involved in the loop 1 disulfide bond are indicated by “^“. The “|“ symbol demarcates every 10 residues in the alignment. Residues directly interacting with the receptor are colored red. f Structural alignment of the HCoV-229E, HCoV-NL63, and PRCoV RBDs with receptor interacting residues colored orange, green, and blue, respectively. Numbers indicate the loop numbers. The structures are shown in two views rotated by 180 o relative to each other. g The percentage contribution made by each loop to the total surface area buried on the RBD in the receptor complexes Full size image HCoV-229E and PRCoV bind at different sites on APN As with HCoV-229E, the porcine respiratory alphacoronavirus, PRCoV, also uses APN as its receptor 40 . As our complex shows, HCoV-229E binds at a site on hAPN (H-site) that differs from the site on pAPN (P-site) used by PRCoV (Fig. 3a, b ). Glu 291 in hAPN, a residue in the hAPN–RBD interface, is an N -glycosylated asparagine (Asn 286 ) in pAPN and attempts to dock the HCoV-229E RBD at the H-site on pAPN leads to a steric clash with the N -glycan (Supplementary Fig. 3A ). 
Consistent with this observation, the HCoV-229E RBD cannot bind to a mutant form of hAPN (E291N/K292E/Q293T) that possesses an N -glycan at position 291, as we have shown (Table 1 , Supplementary Fig. 4A –C). Attempts to dock the PRCoV RBD at the P-site on hAPN also leads to a steric clash, in this case with hAPN residue Arg 741 (Supplementary Fig. 3B ). Notably, porcine transmissible gastroenteritis virus (TGEV) can bind hAPN, and HCoV-229E can bind mouse APN, once the Arg side chain (on hAPN) and the N -glycan (on mouse APN) on the respective APNs have been mutated 41 . Across species, the sequence identity at the H- and P-sites is only ~60% (Fig. 3c and Supplementary Fig. 3C ) and the receptor-binding loops of these viruses must be accommodating the remaining APN structural differences on receptors from species that they do not infect. Together these results provide evidence that the extended receptor-binding loops of these alphacoronaviruses possess conformational plasticity. The observation that HCoV-229E and PRCoV bind to different sites on APN has important consequences. Among species, APN is found in open/intermediate and closed conformations and conversion between them is thought to be important for the catalysis of its substrates 39 , 42 . The HCoV-229E RBD binds to hAPN in its closed conformation and structural comparison shows that the H-site does not differ between the open and closed conformations. This is to be contrasted with the P-site of pAPN that differs in the open and closed conformations. Indeed, the PRCoV RBD has recently been shown to bind to pAPN in the open conformation as a result of P-site interactions made possible in the open form 42 . These differences in binding and receptor conformation are reflected in the fact that enzyme inhibitors that promote the closed conformation of APN block TGEV infection 42 , but not HCoV-229E infection 8 , and the fact that the PRCoV S-protein 42 , but not HCoV-229E 43 , inhibits APN catalytic activity. The receptor-binding loops of HCoV-229E vary extensively Sequence data from viruses isolated over the past 50 years provides a wealth of data on the natural variation shown by HCoV-229E (Supplementary Fig. 5 ). With reference to the HCoV-229E RBD–hAPN complex reported here, we now show that 73% of the amino acids in the receptor-binding loops and supporting residues vary among the sequences analyzed (52 sequences in total), while only 11% of the RBD surface residues outside of the receptor-binding loops show variation (Fig. 4a, b ). Moreover, for the eight variants where full genome sequences were reported, the receptor-binding loops represent the location at which the greatest variation in the entire genome is observed (Fig. 4c ). Analysis of the HCoV-229E RBD–hAPN interface further shows that of the 16 RBD surface residues that are fully or partially buried on complex formation, 10 of them vary in at least one of the 52 sequences analyzed and a pairwise comparison of the sequences suggests that many of these positions can vary simultaneously (Supplementary Fig. 5 ). Finally, we show that the six invariant interface residues on the RBD (Gly 313 , Gly 315 , Cys 317 , Cys 320 , Asn 319 , and Arg 359 ) constitute only 45% of the viral surface area buried, the very region expected to be the most highly conserved from a receptor-binding standpoint. 
The remaining 55% (i.e., 279 Å 2 ) of the viral surface area buried is made up of 10 residues that differ in their variability and the role they play in complex formation (Supplementary Table 2 ). Fig. 4 Naturally occurring HCoV-229E sequence variation. a Color-coded amino-acid sequence conservation index (Chimera) mapped onto a ribbon representation of the HCoV-229E RBD. Blue represents a high percentage sequence identity and red represents a low percentage sequence identity among the 52 viral isolates analyzed. b Surface representation in the same orientation as in ( a , left), and rotated 180° (right). The Asn-GlcNAc moieties of the N -glycans are shown in stick representation. Color coding as in a . c Amino-acid sequence variation shown by the eight viral isolates whose entire genome sequences have been reported. The entire protein coding region of the viral genome was treated as a continuous amino acid string (8850 residues in total). Amino acid differences among the eight sequences were analyzed in 100 residue bins and for each bin the sum was plotted. Green-colored bins correspond to residues in the S-protein and purple-colored bins correspond to residues in the RBD. The horizontal dotted line denotes the average number of amino-acid differences per bin across the protein-coding region of the whole viral genome. d Alignment of the sequences selected for each of the six classes. The "|" symbol demarcates every 10 residues in the alignment. e Representative images showing HCoV-229E infection of L-132 cells in the presence of: PBS, monoclonal antibody 9.8E12 at two different concentrations, and monoclonal antibody 2.8H5 at two different concentrations (anti-HCoV-OC43 antibody). The nucleus is stained blue and green staining indicates viral infection. Magnification (×200) and scale bar = 10 µm. f Statistical quantification of the monoclonal antibody inhibition experiment. Error bars correspond to standard deviations obtained from three independent experiments. Loop variation leads to phylogenetic classes Phylogenetic analysis of the HCoV-229E RBD sequences found in the database showed that they segregate into six classes (Supplementary Fig. 6 ). Class I contains the ATCC-740 reference strain (originally isolated in 1967 and deposited in 1973) and related lab strains, while Classes II–VI represent clinical isolates that have successively replaced each other in the human population over time since the 1970s. To characterize these classes, a representative sequence from each was selected; for Class I, the RBD of the reference strain, also used in our structural analysis, was selected. To simplify characterization, the RBDs of the other five classes were synthesized with the Class I sequence in all but the loop regions (Fig. 4d ). As observed for Class I, the other RBDs do not bind to the hAPN mutant that introduces an N -glycan at Glu 291 (Supplementary Fig. 4D ), an observation suggesting that they all bind at the same site on hAPN. The RBDs bound hAPN with an ~16-fold range in affinity ( K d from ~30 to ~440 nM). These differences in affinity are largely a result of differences in k off with little difference in k on (Table 3 and Supplementary Fig. 7 ). Notably, the Class I RBD binds with the lowest affinity, while the RBDs from viral classes that have emerged most recently (Class V: viruses isolated in 2001–2004 and Class VI: viruses isolated in 2007–2015) bind with the highest affinity.
For each of the six classes, Supplementary Table 2 shows the identity of the loop residues that have shown variation. Of those buried in the RBD–hAPN interface, residues 314, 404, and 407 are particularly noteworthy as they undergo considerable variation in amino-acid character. Residue 314, for example, accounts for 9% of the total buried surface area on complex formation and changes from Gly to Val to Pro in the transition from Classes I to VI. Variation of this sort provides insight into how changes in receptor-binding affinity might be mediated during the process of viral adaptation. Table 3 Surface plasmon resonance-binding data for the interaction between the six HCoV-229E RBDs and hAPN. Each of the six RBD classes was also characterized using a neutralizing mouse monoclonal antibody (9.8E12) that we generated against the HCoV-229E reference strain (Class I). As shown in Fig. 4e, f , 9.8E12 inhibits HCoV-229E infection of the L-132 cell line. This antibody binds to the Class I RBD with a K d of 66 nM ( k on = 6.3 × 10 5 M −1 s −1 , k off = 0.041 s −1 ) and as shown by a competition binding experiment, it blocks the RBD–hAPN interaction (Supplementary Fig. 8A , B). In contrast, 9.8E12 shows no binding to the other five RBD classes at a concentration of 1 μM (Supplementary Fig. 8C ), strong evidence that the receptor-binding loops of the Class I RBD are important for antibody binding and that loop variation can abrogate antibody binding. Consistent with this observation, non-conserved amino-acid changes both within and outside of the RBD–hAPN interface are observed across all classes (Supplementary Table 2 ). Discussion Correlating structure and function with natural sequence data is a powerful means of studying viral adaptation and evolution. To this end, we have delimited the HCoV-229E RBD and determined its X-ray structure in complex with the ectodomain of its receptor, hAPN. We found that three extended loops on the RBD are solely responsible for receptor binding, and that these loops are highly variable among viruses isolated over the past 50 years. A phylogenetic analysis also showed that the RBDs of these viruses define six RBD classes whose viruses have successively replaced each other in the human population. The six RBDs differ in their receptor-binding affinity and their ability to be bound by a neutralizing antibody (9.8E12) and taken together, our findings suggest that the HCoV-229E sequence variation observed arose through adaptation and selection. Antibodies that block receptor binding are a common route to viral neutralization and exposed loops are known to be particularly immunogenic 44 . Loop-binding neutralizing antibodies are elicited by the alphacoronavirus TGEV 40 , and the receptor-binding loops of HCoV-229E mediate the binding of the neutralizing antibody, 9.8E12. As shown by the sequences of the viral isolates analyzed, the RBDs differ almost exclusively in their receptor-binding loops. 9.8E12 blocks the hAPN–RBD interaction and it can only bind to the RBD (Class I) found in the virus that elicited it. This observation shows that loop variability can abrogate neutralizing antibody binding. Indeed, the successive replacement or ladder-like phylogeny observed, when the sequence of the HCoV-229E RBD is analyzed, is characteristic of immune escape as shown by the influenza virus 45 , 46 .
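As a quick check on the kinetic measurements quoted above: for a 1:1 interaction, the equilibrium dissociation constant follows directly from the two rate constants as K d = k off / k on. The minimal sketch below (Python; this is an illustration of that relation using the reported 9.8E12 rate constants, not the authors' analysis code) reproduces the stated K d of ~66 nM.

```python
# Minimal sketch: equilibrium constant from SPR kinetic rates, assuming the
# same 1:1 binding model used for the paper's kinetic analysis.
# Rate constants are the reported values for antibody 9.8E12 binding the
# Class I RBD.

k_on = 6.3e5    # association rate constant, M^-1 s^-1
k_off = 0.041   # dissociation rate constant, s^-1

# For a 1:1 interaction, K_d = k_off / k_on.
K_d = k_off / k_on
print(f"K_d = {K_d:.2e} M ({K_d * 1e9:.0f} nM)")  # ~66 nM, matching the report

# The mean residence time of the complex (1/k_off) is often a more intuitive
# measure of how long the antibody stays bound once the complex has formed.
print(f"mean residence time = {1.0 / k_off:.0f} s")
```

The same relation explains why the six RBD classes span a ~16-fold K d range despite similar k on values: the spread is carried almost entirely by k off.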
Taken together, our results suggest that immune evasion contributes to, if not explains, the extensive receptor-binding loop variation shown by HCoV-229E over the past 50 years. HCoV-229E infection in humans does not provide protection against different isolates 37 , and viruses that contain a new RBD class that cannot be bound by the existing repertoire of loop-binding neutralizing antibodies provide an explanation for this observation. Neutralizing antibodies that block receptor binding can also be thwarted by an increase in the affinity/avidity between the virus and its host receptor. Increased receptor-binding affinity/avidity allows the virus to more effectively compete with receptor blocking neutralizing antibodies, a mechanism thought to be important for evading a polyclonal antibody response 47 . In addition, an optimal receptor binding affinity is thought to exist in a given environment. As such, adaptation in a new species, changes in tissue tropism, and differences in receptor expression levels can all lead to changes in receptor binding affinity 29 , 31 , 48 . Taken together, the observation that the most recent RBD classes (Class V: viruses isolated in 2001–2004 and Class VI: viruses isolated in 2007–2015) show a ~16-fold increase in affinity for hAPN over that of Class I (viruses isolated in 1967) merits further study. Recent cryoEM analysis has shown that the receptor-binding sites of HCoV-NL63, SARS-CoV, MERS-CoV, and by inference HCoV-229E, are inaccessible in some conformations of the pre-fusion S-protein trimer 21 , 22 , 23 , 24 , 25 . Although the ramifications of this structural arrangement are not yet clear, restricting access to the binding site has been proposed to provide a means of limiting B-cell receptor interactions against the receptor-binding site 23 . How this might work in mechanistic terms is also not clear given the need to bind receptor. However, in a simple model, the inaccessible S-protein conformation(s) would be in equilibrium with a less stable (higher energy) but accessible S-protein conformation(s). The energy difference between these conformations is a barrier to binding that decreases equally the intrinsic free energy of binding of both the viral receptor and the B-cell receptor, and relative binding energies may be the key. Both soluble hAPN and antibody 9.8E12 can inhibit HCoV–229E infection in a cell-based assay, an indication that their binding affinities ( K d of 430 and 66 nM, respectively) are sufficient to efficiently overcome the barrier to binding. However, B-cell receptors bind their antigens relatively weakly prior to affinity maturation 49 and they would be much less able to do so. The dynamics of the interconversion between accessible and inaccessible conformations may also be a factor in the recognition of inaccessible antibody epitopes 50 , 51 , and further work will be required to establish if and how restricting access to the receptor binding site enhances coronavirus fitness. The cryoEM structures also show that the receptor-binding loops make intra- and inter-subunit contacts in the inaccessible prefusion trimer. This suggests the intriguing possibility that the magnitude of the energy barrier, or the dynamics of the interconversion between accessible and inaccessible conformations, might be modulated by loop variation during viral adaptation.
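The "simple model" above can be made concrete with a back-of-the-envelope calculation. The sketch below is our own illustration, not a computation from the paper: it assumes a rapid two-state pre-equilibrium in which only the Boltzmann-weighted fraction of spikes in the accessible conformation can bind, so the apparent K d is the intrinsic K d divided by that fraction. The intrinsic affinity of 100 nM and the barrier heights are hypothetical values chosen purely for illustration.

```python
import math

# Illustrative two-state pre-equilibrium (an assumption for illustration; the
# paper describes this model only qualitatively and performs no such
# calculation). Only the "accessible" S-protein conformation can bind; if it
# lies dG kcal/mol above the inaccessible ground state, the accessible
# fraction is a Boltzmann-weighted two-state population.

RT = 0.593  # kcal/mol at 298 K

def apparent_kd(kd_intrinsic_nM, dG_kcal):
    w = math.exp(-dG_kcal / RT)      # relative weight of the accessible state
    f_accessible = w / (1.0 + w)     # fraction of spikes able to bind
    return kd_intrinsic_nM / f_accessible

# A hypothetical intrinsic RBD-receptor affinity of 100 nM is progressively
# weakened as the conformational energy barrier grows:
for dG in (0.0, 1.0, 2.0, 3.0):
    print(f"dG = {dG:.1f} kcal/mol -> apparent K_d = {apparent_kd(100.0, dG):.0f} nM")
```

Under these toy numbers, a barrier of only 2-3 kcal/mol weakens the apparent affinity by one to two orders of magnitude, which is why a weak-binding naive B-cell receptor would be penalized far more than a sub-micromolar binder such as soluble hAPN or matured 9.8E12.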
Immune evasion and cross-species transmission involve viral adaptation and we posit that the use of extended loops for receptor binding represents a strategy employed by HCoV-229E and the alphacoronaviruses to mediate the process. Such loops can tolerate insertions, deletions, and amino acid substitutions relatively free of the energetic penalties associated with the mutation of other protein structural elements. Indeed, our analysis of the six RBD classes shows that the receptor-binding loops possess a remarkable ability to both accommodate and accumulate mutational change while maintaining receptor binding. Among the six classes, 73% of the loop residues show change and only 45% of the receptor interface buried on receptor binding has been conserved. As we have shown, variation in the receptor-binding loops can abrogate neutralizing antibody binding and it will also increase the likelihood of acquiring new receptor interactions by chance. In this way, the selection of viral variants capable of immune evasion and/or cross-species transmission will be facilitated 27 , 28 , 52 , 53 , 54 . Cross-species transmission involves the acquisition of either a conserved (i.e., a similar interaction with a homologous receptor) or a non-conserved receptor interaction (i.e., an interaction with a non-homologous receptor, or an interaction at a new site on a homologous receptor) in the new host. HCoV-229E binds to a site on hAPN that differs from the site where PRCoV 40 binds to pAPN (Fig. 3a, b ), and HCoV-NL63 is known to bind the non-homologous receptor, ACE2 55 . Clearly, conserved receptor interactions have not accompanied the evolution of these alphacoronaviruses (Fig. 3d–g ). In mechanistic terms, receptor-binding loop variability and plasticity would facilitate the acquisition of both conserved and non-conserved receptor interactions. However, compared to conserved receptor interactions, the successful acquisition of non-conserved interactions would be expected to be relatively infrequent and more likely to require viral replication and mutation in the new host to optimize receptor-binding affinity. Many coronaviruses have originated in bats 3 , 4 and it is tempting to speculate that viral transmission between bats has facilitated the emergence of non-conserved receptor interactions. Bats account for ~20% of all mammalian species and they possess a unique ecology/biology that facilitates viral spread between them 56 , 57 . Moreover, the barriers to viral replication in a new host are lower among closely related species 58 , 59 . It follows that the viral replication required to optimize non-conserved receptor interactions in the new host would be facilitated by transmission between closely related bat species. By a similar reasoning, the use of conserved receptor interactions requiring little optimization would facilitate large species jumps. Several bat coronaviruses showing a high degree of sequence similarity with HCoV-229E have recently been identified 60 , 61 and an analysis of how they interact with bat APN will inform this discussion. Predicting the emergence of new viral threats is an important aspect of public health planning 62 and our work suggests that RNA viruses that use loops to bind their receptors should be viewed as a particular risk. RNA viruses are best described as populations 34 , and extended loops—inherently capable of accommodating and accumulating mutational change—will enable populations with loop diversity. 
Such populations will provide routes to escaping receptor loop-binding neutralizing antibodies, optimizing receptor-binding affinity, and acquiring new receptor interactions, interrelated processes that drive viral evolution and the emergence of new viral threats. Methods Protein expression and purification The soluble ectodomain of hAPN (residues 66–967) was expressed and purified from stably transfected HEK293S GnT1- cells (ATCC CRL-3022) as described previously 39 . The various soluble forms of the HCoV-229E S-protein were expressed and purified from stably transfected HEK293S GnT1- cells for X-ray crystallography, and from HEK293T (ATCC CRL-3216) and/or HEK293F (Invitrogen 51-0029) cells for cell-based and biochemical characterization, as described previously 63 . Point mutations were generated using the InFusion HD Site-Directed Mutagenesis protocol (Clontech). In all cases, the target proteins were secreted as N-terminal protein-A fusion proteins with a Tobacco Etch Virus (TEV) protease cleavage site following the protein-A tag. Harvested media was concentrated 10-fold and purified by IgG affinity chromatography (IgG Sepharose, GE). The bound proteins were liberated by on-column TEV protease cleavage and further purified by anion exchange chromatography (HiTrap Q HP, GE). Protein crystallization The RBD of the S-protein of HCoV-229E (residues 293–435) and the soluble ectodomain of hAPN (residues 66–967) were mixed in a ratio of 1.2:1 (RBD:hAPN) and the complex was purified by Superdex 200 (GE) gel filtration chromatography in 10 mM HEPES, 50 mM NaCl, pH 7.4. The complex was concentrated in gel filtration buffer to 10 mg/ml for crystallization trials. Crystals were obtained by the hanging drop method using a 1:1 mixture of stock protein and well solution containing 8% PEG 8000, 1 mM GSSG, 1 mM GSH, 5% glycerol, 1 µg/ml endo-β- N -acetylglucosaminidase A 64 and 100 mM MES, pH 6.5 at 298 K. Crystals were typically harvested after 3 days and flash-frozen with well solution supplemented with 22.5% glycerol as cryoprotectant. Data collection and structure determination Diffraction data were collected at the Canadian Light Source, Saskatoon, Saskatchewan (Beamline CMCF-08ID-1) at a wavelength of 0.9795 Å. Data were merged, processed, and scaled using HKL2000 65 ; 5% of the data set was used for the calculation of R free . Phases were obtained by molecular replacement with Phaser in Phenix 66 , using the human APN structure as a search model (PDB ID: 4FYQ). Manual building of the HCoV-229E RBD was performed using COOT 67 . Alternate rounds of manual rebuilding and automated refinement using Phenix were performed. Secondary structural restraints and torsion-angle non-crystallographic symmetry restraints between the three monomers in the asymmetric unit were employed. Ramachandran analysis showed that 96% of the residues are in the most favored region, with 4% in the additionally allowed region. Data collection and refinement statistics are found in Table 2 . A stereo image of a portion of the electron density map in the HCoV–229E–hAPN interface is shown in Supplementary Fig. 9 . Figures were generated using the program Chimera 68 . Buried surface calculations were performed using the PISA server. Surface plasmon resonance binding assays Surface plasmon resonance (Biacore) assays were performed on CM-5 dextran chips (GE) covalently coupled to the ligand via amine coupling.
The running and injection buffers were matched and consisted of 150 mM NaCl, 0.01% Tween-20, 0.1 mg/ml BSA, and 10 mM HEPES at pH 7.5. Response unit (RU) values were measured as a function of analyte concentration at 298 K. Kinetic analysis was performed using the global fitting feature of Scrubber 2 (BioLogic Software) assuming a 1:1 binding model. For experiments using hAPN as a ligand, between 300 and 400 RU were coupled to the CM-5 dextran chips. For experiments using 9.8E12, 1900 RU was immobilized. Viral inhibition assay HCoV-229E was originally obtained from the American Type Culture Collection (ATCC VR-740) and was produced in the human L132 cell line (ATCC CCL5) which was grown in minimum essential medium alpha (MEM-α) supplemented with 10% (v/v) FBS (PAA). The L132 (1 × 10 5 ) cells were seeded on coverslips and grown overnight in MEM-α supplemented with 10% (v/v) FBS. For inhibition assays in the presence of soluble hAPN, wild-type HCoV-229E (10 5.5 TCID 50 ) was pre-incubated with the fragment (residues 66–967) diluted in PBS for one hour at 37 °C before being added to cells for 2 h at 33 °C. For inhibition assays in the presence of the soluble S-protein fragments, the different fragments, diluted in PBS, were added to cells and kept at 4 °C on ice for 1 h. Medium was then removed and cells were inoculated with wild-type HCoV-229E (10 5 TCID 50 ) for 2 h at 33 °C. For both inhibition assays, after the 2-h incubation period, medium was replaced and cells were incubated at 33 °C with fresh MEM-α supplemented with 1% (v/v) FBS for 24 h before being analyzed by an immunofluorescence assay (IFA). Cells on the coverslips were directly fixed with 4% paraformaldehyde (PFA 4%) in PBS for 30 min at room temperature and then transferred to PBS. Cells were permeabilized in cold methanol (−20 °C) for 5 min and then washed with PBS for viral antigen detection. The S-protein-specific monoclonal antibody, 5-11H.6, raised against HCoV-229E (IgG1, produced in our laboratory by standard hybridoma technology), was used in conjunction with an AlexaFluor-488-labeled mouse-specific goat antibody (Life Technologies A-21202), for viral antigen detection 69 . After three washes with PBS, cells were incubated for 5 min with DAPI (Sigma-Aldrich) at 1 µg/ml to stain the nuclear DNA. To determine the percentage of L-132 cells positive for the viral S-protein, 15 fields containing a total of 150–200 cells were counted, at a magnification of ×200 using a Nikon Eclipse E800 microscope, for each condition tested in three independent experiments. Green fluorescent cells were counted as S-protein positive and expressed as a percentage of the total number of cells. Statistical significance was estimated by the analysis of variance (ANOVA) test and Tukey’s test post hoc. Monoclonal antibodies (IgG1, produced in our laboratory by standard hybridoma technology) raised against HCoV-229E (9.8E12) or HCoV-OC43 (2.8H5, negative control) that were found to be S-protein specific were tested in an infectivity neutralization assay. Wild-type HCoV-229E (10 5.5 TCID 50 ) was pre-incubated with the antibodies (1/100 of hybridoma supernatant) for 1 h at 37 °C before being added to L-132 cells for 2 h at 33 °C. Cells were washed with PBS and incubated at 33 °C with fresh MEM-α supplemented with 1% FBS (v/v) for 18 h before being analyzed by an immunofluorescence assay (IFA). Statistical significance was estimated by an ANOVA test, followed by post hoc Dunnett (two-sided) analysis. 
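The kinetic analyses above assume a 1:1 (Langmuir) binding model. As a rough illustration of what that model implies for the shape of a sensorgram, the sketch below evaluates its closed-form association and dissociation solutions; the parameter values are hypothetical and this is a conceptual sketch, not the Scrubber 2 global-fitting procedure itself.

```python
import numpy as np

# Idealized 1:1 Langmuir sensorgram: dR/dt = k_on*C*(Rmax - R) - k_off*R.
# Association (constant analyte concentration C) has the closed-form solution
# R(t) = Req*(1 - exp(-(k_on*C + k_off)*t)), with Req = Rmax*C/(C + Kd);
# dissociation (C = 0) decays as R(t) = R0*exp(-k_off*t).
# All parameter values are hypothetical, chosen only for illustration.

k_on, k_off, Rmax = 6.3e5, 0.041, 120.0    # M^-1 s^-1, s^-1, response units
Kd = k_off / k_on

def sensorgram(C, t_assoc=120.0, t_dissoc=180.0, n=300):
    t1 = np.linspace(0.0, t_assoc, n)
    Req = Rmax * C / (C + Kd)
    k_obs = k_on * C + k_off
    assoc = Req * (1.0 - np.exp(-k_obs * t1))
    t2 = np.linspace(0.0, t_dissoc, n)
    dissoc = assoc[-1] * np.exp(-k_off * t2)
    return np.concatenate([t1, t_assoc + t2]), np.concatenate([assoc, dissoc])

# Response at the end of the association phase (first half of the trace)
# for a dilution series of analyte concentrations:
for C in (10e-9, 50e-9, 250e-9):
    t, R = sensorgram(C)
    print(f"C = {C*1e9:4.0f} nM -> R at end of association = {R[len(R)//2 - 1]:6.1f} RU")
```

A global fit does the inverse: it adjusts k_on, k_off and Rmax so that this family of curves matches the measured traces at all analyte concentrations simultaneously.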
Comparative sequence analysis of HCoV-229E viral isolates The protein sequence of the HCoV-229E P100E isolate RBD (residues 293–435) was used to perform a search of the non-redundant protein sequence database using Blastp. Sequences were curated as of December 1, 2016. A total of 52 sequences were obtained with the GenBank Identifier numbers: NP_073551.1, AAK32188.1, AAK32189.1, AAK32190.1, AAK32191.1, CAA71056.1, CAA71146.1, CAA71147.1, ADK37701.1, ADK37702.1, ADK37704.1, BAL45637.1, BAL45638.1, BAL45639.1, BAL45640.1, BAL45641.1, AAQ89995.1, AAQ89999.1, AAQ90002.1, AAQ90004.1, AAQ90005.1, AAQ90006.1, AAQ90008.1, AFI49431.1, AFR45554.1, AFR79250.1, AFR79257.1, AGT21338.1, AGT21345.1, AGT21353.1, AGT21367.1, AGW80932.1, AIG96686.1, ABB90506.1, ABB90507.1, ABB90508.1, ABB90509.1, ABB90510.1, ABB90513.1, ABB90514.1, ABB90515.1, ABB90516.1, ABB90519.1, ABB90520.1, ABB90522.1, ABB90523.1, ABB90526.1, ABB90527.1, ABB90528.1, ABB90529.1, ABB90530.1, AOG74783.1. The 52 sequences were then aligned using Muscle 70 and the residue-specific sequence conservation index was mapped onto the surface of the RBD using the "render by conservation" tool in Chimera 68 . Percentage identity is mapped using a color scale with blue indicating 100% identity and red indicating 30% identity. The protein-coding regions of the eight sequences for which the entire genome was reported (GenBank Identifier numbers: NC_002645.1, JX503060.1, JX503061.1, KF514433.1, KF514430.1, KF514432.1, AF304460.1, and KU291448.1) were aligned using Muscle. The entire protein-coding region of the viral genome was treated as a continuous amino-acid string (8850 residues in total). Protein residues that were not identical among the eight sequences were counted as a difference and plotted in 100 residue bins. The sequence AAK32191.1 was chosen as the representative of Class I and the loop sequences of ABB90507.1, ABB90514.1, ABB90519.1, ABB90523.1, and AFR45554.1 were combined with the non-loop sequences of AAK32191.1 to generate the RBDs of Classes II–VI, respectively. Data availability Coordinates and structure factors for the HCoV-229E RBD in complex with human APN were deposited in the Protein Data Bank with PDB ID: 6ATK. The authors declare that all other data supporting the findings of this study are available within the article and its Supplementary Information files, or are available from the authors upon request.
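The two sequence analyses described above, per-column conservation across an alignment and amino-acid differences counted in fixed-size bins, are straightforward to reproduce in outline. The sketch below uses short toy sequences in place of the 52 Muscle-aligned GenBank entries; the conservation score shown (fraction of sequences carrying the most common residue at each column) is one common choice and only approximates Chimera's "render by conservation" metric.

```python
from collections import Counter

# Toy stand-ins for aligned RBD sequences (the real analysis aligned 52
# GenBank entries with Muscle; these short strings are purely illustrative).
alignment = [
    "GGGCFNSTW",
    "GVGCFNSTW",
    "GPGCYNSSW",
    "GPGCYNSAW",
]

# Per-column conservation index: fraction of sequences carrying the most
# common residue at each position.
n = len(alignment)
conservation = [
    Counter(col).most_common(1)[0][1] / n
    for col in zip(*alignment)
]
print([f"{c:.2f}" for c in conservation])

# Differences binned in fixed-size windows, as in the whole-genome analysis
# (there the protein-coding region was one 8,850-residue string and the bin
# size was 100; a bin size of 3 is used here so the toy output is visible).
def binned_differences(seqs, bin_size):
    length = len(seqs[0])
    bins = [0] * ((length + bin_size - 1) // bin_size)
    for i, col in enumerate(zip(*seqs)):
        if len(set(col)) > 1:            # any disagreement at this column
            bins[i // bin_size] += 1
    return bins

print(binned_differences(alignment, bin_size=3))
```

Plotting those bin counts along the genome is what reveals the paper's central observation: the tallest bins fall squarely on the receptor-binding loops of the RBD.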
Common cold season is back, which has people wondering why we catch the same virus, year after year. Why don't we ever develop immunity against the common cold? Professor Pierre Talbot at INRS has known about the incredible variability of coronaviruses for some time. They're responsible for the common cold as well as many other infections, including neurological diseases. Along with his research associate Marc Desforges, Professor Talbot worked on a study recently published in Nature Communications about the ways in which coronaviruses adapt and evolve, becoming ever more effective at infecting hosts without being defeated by the immune system. Small, spiky spheres, coronaviruses are closely monitored by public health agencies, since they can be transmitted between species and some have a potentially high mortality rate. Both SARS and MERS are caused by coronaviruses. Their ability to adapt to new environments seems due in part to the spikes on the surface of the virus—more specifically, to a small, strategic part of the proteins that form those spikes. The spikes are made up of S proteins (S for spike). A specific part of the spike seems to allow the virus to attach itself to host cells: the spike's RBD (receptor binding domain), which initiates the interaction between cell and virus, is essential for infection. But RBDs are targeted by antibodies that neutralize the virus and allow the immune system to flush it out of the host's system. Coronaviruses are thus faced with an evolutionary problem. They can't infect cells without an RBD, which needs to be exposed so that it can latch onto cells. But the RBD needs to be masked to avoid being targeted by antibodies. In response, the coronavirus has developed a mechanism that helps it survive and thrive. The RBD is made up of three parts that vary widely between strains. Thanks to this variation, antibodies are unable to detect new strains, whereas RBDs retain—and even improve—their affinity for the target cell. Plus, RBDs alternate between visible and masked states. To gain this insight, a group of researchers including Professor Talbot studied the alphacoronavirus HCoV-229E and, more specifically, the interaction between its RBD and aminopeptidase N (APN)—the host cell protein the RBD latches onto. The team crystallized the multiprotein complex and then analyzed the structures of both proteins. By observing the RBD's structure up close, the team was able to identify the three long loops that latch onto APN. As analyses of these viruses over the last fifty years have shown, these loops are virtually the only thing that varies from one strain to the next. The experiments demonstrate that the changes observed in the loops modulate an RBD's affinity for APN. The variants that have the greatest affinity are also likely to be better at infecting host cells, which helps them spread. Six different classes of HCoV-229E have popped up over the years, each with a greater RBD-APN affinity than the last. This discovery adds to our understanding of the evolution of coronaviruses and could lead to similar analyses of other coronaviruses. Although there are many elements left to explain, the RBD seems to be an important feature that must be monitored as we follow the adaptive evolution of these viruses and assess their ability to infect.
10.1038/s41467-017-01706-x
Earth
When it comes to variations in crop yield, climate has a big say
Nature Communications, www.nature.com/ncomms/2015/150 … full/ncomms6989.html Journal information: Nature Communications
http://www.nature.com/ncomms/2015/150122/ncomms6989/full/ncomms6989.html
https://phys.org/news/2015-01-variations-crop-yield-climate-big.html
Abstract Many studies have examined the role of mean climate change in agriculture, but an understanding of the influence of inter-annual climate variations on crop yields in different regions remains elusive. We use detailed crop statistics time series for ~13,500 political units to examine how recent climate variability led to variations in maize, rice, wheat and soybean crop yields worldwide. While some areas show no significant influence of climate variability, in substantial areas of the global breadbaskets, >60% of the yield variability can be explained by climate variability. Globally, climate variability accounts for roughly a third (~32–39%) of the observed yield variability. Our study uniquely illustrates spatial patterns in the relationship between climate variability and crop yield variability, highlighting where variations in temperature, precipitation or their interaction explain yield variability. We discuss key drivers for the observed variations to target further research and policy interventions geared towards buffering future crop production from climate variability. Introduction How mean historical and future climate change affects crop yields has received a great deal of attention 1 , 2 , 3 , 4 , 5 . However, how variations in climate impact crop yield, and how they vary over time, has received less attention 6 , 7 . This is important both for understanding how climate and crop yields are linked over time and for ensuring future food security. In particular, low yield variability leads to stable farmer incomes 8 , 9 , 10 and food supply 1 , 11 , and prevents price spikes that have disproportionate adverse impacts on the globally food-insecure who are mostly farmers 12 , 13 . In this study, we ask how much of the year-to-year variability in observed crop yields is associated with variations in climate across global croplands. Further, we investigate which climatic variables—those related to warmth and growing season length, or those related to rainfall and moisture availability—best explain variations in yield across the world. Previous analyses that have examined how crop yields and climate were related 3 , 14 , 15 , 16 , 17 have typically used national and regional data. For example, global studies are typically at the country scale 1 , and provide little insight on the spatial patterns of the within-country impacts. In contrast, analysis at the subnational 16 , sub-subnational 2 or local sites is available for specific countries only, and thus provides little insight on global patterns. Our study uses newly available temporal geospatial data on crop harvested area and yields of four major crops (maize, rice, wheat and soybean) across 13,500 different political units of the world 12 , 13 —a major (>50x) increase in the level of spatial detail from previous analyses that examined how crop yields and climate were related 3 , 14 , 15 , 16 , 17 . Similar to other studies 1 , 2 , we examine the recent historical period (1979–2008) but across these 13,500 different political units of the world 12 , 13 . The increased spatial resolution helps to identify where, and how strongly, climate variability is correlated with variations in crop yield in each of these political units. Given multiple breadbaskets across the globe and globally traded commodities, our study provides a consistent investigation of the differences both within and across regions.
To examine how observed variations in yields were related to climate variations, we used the Climate Research Unit’s (CRU) 18 gridded monthly data, and then re-mapped the data to the ~13,500 political units where yield was measured. We explored a range of statistical models relating observed de-trended variations in temperature and precipitation during a crop’s growing season and annual conditions to the observed de-trended variations in yields at each political unit. Next we selected the ‘best-fit’ model, and then conducted F -tests to determine the goodness-of-fit of the selected model against the null model that assumes random climate variability. We conducted this analysis at each of the tracked 13,500 political units to draw conclusions on how much of the crop yield variability was explained by climate variability. Different aspects of climate variability—temperature, precipitation, and the interaction of the two—may affect crop growth and resultant productivity disproportionately. We classified how yield variability was related to either normal or extreme fluctuations in temperature or precipitation variability—or their interactions. Here, linear and squared terms represent normal and extreme variation, respectively, for example 1 , 5 , 16 . The ‘best-fit’ model at each political unit was classified into one of seven broad categories and then mapped globally: models where the yield variability was explained by (i) normal temperature or (ii) normal precipitation variations, but not both; models where the yield variability was explained by (iii) normal and extreme temperature or (iv) normal and extreme precipitation variations, but not both; (v) where yield variability was explained by extreme temperature or (vi) extreme precipitation variations, but not both; and (vii) temperature and precipitation terms and their combinations due to interactions between temperature and precipitation. We further developed reduced models of temperature and precipitation and mapped them at each political unit. The resulting global maps, which identify where and to what degree normal and extreme climate variability explains yield variability, and quantifies them, can be used to target research into causal relations between yield and climate variability, and eventually policy interventions to stabilize farmer incomes and food supply. Averaged globally over areas with significant relationships, we find that 32–39% of the maize, rice, wheat and soybean year-to-year yield variability was explained by climate variability. This translates into climate explained annual production fluctuations of ~22 million tons, ~3 million tons, ~9 million tons and ~2 million tons for maize, rice, wheat and soybean, respectively. Our spatially detailed assessment of the relationship between climate variability and yield variability shows distinct spatial patterns in the relative effects of temperature, precipitation and their interaction within and across regions. Results Yield variability We first establish where and by how much crop yields varied within countries and then identify how much of the year-to-year variation in crop yields was explained by year-to-year variations in climate. In general the coefficient of variation, or yield variability normalized by mean yields ( Fig. 1 ), has been lower in the top crop production regions of the world on account of their higher yields ( Supplementary Fig. 
1 ) and conversely higher in areas of lower yields that are of less consequence to global crop production, although exceptions such as the Australian wheat belt exist. Over the last three decades, maize yields had a global average variability of ~0.9 tons/ha/year (s.d.), which corresponds to ~22% of the global average yields of ~4 tons/ha/year ( Supplementary Fig. 1 ). The highest coefficient of variation—which indicates greatest relative variability—in maize yields was in areas outside the core maize grain belts, including northeastern Brazil and in parts of Africa, India, northeast Mexico and the southeast United States ( Fig. 1 ). The global average rice yield variability (standard deviation) was ~0.5 tons/ha/year (or ~13% of average rice yields). The coefficient of variation in rice yields was similarly higher in more marginal rice-producing regions such as northeastern Brazil and central India. In contrast, some wheat regions with high coefficient of variation in yields such as in Australia and the Great Plains states of the United States (U.S.) are key global wheat breadbaskets ( Fig. 1 ). The global average wheat yield variability (s.d.) was 0.4 tons/ha/year (~17% of average yields over the study period). In the top soybean production areas of the world such as in the Midwestern U.S. and Latin American countries, the coefficient of variation was low. Figure 1: Coefficient of variation of crop yields over the entire study period. The ratio of the s.d. of yield over the 30-year period to the average yield over the same period. ( a ) maize, ( b ) rice, ( c ) wheat, ( d ) soybean (sample size of ~13,500 political units × 30 years per crop). White areas indicate where the crop is not harvested or analysed. Details on crop yields are given in reference 13 . Climate explained yield variability Not all crop growing regions showed statistically significant influence of year-to-year variations in climate on crop yield variability as determined from conducting F -tests (using a threshold of P =0.1; ~13,500 political units × 30 years sample size, Fig. 2 ). However, the vast majority of crop harvesting regions did experience the influence of climate variability on crop yields: ~70% of maize harvesting regions, ~53% of rice harvesting regions, ~79% of wheat harvesting regions and ~67% of soybean harvesting regions. The percentage of global total average production harvested over these regions and thus influenced by climate variability was ~78% of maize, ~52% of rice, ~75% of wheat and ~67% of soybean. In specific locations, within the top global crop production regions, climate variability accounted for >60% of the variability in a crop's yield, though there were also political units where climate impacts have been statistically insignificant ( Fig. 2 ). Where and how much of a crop's yield varied on account of climate has been highly location- and crop-specific, and we describe this in greater detail in the subsequent sections. Figure 2: Total crop yield variability explained due to climate variability over the last three decades. A value of 1.0 implies that the entire variability in observed yields was explained by climate variability (coefficient of determination metric; sample size of ~13,500 political units × 30 years per crop). Similarly, a value of 0.30–0.45 implies 30–45% of the variability in yields was explained by climate variability. We cut off the range at 0.75 (or 75%) and above to a single categorical colour.
No effect implies that at the P =0.10 level, there was no statistical difference between the best-fit model and the null model in the political unit. White areas indicate where the crop is not harvested or analysed. ( a ) maize, ( b ) rice, ( c ) wheat, ( d ) soybean. Averaged globally over areas with significant relationships, we find that 32–39% of the maize, rice, wheat and soybean year-to-year yield variability was explained by climate variability ( Supplementary Data ). Climate variability in general explains rice yield variability the least. Regional variations in maize Approximately 75% of the global maize production is concentrated in ~57% of the harvested areas comprising the American Midwestern region, central Mexico, southern Brazil, the maize belts of Argentina and China, parts of Western Europe and South Africa, and some areas of India and Indonesia. In these major maize grain belts, ~41% of the total year-to-year yield variability (0.8 tons/ha/year) was explained by inter-annual climate variability. Approximately 50% of global maize production is concentrated in a proportionally even smaller ~31% of the high-yielding maize belt comprising primarily two regions—the American Midwest and the Chinese Corn Belt—and in these two regions ~42% of the corresponding yield variability (0.9 tons/ha/year) was explained by climate variability. In some specific political units within these maize breadbaskets, more than 60% of the yield variability has historically been related to climate variability, including in numerous counties of the U.S. Midwestern states and in Shanxi, Hebei and Shandong provinces of China ( Fig. 2a ); political units with >75% of the yield variability explained by climate are also present, for example, in many counties of the Midwestern U.S. When averaged over all the statistically significant maize harvested areas globally, 39% of the yield variability was explained by climate variability, and in the top ten global maize-producing nations we find the following ( Supplementary Data ): in the United States, France and Italy, 41–49% of the observed maize yield variability can be explained by climate variability, whereas in South Africa it was ~50%, and in Argentina and China it was 32 and 44%, respectively ( Fig. 2a and Supplementary Data ). In the upper and eastern Midwest of the United States and Canada, extreme temperature variability was more important, whereas in the central and western parts of the Midwestern U.S. extreme precipitation variability explained maize yield variability in more counties ( Fig. 3a ); overall, temperature variability was more important for explaining maize yields in the upper and eastern Midwest of the United States and precipitation variability was more important in the central and western Midwest ( Supplementary Figs 2 and 3 ). Temperature variability influenced maize yield variability more in some colder countries such as Canada, but also in some warmer countries such as Spain and Italy, with within-country variations. Figure 3: Selected models explaining crop yield variability classified into seven categories of temperature and precipitation variations. White areas indicate where the crop is not harvested or analysed. ( a ) maize, ( b ) rice, ( c ) wheat, ( d ) soybean.
Regions where models with only normal temperature (T) terms are selected are shown in yellow colour; regions where models with normal and extreme temperature (T 2 ) terms are selected are shown in tan colour; regions where models with only extreme temperature terms are selected are shown in red colours. Similarly, regions where models with normal, normal and extreme, and only extreme precipitation (P) terms are selected are shown in the maps with different shades of blue. Regions where models with both temperature and precipitation terms and their interaction terms were selected are shown in purple colour. Full size image Regional variations in rice Approximately 75% of global rice production was from China, India and Indonesia. Averaged over all rice-harvesting areas with statistically significant climate influence (around 52% of global rice harvested areas), we estimate that yields have varied by ~0.1 tons/ha/year. Year-to-year climate variability explains ~32% of rice yield variability globally ( Fig. 2b ), with precipitation variability explaining more of the variability in South Asia and temperature variability more of the variability in Southeast and East Asia ( Fig. 3b and Supplementary Figs 2 and 3 ). In some key rice-producing nations, however, climate variability was more important: in 80% of the rice harvested areas in Japan climate variability was statistically significant. Averaged over these areas we find that ~79% of the rice yield variability (~0.2 tons/ha/year) was explained, whereas, in South Korea, ~47% of the yield variability was explained by climate variability ( Fig. 2b ). In both countries temperature variability was more important ( Supplementary Figs 2 and 3 ). In India, China, Indonesia, Thailand, Brazil, Cambodia, Peru and Spain, 25–38% of the yield variability was explained by climate variability ( Supplementary Data ). As with maize, there are specific regions where >60% of yield variability was explained by climate variability, such as in the central Indian states of Madhya Pradesh, Chhattisgarh and Karnataka ( Fig. 2b ). Regional variations in wheat Approximately 75% of global wheat production came from ~66% of the harvested lands in the United States, Canada, Argentina, Europe, North Africa, India, China and Australia. In these highly productive wheat belts, ~36% of the year-to-year yield variability was explained by climate variability ( Fig. 2c ). Approximately 34–45% of the wheat yield variability in the United States, Canada, United Kingdom, Turkey, Australia and Argentina was explained by climate variability. To give an indication of the magnitude of this effect, the climate-driven variability in United States wheat yields equates to, on average, more than half the entire annual production of wheat in Mexico. In the more productive regions of some countries the variability explained by climate was even higher. In the most productive Australian wheat belt (among the top 50% of global wheat producers) climate variability explained ~43% of the total yield variability, and in parts of Western Australia it was >60%. In Western Europe (the United Kingdom, France, Germany, Spain and Italy), climate variability explained ~31–51% of the wheat yield variability. In Eastern Europe and the former Soviet Republics such as the Russian Federation, Ukraine, Kazakhstan and Hungary, 23–66% of the wheat yield variability was explained by climate variability, and both normal and extreme temperature variability were important ( Fig. 3c ).
Although temperature variability in Western Europe was in general more important, precipitation variability also explained part of the wheat yield variability, with the exception of Spain, where precipitation variability was the dominant factor ( Fig. 3c and Supplementary Figs 2 and 3 ). In India and China, the top two global wheat producers, we detected statistically significant relationships in 71 and 62% of their wheat harvested lands, respectively, with on average 32 and 31% of yield variability explained by climate variability. In China precipitation variability explained most of the variability; in India temperature and precipitation variability were equally important ( Fig. 3c ). Averaged globally, climate variability explained ~35% of the wheat yield variability. Regional variations in soybean Approximately 50% of the world’s total soybean production was harvested from ~42% of the land, concentrated in only three countries: the United States, Brazil and Argentina. Adding soybean lands in India and China brings the total to 75% of the top global soybean production areas. In general, the variability in soybean yield related to climate variability was higher in Argentina (~43% of yield variability of 0.5 tons/ha/year averaged over all areas with statistically significant climate influence, but ~47% when averaged over the most productive areas), followed by ~36% of the yield variability explained over all the statistically significant U.S. soybean areas. Approximately 26–34% of the yield variability in Brazilian, Indian and Chinese soybean yields was explained by climate variability, with pockets in all three countries where substantially more of the variability was explained ( Fig. 2d ). Discussion We show how much of the year-to-year variability in crop yields was associated with climate variability within and across regions. As the demand for crops increases globally 19 and productivity gains fail to keep pace with projected demands 12 , ensuring the stability of national food supplies and farmer livelihoods in the face of variable production will be even more important. Low global food stocks in conjunction with fluctuation in agricultural production can, in particular, contribute to food price spikes 20 , 21 . Regions with high crop yield variability would disproportionately contribute to this effect, especially if they are also the major breadbaskets of the world 20 , 21 . Even in regions with comparatively lower yields, fluctuations in crop production may impact local food security. Our study is unique in giving a global, spatially detailed account of where and by how much crop yields have varied and how much of this was driven by climate variability. We found that there were numerous regions where climate variability explained more than 60% of the yield variability in maize, rice, wheat and soybean ( Fig. 2 ). Many of these regions were in the most productive global areas, such as the Midwestern U.S. and the Chinese Corn Belt for maize, and Western Europe and Australia for wheat. Our study identifies unique spatial patterns in the effects of temperature and/or precipitation variability on yields—for example, rice and wheat in India ( Fig. 3b,c ) as well as maize and soybean in the United States ( Fig. 3a,d ). Our simple classification of the prevailing relationships between climate and crop yields enables digging deeper into trends for particular regions.
While at a relatively high resolution compared with past research, our results are constrained by the resolution of the data: yields reported at the political-unit level and monthly climate data. Within political units, at specific field/subnational locations, the climate variability impact could be higher or lower. The 32–39% of the yield variability explained by climate variability translates into large fluctuations in global crop production. For example, ~39% of the maize yield variability of 0.6 tons/ha/year explained by climate variability over 94 million ha translates into an annual fluctuation of ~22 million tons in global maize production over the study period (0.39 × 0.6 tons/ha/year × 94 million ha ≈ 22 million tons per year). The corresponding climate-variability-driven average annual production variability for rice, wheat and soybean is ~3, 9 and 2 million tons, respectively. These average fluctuations are similar to the total maize production of many Latin American and African countries, the total rice production of some Asian countries, or the total wheat production of some Eastern European countries. In some cases the impact of climate variability is higher in poorer regions, such as northeastern Brazil for maize and central India for rice. However, even in the most productive global areas, such as wheat in Western Europe and maize in the United States Midwest, the influence of climate variability on yield variability is very high, exceeding 75% in specific political units. The following section discusses our regional and continental findings in the context of previous smaller-scale research, which we use to help validate/corroborate our results and explore possible drivers. In the North China Plains (provinces of Hebei, Henan, Shandong, Beijing, Tianjin and Shanxi), though crops are irrigated 22 , water availability is a major problem 23 . Maize is a summer crop in this region and monsoonal rainfall supplements river and groundwater irrigation. High growing season temperature is common. Hence, both temperature and precipitation variability control maize yield variability in the North China Plains. To the west of the North China Plains, in the more arid Loess Plateau region, adaptation strategies to the arid climate and the coincidence of rainfall during the later stages of crop growth 24 lead to normal and extreme temperature variability being a better explanation of maize yield variability in some areas of Gansu and Ningxia and all of Shaanxi. Although it may appear counter-intuitive that temperature variability would dominate for rainfed maize, it is consistent with findings for rainfed maize areas in the United States 25 , where extreme temperature was found to be a better predictor of maize grain yield due to its control on soil water demand and transpiration rates. In contrast, wheat is a winter crop and is highly dependent on irrigation in the North China Plains. Our analysis shows more dependence on precipitation variability for wheat yields, which may be due to the direct controlling influence on surface irrigation water availability. In northeastern China (provinces of Heilongjiang, Jilin, Liaoning) maize and soybean are not widely irrigated, so precipitation variability was important, but rice is irrigated, so temperature variability became more important. In Japan almost all the paddy rice crop is irrigated 22 , 26 and hence temperature variability was more important than precipitation variability. Rice harvested in South Korea is similarly mostly irrigated, and thus temperature variability was more important for explaining rice yield variability.
In Indonesia the rice yield variability explained by climate variability is often low (only in the 0 to 15% range) and is mostly attributable to temperature variability 27 , except in some parts such as Central Java where precipitation variability is also important 28 . This is because rice is widely irrigated in Indonesia 22 , 29 . In South Asia, especially northwest India, temperature variability influences wheat yield variability widely, similar to other findings 30 , but further south in central and south India precipitation variability in general is more important, as between half and three-fourths of wheat there is rainfed winter wheat compared with only a few percent in the northwest. Rice yield variability is more influenced by precipitation variability in India, reflecting the rainfed paddy-growing conditions. In the more irrigated parts 22 , as in northwest India, both precipitation and temperature variability, or temperature variability alone, were important. Similarly, in the extreme southwestern parts of India precipitation and/or temperature variability was important, as this region receives very high rainfall. Temperature variability was the important factor for rice yield variability in Bangladesh, because high water availability and intense irrigation dampen the influence of precipitation variability. In some of the highly irrigated rice areas in India, such as areas of West Bengal state and the Mahanadi system in northern Orissa, climate variability was not even statistically significant ( Figs 2b and 3b ). In Australia wheat yield variability is largely explained by precipitation variability, as the wheat is grown under rainfed conditions 22 , in agreement with previous findings 31 , 32 ; controlling for precipitation variability, however, temperature variability was also an important factor in explaining wheat yield variability 33 , especially in parts of Western Australia, South Australia and Queensland. We found that maize yield variability is explained best by normal and extreme precipitation variability related to ENSO in many countries of Africa, similar to previous findings, for example in Zimbabwe 34 , which in turn is related to sea surface temperature 35 . In South Africa maize is grown primarily in the Highveld region, with drier conditions in the west and wetter conditions in the east 36 . Our analysis reflects these conditions, with precipitation variability being more important in the drier west and temperature variability more important moving towards the wetter eastern provinces of South Africa’s Highveld. Moreover, high maize yield variability in South Africa has been a concern 36 ; indeed, we found that climate variability explained >60% of maize yield variability in the Highveld region, especially in its drier western parts. Elsewhere, as in Kenya, we found that maize yield variability was explained only by a complex relationship between both precipitation and temperature variability, consistent with previous studies 37 . In Cameroon in West Africa and in northeastern Nigeria precipitation variability alone does not explain maize yield variability, agreeing with previous findings 38 , 39 , because, while rain is beneficial for stable maize production, it also triggers nitrogen leaching from nutrient-poor soils, leading to a negative feedback.
In many of the other West African countries rainfall variability explains maize yield variability, but analyses show that this was not the case everywhere; neither does climate variability explain maize yield variability in all countries here, as farmers adopt various management strategies to overcome the high rainfall variability 40 . However, other than Nigeria, our analysis in West Africa was only at the country level, and within-country explanatory skill was lost on account of the scale of the available yield statistics. Overall, precipitation variability is more important in sub-Saharan Africa, pointing to the predominantly rainfed system of maize cultivation 41 . In most of Eastern Europe and many regions of Western European countries, the effect of temperature variability in explaining wheat yield variability was more important, as also found in previous regional and global studies (refs 42 , 43 , Fig. 3c ). This is because of the continental climate of Eastern Europe, which causes a greater amplitude of temperature variability 44 . Our study shows that normal, both normal and extreme, and extreme temperature variability was important in explaining wheat yield variability. In Southern Europe and the Mediterranean regions, in addition to heat stress, the water-limiting conditions that are common 44 , 45 , 46 resulted in precipitation variability also being important for wheat yield variability. The influence of climate variability on wheat yield variability was not statistically significant everywhere. Neither was the explained variability in statistically significant areas high everywhere. This was because farmers are already adapted, or adapting, to climate change 47 , which has also made them more adapted to variability. In the United Kingdom either precipitation variability or both temperature and precipitation variability explained ~45% of wheat yield variability; the precipitation variability is in turn related to the North Atlantic Oscillation 48 . Maize is partly irrigated in France, but irrigation does not fully mitigate dry conditions 49 ; hence precipitation variability is important. Moreover, irrigated maize areas have increased only recently, so historically precipitation variability could not be compensated for as effectively as it can be now. The net result is that in many maize areas of France historically both temperature and precipitation variability are important 50 . In the United States, climate variability was important for maize yields especially in the Midwestern U.S. While in the upper Midwest temperature variability was more important, in the central Midwest precipitation variability was more important. In Nebraska, a U.S. Great Plains state with a prevalence of irrigated maize in its western part, temperature variability was more important in the irrigated west than in the eastern parts, where precipitation variability was more important. Many of the counties of the Great Plains states with dryland maize meet their crop water demands partly from irrigation 51 , and we identify a large number of counties where both precipitation and temperature variability were important. In other rainfed maize-producing countries normal and extreme temperature conditions explained maize yield variability due to increased soil water demand that raised transpiration rates and vapor pressure deficits 25 , 52 .
Overall, temperature-only models explained maize and soybean yield variability in more harvested regions (~37 and 38%, respectively) than precipitation-only models did (~31 and 36%, respectively); climate variability explained part of the yield variability in ~91% of the U.S. maize harvested areas and 82% of soybean harvested areas. Less adaptation of farmers to increasingly warmer temperatures may explain why temperature variability was important in larger areas 53 . Only ~46% of the maize harvested regions of Mexico have crop yield variability influenced by climate variability (~27% of the yield variability was explained). Precipitation variability was more important overall, but pockets where temperature variability was more important exist, such as in Sinaloa, where irrigated maize is important, and in Guerrero. Temperature variability explained maize yield variability also in most Central American countries. Further south in Brazil, precipitation variability was more important overall; in specific regions, such as Mato Grosso state with its wetter climate, temperature variability was more important, although in ~23% of Brazil’s maize harvested areas both temperature and precipitation variability were important in explaining part of the maize crop yield variability. In Argentina both temperature and precipitation variability were equally important overall, though in specific locations temperature variability was more important, presumably due to irrigated maize. Although this is the most spatially detailed global assessment of the links between historical climate variability and yield done to date, our study has some limitations. For example, our estimation of crop yield variability due to climate variability may underestimate the importance of climate variability impacts at specific locations within political units. Future studies should investigate this problem at an even finer resolution globally, but this is challenging given historical yield data availability. In some countries both crop yield and weather data may have quality issues 13 , 18 . Our study is based on yield data at the county/district/municipal/department or larger political unit level, so we used crop harvested area weighted gridded weather data for the political units. However, weather data from individual stations could give a distinct climate-yield response signal due to their very localized scale. To test this latter issue, we carried out a separate analysis using daily station data from ~100 U.S. counties 54 that contributed ~25% of total U.S. maize production. We found a statistically significant correlation ( r =0.54; P =0.001) between the analyses conducted with the two different data sets (a minimal sketch of this comparison is given below). A stronger relationship with the station-data analysis is likely precluded by the sheer size of some political units and by the lack of the complete spatial coverage that gridded data provide ( Supplementary Fig. 4 ). As our yield was measured at the political unit, the use of the harvested area weighted gridded weather data 18 for each political unit is appropriate, similar to previous upscaling usage 1 , and is the likely reason that we typically found a stronger statistical relationship with yields over time ( Supplementary Fig. 5 ). In contrast, the use of station data would be appropriate if crop yields were measured at the same sites or locally, and the direct use of gridded data would then be less appropriate without downscaling.
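To make the station-versus-gridded comparison concrete, the following minimal sketch (with entirely synthetic placeholder values, not the study's data) shows how a county-level Pearson correlation such as the reported r = 0.54 would be computed:

```python
# Minimal sketch of the station-vs-gridded comparison: correlate, across
# counties, the yield variability explained by climate under each weather
# data source. Values below are synthetic placeholders, not the study's data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
r2_gridded = rng.uniform(0.1, 0.8, size=100)                    # ~100 U.S. counties
r2_station = 0.6 * r2_gridded + rng.normal(0.0, 0.12, size=100)  # noisy analogue

r, p = stats.pearsonr(r2_gridded, r2_station)
print(f"Pearson r = {r:.2f}, P = {p:.3g}")   # the paper reports r = 0.54, P = 0.001
```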
While climate variability is a significant factor and responsible for 32–39% of global crop yield variability, it is certainly not the only controlling factor 55 . Our study considered only broad precipitation and temperature effects, though in unprecedented spatial detail; however, there are myriad other factors that could influence climate-yield relationships, as informed by more local-scale research. Our study does not consider factors such as changing cloud cover (and solar radiation), wind speed or surface ozone exposure 56 , nor does it decompose the basic climate variables of temperature and precipitation further into the timing of heat stress 57 , the timing of dry and wet spells 58 , or soil moisture 59 . We have also not considered the amplification or dampening of climate variability impacts via other agronomic challenges such as pest and pathogen infestation 60 and irrigation 61 . Climate change may also have influenced how frequently crops are harvested 62 , 63 , for example, now allowing double cropping in hitherto colder single-cropped regions, but we were unable to include such precision as the only globally available crop calendar 64 was static, even though we updated it using the most recent information available (see Supplementary Fig. 6 ). Other factors to consider in future studies are altitudinal effects 65 and the quality of crop yields 48 . What we have investigated is the influence of the variability of temperature and precipitation on crop yield variability. The unexplained yield variability includes the numerous agronomic challenges and decisions that farmers make each year, such as the availability and use of agronomic inputs 57 , pest and pathogen infestations 60 , 66 , soil management 66 , 67 , irrigation 61 , distribution of varied crop maturity types 68 , socio-economic conditions 55 , 63 , 69 and political or social strife 13 . Our study is therefore an initial assessment to identify locations worldwide where historically climate variability has been relatively important in explaining crop yield variability. From the perspective of stabilizing farmer incomes and national food supply and security, this new high-resolution information at the global scale should help direct further research and policy more effectively to those regions where climate variability poses the greatest risk and provide leverage points 70 in the most critical regions. If climate variability is predicted to increase in the same regions where it historically explained most of the crop yield variability, strategies to stabilize crop production should be prioritized there to ensure stable future crop production and prevent future food price spikes. The high-resolution models that we have built may be used to evaluate future climate-related yield variability research, provide cross-comparison against the results of crop simulation models and address alternate factors contributing to the spatial heterogeneity in climate-yield response. Methods Modelling set-up Further details regarding the data used are given in Supplementary Methods 1 . To determine how much of the variability in crop yields was explained by climate variability, we first detrended the crop yield and climate variables—temperature and precipitation—following ref. 1 (see the example in Supplementary Fig. 7 ) over the period 1979–2008.
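As an illustration of this detrending step, the sketch below removes a least-squares linear trend from a 30-year series to obtain anomalies. The linear form and the synthetic data are assumptions for illustration; the exact detrend specification follows ref. 1 and may differ.

```python
# Minimal detrending sketch: remove a fitted linear time trend from a
# 30-year series to obtain anomalies. The linear form is an assumption
# for illustration; the paper follows ref. 1 for the exact specification.
import numpy as np

def detrend(series, years):
    """Return anomalies after removing a least-squares linear trend."""
    slope, intercept = np.polyfit(years, series, deg=1)
    return series - (slope * years + intercept)

years = np.arange(1979, 2009)                       # the 30-year study period
rng = np.random.default_rng(1)
yields = 4.0 + 0.05 * (years - 1979) + rng.normal(0.0, 0.9, years.size)
yield_anom = detrend(yields, years)                 # detrended yield, tons/ha/year
temp_anom = detrend(rng.normal(22.0, 1.0, years.size), years)  # likewise for climate
```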
Note that we use two forms of temperature and precipitation: the seasonal (growing season) average value, and the annual value (the average conditions over the 12 months before harvest) to account for antecedent conditions. This resulted in four different combinations of detrended climate variables, and as we used both the linear and squared forms of seasonal and annual temperature and precipitation, there was a total of eight forms of climate variables. We used these detrended variables in different combinations to linearly regress against the detrended crop yields at each of the 13,500 political units. To avoid over-fitting we limited our analysis to a total of 27 combinations of climate variables, resulting in 27 regression equations, to capture the relationships between climate variability and crop yield variability at each political unit, of the basic form Y c = f ( T c , P c ) (equation 1), where Y c is the observed set of detrended crop yields for crop ‘c’ in units of tons/ha/year at each political unit. In equation 1, T c can represent for crop ‘c’ at a given political unit the temperature associated with the main growing season 64 or the temperature for 1 year before the crop’s harvest to capture antecedent conditions. P c similarly is the precipitation for the main growing season for the crop ‘c’ for the political unit or for 1 year before the crop’s harvest. The function f is limited to linear and quadratic forms of these two detrended meteorological parameters, as is common practice in studies correlating climate and agricultural production 1 , 5 , 16 . The terms included in each of the 27 regression equations and their classification are provided in the Supplementary Table and further details are given in Supplementary Methods 2 . Statistical tests The generated regression equations at each political unit, that is, ~13,500 sets of 27 equations per crop, were statistically tested next. We first identified which functional form of Y c = f ( T c , P c ) from the set of 27 equations at each political unit fit the data best using the Akaike Information Criterion (AIC), which penalizes equations with more terms. However, because the model that best fits the data may be no better than a random climate (null model), we conducted F -tests at the P =0.10 level to determine whether the chosen model was significantly better than the null model. In 21–47% of the global crop-harvested areas, we found that the chosen model was no better than the null model at the P =0.10 significance level. Thus, in the remaining 53–79% of global crop harvested areas, yield variability was significantly influenced by climate variability over the study period, and our reported numbers are averages over these areas. Using the statistically significant model with the best functional representation, we next determined the coefficient of determination ( r 2 ), or explanatory power, of the complete model, and of the reduced models containing only temperature and only precipitation terms. The residual is the unexplained yield variation. The 30-year study period average harvested area and yield information at each subnational location was used together with the observed coefficient of determination for computing national and global harvested-area-weighted averages. Global and country-specific numbers are averaged only over those 53–79% of global crop harvested areas where the statistical models were significant.
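The per-unit selection procedure can be sketched as follows. This is a schematic re-implementation under stated assumptions (ordinary least-squares fits, AIC computed as n·ln(RSS/n) + 2k, and an F-test of the chosen model against an intercept-only null at P = 0.10); the paper's actual 27 term combinations are abbreviated here to a few illustrative ones.

```python
# Schematic per-political-unit model selection: fit candidate term
# combinations by OLS, keep the minimum-AIC model, then F-test it
# against the intercept-only null model at P = 0.10.
import itertools
import numpy as np
from scipy import stats

def fit_ols(X, y):
    """OLS with intercept; returns (beta, RSS, number of parameters)."""
    X1 = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    rss = float(np.sum((y - X1 @ beta) ** 2))
    return beta, rss, X1.shape[1]

def select_model(terms, y, alpha=0.10):
    """Pick the minimum-AIC term combination, then F-test against the null."""
    n = len(y)
    best = None
    for r in range(1, len(terms) + 1):
        for combo in itertools.combinations(terms, r):
            X = np.column_stack([terms[t] for t in combo])
            _, rss, k = fit_ols(X, y)
            aic = n * np.log(rss / n) + 2 * k     # AIC up to an additive constant
            if best is None or aic < best[0]:
                best = (aic, combo, rss, k)
    _, combo, rss, k = best
    rss_null = float(np.sum((y - y.mean()) ** 2))  # intercept-only null model
    F = ((rss_null - rss) / (k - 1)) / (rss / (n - k))
    p = float(stats.f.sf(F, k - 1, n - k))
    r2 = 1.0 - rss / rss_null
    return combo, r2, p < alpha                    # chosen terms, r^2, significant?

# Example for one political unit with synthetic detrended anomalies:
rng = np.random.default_rng(2)
T, P = rng.normal(size=30), rng.normal(size=30)
y = 0.5 * T - 0.3 * P + rng.normal(0.0, 0.5, 30)
terms = {"T": T, "T2": T**2, "P": P, "P2": P**2}
print(select_model(terms, y))
```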
Model bias and sensitivity As a simple assessment of model bias, we performed a bootstrapping exercise to assess the influence of including specific combinations of years (80% of the years selected at each iteration) in our data on the overall yield predictions (using a test set of 20% of the years) for each political unit, which we standardized as the ratio of the average bias from the 99 repetitions to the average of the crop yields for the study period in each political unit ( Supplementary Fig. 8 ; a minimal sketch of this resampling loop is given below). This is analogous to a leave-group-out cross-validation approach used to examine uncertainty in model selection. Locations of models with more restrictive P cutoff values ( F -tests) at P =0.01 and P =0.05 are shown in Supplementary Fig. 9 . Even though we used a less restrictive P value of 0.1, the models selected were generally significant at P =0.05 or less. Additional information How to cite this article : Ray, D. K. et al . Climate variation explains a third of global crop yield variability. Nat. Commun. 6:5989 doi: 10.1038/ncomms6989 (2015).
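The resampling loop referenced above can be sketched as follows, under stated assumptions: an OLS climate model, 99 repetitions, a random 80/20 split of years, and the bias ratio computed against the period-mean yield.

```python
# Minimal sketch of the leave-group-out bootstrap: refit on a random 80% of
# years, predict the held-out 20%, and standardize the mean bias by the
# period-average yield of the political unit. Schematic, not the study code.
import numpy as np

def bootstrap_bias_ratio(X, y, n_reps=99, train_frac=0.8, seed=0):
    rng = np.random.default_rng(seed)
    n = len(y)
    n_train = int(train_frac * n)
    biases = []
    for _ in range(n_reps):
        idx = rng.permutation(n)
        tr, te = idx[:n_train], idx[n_train:]
        X_tr = np.column_stack([np.ones(tr.size), X[tr]])
        beta, *_ = np.linalg.lstsq(X_tr, y[tr], rcond=None)
        pred = np.column_stack([np.ones(te.size), X[te]]) @ beta
        biases.append(np.mean(pred - y[te]))
    return np.mean(biases) / np.mean(y)    # standardized bias ratio

# Usage: X holds a unit's climate terms (e.g. T, T^2, P, P^2) as columns,
# and y its 30 annual yields.
```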
What impact will future climate change have on food supply? That depends in part on the extent to which variations in crop yield are attributable to variations in climate. A new report from researchers at the University of Minnesota Institute on the Environment has found that climate variability historically accounts for one-third of yield variability for maize, rice, wheat and soybeans worldwide—the equivalent of 36 million metric tons of food each year. This provides valuable information planners and policy makers can use to target efforts to stabilize farmer income and food supply and so boost food security in a warming world. The work was published today in the journal Nature Communications by Deepak Ray, James Gerber, Graham MacDonald and Paul West of IonE's Global Landscapes Initiative. The researchers looked at newly available production statistics for maize, rice, wheat and soybean from 13,500 political units around the world between 1979 and 2008, along with precipitation and temperature data. The team used these data to calculate year-to-year fluctuations and estimate how much of the yield variability could be attributed to climate variability. About 32 to 39 percent of year-to-year variability for the four crops could be explained by climate variability. This is substantial—the equivalent of 22 million metric tons of maize, 3 million metric tons of rice, 9 million metric tons of wheat, and 2 million metric tons of soybeans per year. The links between climate and yield variability differed among regions. Climate variability explained much of yield variability in some of the most productive regions, but far less in low-yielding regions. "This means that really productive areas contribute to food security by having a bumper crop when the weather is favorable but can be hit really hard when the weather is bad and contribute disproportionately to global food insecurity," says Ray. "At the other end of the spectrum, low-yielding regions seem to be more resilient to bad-weather years but don't see big gains when the weather is ideal." Some regions, such as in parts of Asia and Africa, showed little correlation between climate variability and yield variability. More than 60 percent of the yield variability can be explained by climate variability in regions that are important producers of major crops, including the Midwestern U.S., the North China Plains, western Europe and Japan. Depicted as global maps, the results show where and how much climate variability explains yield variability. The research team is now looking at historical records to see whether the variability attributable to climate has changed over time—and if so, what aspects of climate are most pertinent. "Yield variability can be a big problem from both economic and food supply standpoints," Ray said. "The results of this study and our follow-up work can be used to improve food system stability around the world by identifying hot spots of food insecurity today as well as those likely to be exacerbated by climate change in the future."
www.nature.com/ncomms/2015/150 … full/ncomms6989.html
Chemistry
3D printing of single atom catalysts pioneered
Fangxi Xie et al., A general approach to 3D-printed single-atom catalysts, Nature Synthesis (2023). DOI: 10.1038/s44160-022-00193-3 Journal information: Nature Synthesis
https://dx.doi.org/10.1038/s44160-022-00193-3
https://phys.org/news/2023-01-3d-atom-catalysts.html
Abstract A mass production route to single-atom catalysts (SACs) is crucial for their end use application. To date, the direct fabrication of SACs via a simple and economic manufacturing route remains a challenge, with current approaches relying on convoluted processes using expensive components. Here, a straightforward and cost-effective three-dimensional (3D) printing approach is developed to fabricate a library of SACs. Despite changing synthetic parameters, including the centre transition metal atom, metal loading, coordination environment and spatial geometry, the products show a similar atomic dispersion of single metal sites, demonstrating the generality of the approach. The 3D-printed SACs exhibited excellent activity and stability in the nitrate reduction reaction. It is expected that this 3D-printing technique can be used as a method for large-scale commercial production of SACs, thus enabling the use of these materials in a broad spectrum of industrial applications. Main Single-atom catalysts (SACs) are materials with isolated metal atoms as active sites, anchored by surrounding coordination species of solid supports 1 , 2 . Advantages include high atom economy and tunable coordination properties, giving potential for numerous applications 3 , 4 , 5 , 6 , 7 , 8 , 9 . Developing universal synthesis approaches to achieve scale production of SACs is a prerequisite for successful implementation of these catalysts in practical applications 1 . A simple and economic general synthesis approach for scale production of SACs is vital for downstream commercialization 1 , 10 , 11 . Currently, the chemical synthesis strategies for SACs can be divided into two major categories: ‘top-down’ and ‘bottom-up’ strategies 1 , 12 , 13 , 14 , 15 . A typical ‘top-down’ strategy includes the initial creation of defects on substrates and the subsequent anchoring of metal atoms to surface vacancies 13 , 15 , 16 , 17 , 18 , 19 , 20 , 21 , 22 , 23 , 24 . The ‘bottom-up’ strategy begins with the preparation of host materials, such as microporous crystalline frameworks or synthetic polymers. Afterwards, SACs can be achieved through the confinement of molecular complexes in hosts and a subsequent post-integration process to remove the ligands of the metal complexes 14 , 17 , 25 , 26 , 27 , 28 . However, both synthetic strategies require complex wet-chemistry processes, for example the complex defect-construction process or the sophisticated host-material preparation process, hindering scale production of SACs 16 , 29 , 30 , 31 , 32 , 33 , 34 . In addition, the presence of elaborate substrates and costly precursors (for example, complicated ligands or expensive artificial polymers) notably increases the overall cost of manufacturing SACs 1 , 35 . Apart from chemical synthesis strategies, the pathways of mechanochemical abrasion, thermal shockwave and laser irradiation can serve as versatile approaches to synthesize SACs 36 , 37 , 38 , 39 . However, customized settings or specific equipment can be required to apply these approaches 36 . Therefore, a straightforward and cost-effective universal synthesis approach for scale production of SACs is desired but remains challenging 1 , 10 , 11 . Recently, three-dimensional (3D) printing techniques have been developed as unique manufacturing routes for mass production of targeted products. The use of 3D-printing has been considered a simple approach as it can directly fabricate target materials and avoid complex wet-chemistry processes 40 , 41 .
In contrast to conventional subtractive manufacturing processes, 3D-printing techniques work more economically by effectively eliminating the generation of waste materials during the manufacturing process 40 , 41 , 42 . The introduction of cheaper 3D-printing machines and an increasing number of commercially available printable materials offer easily accessible opportunities to substantially reduce the overall cost of final products 40 , 41 , 42 , 43 , 44 . Additionally, 3D-printing can automatically and efficiently construct materials with customized geometric designs from millimetres to beyond metre scale, paving a pathway for industrial-scale production 44 , 45 , 46 , 47 . However, despite the popularity of 3D-printing techniques, mainly in the biomedical field, their application in SAC production remains elusive. Herein, we report a universal 3D-printing synthesis approach to directly construct a library of SACs. By mixing the printing ink with transition metal precursors, a straightforward 3D-printing approach was developed to synthesize various SACs. Minimal alteration of the atomic dispersion was seen with synthetic variations in centre atoms, loadings of the centre atoms, coordination environments and spatial geometries, demonstrating the universality of this approach. The employment of natural polymers, including gelatin and gelatin methacryloyl (GelMA), as printing ink offers an accessible and affordable route 32 , 33 . Furthermore, the automatic and direct fabrication of centimetre-size SAC precursors avoids complicated wet-chemistry processes. These two merits of reduced costs and added convenience indicate its great potential for mass production of SACs. In addition, as a proof-of-concept, the performance of 3D-printed SACs was evaluated using the nitrate reduction reaction, showcasing their potential application as electrocatalysts. Results and discussion Synthesis and structural characterization of Fe3DSAC As shown in Fig. 1a , several steps are involved in the 3D-printing approach. Initially, the hydrogel containing gelatin and GelMA was mixed with the corresponding transition metal precursors to formulate the printing ink. The 3D structure was directly and automatically constructed by the 3D-printing technique (a typical printing process is shown in Supplementary Video 1 ). Afterwards, the as-printed sample was freeze-dried to remove residual water. Then, the 3D-printed SACs were obtained through pyrolysis of the as-dried samples to anchor metal single atoms onto the gelatin/GelMA-derived carbon. To further elucidate the fabrication process, we take the synthesis of the Fe 3D-printed SAC with a precursor hole size of 1.0 mm (denoted as Fe3DSAC or Fe3DSAC 1.0 mm) as an example. In this process, Fe(acac) 3 was employed as the Fe single-atom precursor. As shown in Fig. 1b , the 3D centimetre-size precursor was constructed via the 3D-printing technique. Typical scanning electron microscopy (SEM) and corresponding energy dispersive X-ray (EDX) images of the as-pyrolysed product with a hole size of 2.0 mm, shown in Fig. 1c and Supplementary Fig. 1 , prove that the pristine structure from the 3D-printing technique was maintained after the pyrolysis process. The related thermogravimetric analysis result (Supplementary Fig. 2 ) shows that most of the mass loss happened between 300 and 400 °C. Additionally, EDX images indicate the homogeneous distribution of carbon, nitrogen and iron in the as-prepared Fe3DSAC (ref. 26 ).
As a comparison, an analogous pure carbon sample was prepared with the same procedure without the presence of Fe(acac) 3 and named 3DCarbon. The X-ray diffraction (XRD) results of Fe3DSAC and 3DCarbon (Supplementary Fig. 3 ) validate the amorphous nature of the carbon substrates and the absence of iron nanoparticles in the as-prepared Fe3DSAC. According to the high-resolution high-angle annular dark-field scanning transmission electron microscope (HAADF-STEM; Fig. 1d and Supplementary Fig. 4 ) image, iron single atoms are clearly observed as the dots labelled with white circles, demonstrating the atomic isolation of iron sites 48 . Furthermore, in the extended X-ray absorption fine structure (EXAFS) results shown in Fig. 1e , different from the strong wavelet transform (WT) signal (focused at k = 8.3 Å –1 , r = 2.2 Å) of bulk Fe metal (denoted as Fe metal ref in Fig. 1e,f), the WT signal of Fe3DSAC is focused at k = 6.0 Å –1 , r = 1.5 Å, indicating an absence of Fe–Fe bonds in the Fe3DSAC sample. The similarity between the WT-EXAFS plots of Fe3DSAC and Fe(acac) 3 verifies the atomically dispersed nature of iron sites in Fe3DSAC (refs. 49 , 50 ). Alternatively, as shown in the Fourier-transform (FT) EXAFS spectrum of the bulk Fe metal (Fig. 1f ), the Fe–Fe coordination path between 2.0 and 2.5 Å (shaded light red) is absent in the FT-EXAFS spectrum of Fe3DSAC. We conclude that the FT-EXAFS analysis confirms the atomic dispersion of iron atoms in Fe3DSAC. The marked difference between the X-ray absorption near edge structure (XANES) spectra of the bulk Fe metal and Fe3DSAC, shown in Supplementary Fig. 5 , also indicates a notable difference in coordination environments between the materials 29 . Additionally, the FT-EXAFS curve of Fe3DSAC exhibits more similarity to Fe(acac) 3 and Fe 2 O 3 than to iron foil, verifying the absence of iron clusters and iron nanoparticles in the Fe3DSAC sample. With regards to its coordination environment, the Fe L-edge and O K-edge results (Supplementary Fig. 6 ) suggest that the coordination environment of Fe3DSAC more closely resembles that of Fe(acac) 3 than that of FePc. This indicates that the bonding between Fe and the carbon-based substrate could be Fe–O. Fig. 1: The synthesis procedure. a , Synthesis procedure for 3D-printed SACs. b , Digital photographs of the as-prepared precursor. Scale bar, 1 cm. c , SEM image of Fe3DSAC with the precursor hole size of 2.0 mm. Scale bar, 1 mm. d , HAADF-STEM image of isolated iron sites (circled in white). Scale bar, 2 nm. e , WT-EXAFS contour plots for the Fe3DSAC and reference samples. f , g , Normalized FT-EXAFS spectra ( f ) and Fe K-edge EXAFS fitting analyses ( g ) for Fe3DSAC in R space. The blue coloured strip in ( f ) indicates the bonds between the iron atom and non-metal elements; the red coloured strip indicates the Fe–Fe bonds. The inset in ( g ) shows the structure of a Fe–O 4 –Cl local coordination derived from the EXAFS result, where the gold, red and cyan-coloured spheres represent Fe, O and Cl atoms, respectively. Source data Full size image Furthermore, the local coordination configuration of Fe3DSAC was further investigated by quantitative least-squares EXAFS curve-fitting analyses. Specifically, the major peak of Fe(acac) 3 is located at 1.52 Å while that of Fe3DSAC shifts to 1.76 Å, indicating that some alteration occurs in the coordination environment of Fe3DSAC. FT-EXAFS fitting results suggest that the shift of the Fe–X peaks (shaded by light blue in Fig.
1f ) is due to the introduction of chloride in the post-treatment process. As shown in Fig. 1g , Supplementary Fig. 7 and Supplementary Table 2 , the best-fit results for Fe3DSAC consist of two backscattering paths: Fe–O and Fe–Cl. The coordination numbers of oxygen and chloride are estimated to be 4 and 1, respectively. Additionally, the X-ray photoelectron spectroscopy (XPS) results also confirm the existence of Fe–O and Fe–Cl bonds in Fe3DSAC (Supplementary Fig. 8 ). Taken together, the fitting results confirm a Fe–O 4 –Cl moiety for the Fe3DSAC sample. This proposed coordination configuration of Fe–O 4 –Cl is illustrated by the scheme shown in the inset of Fig. 1g . Universality on central elements To confirm the universality of this approach with respect to the central atom, we extended the synthesis approach to other representative transition metals. By simply replacing Fe(acac) 3 with other metal acetylacetonates, for example, Co(acac) 2 , Ni(acac) 2 , Cu(acac) 2 , Zn(acac) 2 and Pt(acac) 2 , many SACs (denoted as CoSAC, NiSAC, CuSAC, ZnSAC and PtSAC, respectively) were obtained using similar procedures. In the HAADF-STEM image of PtSAC (Fig. 2a ), the atomically dispersed nature of the Pt single-atom sites can be confirmed. Furthermore, the homogeneous dispersion of platinum, nitrogen and carbon in the transmission electron microscopy (TEM) EDX elemental mapping results (Supplementary Fig. 9 ) confirms the absence of obvious metallic platinum particles in PtSAC. In the corresponding WT-EXAFS contour plots (Fig. 2b ), the absence of a WT-EXAFS signal at r > 2.0 Å indicates the atomically dispersed nature of the platinum sites in the as-prepared SACs 49 , 50 . The absence of Pt clusters and nanoparticles in the PtSAC is also supported by the absence of the Pt–Pt path between 2.5 and 3.0 Å in the FT-EXAFS (Supplementary Fig. 10a ) and the notable differences in the XANES spectra (Supplementary Fig. 10b ) between PtSAC and the platinum metallic reference (denoted as Pt metal ref). The absence of metal–metal paths between 2.0 and 2.5 Å (FT-EXAFS), as shown in Fig. 2c and Supplementary Fig. 11 for CuSAC, NiSAC, CoSAC and ZnSAC, indicates the successful preparation of SACs of the corresponding transition metals. Additionally, the fitting results confirm the M–O 4 moieties for these SAC samples (Supplementary Fig. 12 ). Furthermore, the homogeneous dispersion of metal, nitrogen and carbon in the atomic-resolution HAADF-STEM images (Supplementary Figs. 13 and 14 ) and TEM EDX elemental mapping results (Supplementary Figs. 15 – 18 ) confirms the absence of obvious metallic particles in the as-prepared SACs. Additionally, as shown in Supplementary Fig. 19 , the absence of any crystalline peaks in the XRD patterns of the as-prepared samples offers further evidence for the successful preparation of SACs. Furthermore, as shown in Supplementary Figs. 20 – 23 , this approach can also be extended towards synthesizing IrSAC and MnSAC. Therefore, the synthesis approach developed in this study is universal for preparing SACs with different transition metals as centre atoms. Fig. 2: Universality of elements and metal loadings. a , Atomic-resolution HAADF-STEM image of isolated platinum sites (circled in white). Scale bar, 2 nm. b , WT-EXAFS contour plots for the PtSAC and reference samples (for the FT-EXAFS spectrum of PtSAC, see Supplementary Fig. 10 ). c , Normalized FT-EXAFS spectra of CuSAC, NiSAC, CoSAC and ZnSAC.
The aqua coloured strip in ( c ) indicates the bonds between metal elements and non-metal elements (marked as M–X in the figure); the tan coloured strip indicates the metal–metal bonds (marked as M–M in the figure). d , Metal loadings achieved in the corresponding Co and Ni samples. e , Atomic-resolution HAADF-STEM image of NiSAC 1 mg ml –1 . Scale bar, 2 nm. f , Normalized FT-EXAFS spectra of Ni samples with different metal loadings and reference samples. The blue coloured strip in ( f ) indicates the bonds between the nickel atom and non-metal elements; the red coloured strip indicates the nickel–nickel bonds. Source data Full size image To confirm the universality of this approach, we further adapted the synthesis approach to other common parameters, for example, metal loadings, coordination environments and spatial geometries. Universality on metal loadings Different concentrations of the corresponding metal (Co and Ni) acetylacetonates in the printing ink (1 and 10 mg ml –1 ) were applied to evaluate the universality of the approach with respect to metal loading. Inductively coupled plasma results (Fig. 2d ) reveal that the loadings of the Ni-related samples with various Ni(acac) 2 concentrations (denoted as NiSAC 1 mg ml –1 , or NiSAC above, and NiSAC 10 mg ml –1 ) are 4.3% and 20.8%, respectively. The loadings of the corresponding CoSACs (denoted as CoSAC 1 mg ml –1 , or CoSAC above, and CoSAC 10 mg ml –1 ) were 3.9% and 16.2%, respectively. In the corresponding HAADF-STEM image (Fig. 2e ), as indicated by the white circles, the atomically dispersed nature of the nickel atoms can be confirmed. Furthermore, the absence of the Ni–Ni path between 2.0 and 2.5 Å in the FT-EXAFS (Fig. 2f and Supplementary Fig. 24a ) and the absence of a WT-EXAFS signal at r > 2.0 Å in the WT-EXAFS (Supplementary Fig. 25 ) demonstrate the atomically dispersed nature of the Ni atoms in the corresponding samples. A similar conclusion can be drawn for the Co samples based on the corresponding FT-EXAFS and WT-EXAFS results (Supplementary Figs. 26 and 24b ). Additionally, as shown in Supplementary Fig. 27 , the comparable XANES curves of samples with different loadings indicate that the impact of various metal loadings on the coordination environments is negligible. To further demonstrate the capability of regulating transition metal loadings, we employed various printing inks with different concentrations of Fe(acac) 3 to synthesize FeSACs with different loadings. The EXAFS characterizations (Supplementary Fig. 28 ) verify the atomically dispersed nature of the iron sites in the as-prepared samples with different concentrations. The inductively coupled plasma results suggest that the iron contents in the corresponding samples are 7.0% and 12.1%, respectively. Combined with the results of the Co and Ni samples, the capability to regulate the metal loadings is demonstrated. Universality on coordination environments As one of the most important features of SACs is their tunable coordination environments, various approaches were applied in this study to change the coordination environments of the as-prepared SACs. First, different transition metal (Zn and Cu) phthalocyanine (ZnPc and CuPc) salts were applied to further verify that the coordination environment can be altered by adjusting the transition metal precursors. HAADF-STEM images, shown in Fig. 3a,b , demonstrate the atomically dispersed nature of zinc atoms in the as-prepared samples derived from different zinc precursors (the one from ZnPc is denoted ZnSAC PC, while the one from Zn(acac) 2 is denoted ZnSAC AC, or ZnSAC above).
Furthermore, as shown in Supplementary Fig. 29 , the absence of the Zn–Zn path between 2.0 and 2.5 Å and of a WT-EXAFS signal at r > 2.0 Å indicates the mono-atomic dispersion of Zn atoms in the corresponding samples. A similar conclusion can be reached for the Cu samples based on the corresponding FT-EXAFS and WT-EXAFS results (Supplementary Fig. 30 ). These results demonstrate the capability of applying different precursors to prepare SACs by this approach. Fig. 3: Universality of coordination environments and spatial geometries. a , b , Atomic-resolution HAADF-STEM images of ZnSAC AC ( a ) and ZnSAC PC ( b ). Scale bar, 2 nm. c , Zn K-edge EXAFS fitting analyses for the as-prepared Zn samples in R space. d , Fe K-edge EXAFS fitting analyses for Fe3DSAC and the untreated Fe3DSAC sample in R space. e , WT-EXAFS contour plots for the treated and untreated samples (NiSACs and CoSACs). f , Scheme showing the capability of tuning the central atoms and coordination environments of this approach. g , SEM image of Fe3DSAC with the precursor hole size of 1.5 mm. Scale bar, 0.5 mm. h , Normalized FT-EXAFS spectra and fitting results of Fe samples from different geometries and a reference sample. Source data Full size image Additionally, as shown in Fig. 3c and Supplementary Table 2 , the FT-EXAFS curve of ZnSAC AC exhibits a major peak located at 1.57 Å. As a comparison, the main peak of ZnSAC PC is located at 1.53 Å. This difference in peak locations can be attributed to the difference between Zn–O bonds and Zn–N bonds. Furthermore, the best FT-EXAFS fitting results of ZnSAC AC suggest that it consists of one major backscattering path, Zn–O, with the coordination number of oxygen estimated to be 4. Therefore, these results confirm a Zn–O 4 moiety for ZnSAC AC. As regards ZnSAC PC, the fitting results suggest a Zn–N coordination configuration with a coordination number of 4, indicating a Zn–N 4 moiety. Consequently, employing different precursors may provide a means of modifying the coordination environments 51 , 52 , 53 . To further evaluate the universality of our approach, several natural polymers widely used in 3D-printing were employed to synthesize the iron-containing SACs. In the EXAFS results (Supplementary Fig. 31 ) of the as-prepared samples, the Fe–Fe coordination path between 2.0 and 2.5 Å (shaded light red) is absent, verifying the absence of iron clusters and iron nanoparticles in the as-prepared samples derived from these natural polymers as the scaffold base. In addition to modification of precursors, post-treatment could be another effective approach to directly alter the coordination environments 54 . Soaking in hydrochloric acid was selected as a typical post-treatment process to evaluate the capability to modify coordination environments through post-treatment. Specifically, as shown in the FT-EXAFS results (Fig. 3d and Supplementary Table 2 ), the main bonding peak of the untreated Fe3DSAC sample is located at 1.45 Å. Furthermore, the FT-EXAFS fitting results indicate that the untreated sample has a Fe–O 4 moiety. In contrast, as shown in Figs. 1g and 3d , the main FT-EXAFS peak of the treated Fe3DSAC shifted to 1.76 Å. This demonstrates that the treated Fe3DSAC possesses a Fe–O 4 –Cl moiety.
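As a schematic illustration of how such peak positions are read off (the paper's quantitative coordination analysis relies on dedicated least-squares EXAFS fitting, not this toy code), a Gaussian can be fitted to the first-shell region of an FT-EXAFS magnitude; the data below are synthetic.

```python
# Schematic only: locate the main first-shell FT-EXAFS peak by fitting a
# Gaussian to |chi(R)|. Synthetic data; real analyses use dedicated EXAFS
# fitting codes to extract scattering paths and coordination numbers.
import numpy as np
from scipy.optimize import curve_fit

def gaussian(r, amp, r0, sigma):
    return amp * np.exp(-((r - r0) ** 2) / (2.0 * sigma ** 2))

r = np.linspace(0.5, 3.5, 300)                        # R grid in angstroms
rng = np.random.default_rng(3)
signal = gaussian(r, 1.0, 1.76, 0.15) + rng.normal(0.0, 0.02, r.size)

popt, _ = curve_fit(gaussian, r, signal, p0=[1.0, 1.6, 0.2])
print(f"first-shell peak at R = {popt[1]:.2f} angstrom")  # cf. 1.76 angstrom for treated Fe3DSAC
```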
The successful alteration of the Fe–O 4 moiety to a Fe–O 4 –Cl moiety validates that post-treatment remains an effective approach for changing coordination environments in 3D-printed SACs 54 , 55 , 56 . To further evaluate the universality of post-treatment for modifying the as-prepared samples, samples with different central transition metal atoms (Co, Ni, Mn) were prepared following a post-treatment process similar to that stated above. As shown in Supplementary Fig. 32 , all as-prepared samples exhibit broader peaks located between 1.5 and 2.0 Å. Considering that the corresponding metal–metal paths of these three elements are located above 2.0 Å, the FT-EXAFS results demonstrate the absence of metal nanoparticles and clusters in these samples. Furthermore, as shown in Fig. 2c , the main peaks of the untreated samples are located between 1.4 Å and 1.5 Å. A peak close to 2.0 Å appears in each treated sample, which can be attributed to metal–chloride bonds 54 , 55 , 56 . As shown in Fig. 3e , the additional WT signals focused at around k = 8.0 Å –1 , r = 2.0 Å also reveal the existence of metal–chloride bonds in the treated NiSAC and treated CoSAC. The above results of changing the coordination environments of iron, nickel and cobalt single-atom sites suggest that the coordination configurations of 3D-printed samples can be easily tuned by changing the precursor or employing a post-treatment procedure, demonstrating the highly universal nature of our approach. As suggested by the results, gelatin/GelMA can be employed to prepare a library of SACs with various central atoms and different coordination environments, as shown in the scheme (Fig. 3f ). Universality on spatial geometries One advantage of the 3D-printing approach is the capability to tune geometries. As shown in Supplementary Fig. 33 , the hole sizes of the precursors were rationally customized to 1.0 mm, 1.5 mm and 2.0 mm. As shown in Figs. 1c , 3g and Supplementary Fig. 34 , the 3D-printed structures remained after the calcination, demonstrating the capability to maintain the microstructure from the 3D-printing technique even after the high-temperature processing. Additionally, as shown in Supplementary Fig. 35 , different patterns of 3D-printed electrodes can also be achieved by changing the printing parameters, showcasing the capability to tune the spatial geometry of SACs. In addition, as shown in Supplementary Fig. 36 , the size of the printing precursor was expanded to 4 × 2 cm. It is expected that the size could be further increased using a commercial-scale 3D-printer. Taken together, the 3D-printing technique offers the unique capability to scale up the size of the as-prepared sample. Afterwards, the atomically dispersed nature of the iron sites in the 3D-printed SACs was demonstrated by FT-EXAFS (Fig. 3h ) and WT-EXAFS (Supplementary Fig. 37 ) analyses. The fitting results suggest a Fe–O 4 –Cl moiety for all the as-prepared samples. This demonstrates that different 3D-printing settings cause negligible alteration of the single-atom dispersion and coordination environments of the 3D-printed samples. Performance evaluation of 3D-printed SACs As a proof-of-concept, to evaluate the potential of the as-obtained SACs, Fe3DSAC was employed as an electrocatalyst for the nitrate reduction reaction (eNO x RR), as iron single atoms have exhibited superior performance in this reaction 48 , 57 , 58 . The eNO x RR performance of Fe3DSAC was evaluated in an Ar-saturated 0.10 M KOH aqueous solution with 10 mM NO 3 − .
The detailed electrochemical cell configuration is shown in Supplementary Fig. 38 . The as-prepared Fe3DSAC exhibited a notably increased ammonia production compared with 3DCarbon. Specifically, as quantified by UV-Vis measurements (Fig. 4a and Supplementary Fig. 39 ), the yield of ammonia of Fe3DSAC at –0.6 V versus the reversible hydrogen electrode (RHE) was ~4.55 μmol cm −2 h −1 , which was about 7.5 times the 0.61 μmol cm −2 h −1 from 3DCarbon. Furthermore, as shown in Fig. 4b and Supplementary Figs. 40 and 41 , Fe3DSAC exhibited higher yields of ammonia than 3DCarbon at all potentials, indicating good function of the 3D-printed SACs. Additionally, the electrocatalytic performance can be further tuned by changing the central elements and spatial geometries of the 3D-printed SACs (Supplementary Fig. 42 ). Based on the previous FT-EXAFS fitting results, we constructed models of the iron single-atom site (denoted as FeSAC) with a Fe–O 4 –Cl moiety to discuss the underlying mechanism of the as-prepared single-iron-atom catalyst. The density functional theory (DFT) calculations, shown in Fig. 4c and Supplementary Fig. 43 , reveal that the most likely path is the one with the protonation of *NO to *NHO, followed by the formation of the *NHOH and *NH key reaction intermediates. They also suggest that, among all the elementary steps, the potential-limiting step on FeSAC is the protonation of *NO to *NHO/*NOH, at which the formation of *NHO exhibited more favourable thermodynamics. Finally, the electrocatalytic stability was evaluated by replacing the electrolytes every hour for eight cycles. As shown in Fig. 4d , no meaningful change in ammonia production and negligible variation in the current density were observed during long-term electrolysis, further demonstrating the stability of the as-prepared Fe3DSAC. The ex situ XANES results and the FT-EXAFS fitting results (Fig. 4e , Supplementary Fig. 44 and Supplementary Table 2 ) confirm that the Fe–O 4 –Cl moiety of Fe3DSAC was maintained after eNO x RR, indicating the stability of the single-atom sites in such a 3D-printed structure. Fig. 4: Electrocatalytic performance of 3D-printed SACs for the nitrate reduction reaction. a , Comparison between the NH 3 yields of Fe3DSAC and 3DCarbon at –0.6 V versus RHE. b , NH 3 yield at varying potentials in 0.10 M KOH with the addition of 10 mM NO 3 − . c , Free energy diagram showing the minimum energy pathway of the iron single-atom site (enlarged optimized configurations can be found in Supplementary Table 1 ). d , Stability tests for Fe3DSAC at –0.5 V versus RHE in 0.10 M KOH with 0.10 M NO 3 − . e , Ex situ normalized FT-EXAFS spectra and fitting results of reacted Fe3DSAC, pristine Fe3DSAC and the Fe metal reference. The insets in ( c , e ) show ( c ) optimized configurations of reaction intermediates and ( e ) the structure of a Fe–O 4 –Cl local coordination derived from the EXAFS result. Colour code: gold, iron; red, oxygen; cyan, chlorine; blue, nitrogen; white, hydrogen; grey, carbon. Source data Full size image Conclusions Taken together, compared with previously reported general synthesis approaches for SACs, the 3D-printing approach with GelMA and gelatin described herein to produce SACs is straightforward, affordable and effective for mass production.
A highly universal synthesis approach was developed to fabricate different SACs with various synthesis parameters, including central elements, metal loadings of central elements, coordination environments and spatial geometries, with minimal alteration of the atomic dispersion. Furthermore, the 3D-printed SAC exhibited solid performance as an electrocatalyst for the nitrate reduction reaction, showcasing the potential of 3D-printed SACs for downstream catalytic applications. More importantly, the 3D-printing technique not only offers the capability to obtain SACs with the desired geometry for different reactions but also provides a promising avenue to scale up the manufacture of SACs, paving the way for implementing SACs to achieve sustainable production of valuable fuels and chemicals. Methods Synthesis of GelMA The synthesis of GelMA was based on a previous study 59 . Briefly, 45 g gelatin was dissolved in 450 ml phosphate-buffered saline at 50 °C with vigorous stirring. After the full dissolution of the gelatin, 27 g methacrylic anhydride was added based on the total mass of gelatin (0.6 g methacrylic anhydride per 1.0 g gelatin). The reaction was carried out at 50 °C for 1 h, followed by dialysis of the solution against MilliQ water for 5 days, with the water changed twice per day, to remove the unreacted monomers. After the dialysis, the solution was lyophilized under sterile conditions and stored at −20 °C for future use. Synthesis of Fe3DSAC and 3DCarbon GelMA (0.4 g) and gelatin (0.2 g) were dissolved in 3.6 ml water to form the base printing ink with 10 wt% GelMA and 5 wt% gelatin. Fe(acac) 3 aqueous suspension (0.4 ml of 100 mg ml –1 ) was mixed with the base printing ink to formulate the desired printing ink (~10 mg ml –1 Fe(acac) 3 ) using a water sonication bath at 45 °C. Before printing, 0.1 ml of 1 mM tris(2,2-bipyridyl) dichlororuthenium (II) hexahydrate and 0.1 ml of 10 mM sodium persulfate were added as photoinitiators. The printing ink of 3DCarbon was prepared following the same procedure without the addition of the Fe(acac) 3 . Three-dimensional-printed Fe3DSAC or 3DCarbon constructs were fabricated using a BioScaffolder (SYS+ENG, Germany) 3D printer. The printing head temperature was 27 °C and the base-plate temperature was 4 °C. The specific printing parameters were: inner diameter of the printing head, 300 µm; XY-plane speed, 550 mm min –1 ; Z-speed, 800 mm min –1 ; auger speed, 3.5 rpm; and fibre spacing of 1, 1.5 or 2 mm in a 0–90° repeating pattern. After eight layers were 3D printed, the printed samples were cross-linked under visible light with an intensity of 30 mW cm –2 for 3 min, followed by lyophilization. Afterwards, the as-prepared precursor was placed in a tube furnace and heated to 700 °C for 2 h at a heating rate of 2 °C min −1 under an argon atmosphere, followed by natural cooling to room temperature. The as-prepared products were collected for further use or treatment. Synthesis of CoSAC, NiSAC, ZnSAC, CuSAC, IrSAC, MnSAC and PtSAC The synthesis procedure was similar to that of Fe3DSAC but with the Fe(acac) 3 replaced by the corresponding metal acetylacetonates (Co(acac) 2 , Ni(acac) 2 , Zn(acac) 2 , Cu(acac) 2 , Ir(acac) 3 , Mn(acac) 3 and Pt(acac) 2 ). The detailed print ink preparation process is described as follows: the base printing ink was prepared following the same procedure as above. 
Specifically, 0.4 ml of an aqueous suspension (10 mg ml –1 ) of the corresponding metal acetylacetonate was mixed with the base printing ink to formulate the desired printing ink (~1 mg ml –1 ) using a water sonication bath at 45 °C. The subsequent preparation processes were similar to those for Fe3DSAC but without the post-treatment. Synthesis of NiSAC 1 mg ml –1 , NiSAC 10 mg ml –1 , CoSAC 1 mg ml –1 and CoSAC 10 mg ml –1 The synthesis procedure was similar to that for the single-atom analogue but with the concentration of the corresponding metal acetylacetonate increased from 1 mg ml –1 to 10 mg ml –1 . The detailed print ink preparation process is described as follows: the base printing ink was prepared following the same procedure as above. Specifically, 0.4 ml of an aqueous suspension (10 mg ml –1 ) of the corresponding metal (Ni and Co) acetylacetonate was mixed with the base printing ink to formulate the desired printing ink (~1 mg ml –1 ) using a water sonication bath at 45 °C for NiSAC 1 mg ml –1 and CoSAC 1 mg ml –1 . Additionally, due to the limited solubility of metal acetylacetonates in water, 50 mg of the corresponding metal (Ni and Co) acetylacetonate was dissolved in 0.5 ml ethanol to prepare the corresponding suspension (100 mg ml –1 ). The as-prepared ethanol solution of the corresponding metal acetylacetonate (0.4 ml) was mixed with the base printing ink to formulate the desired printing ink (~10 mg ml –1 ) using a water sonication bath at 45 °C for NiSAC 10 mg ml –1 and CoSAC 10 mg ml –1 . The subsequent preparation processes were similar to those for Fe3DSAC but without the post-treatment. Synthesis of ZnSAC AC, ZnSAC PC, CuSAC AC and CuSAC PC The synthesis procedure was similar to those of the single-atom analogues but with the metal acetylacetonates (Cu(acac) 2 , Zn(acac) 2 ) replaced by the corresponding metal phthalocyanines (copper phthalocyanine, zinc phthalocyanine). The detailed print ink preparation process is described as follows: the base printing ink was prepared following the same procedure as above. Specifically, 0.4 ml of an aqueous suspension (10 mg ml –1 ) of the corresponding metal (Zn and Cu) acetylacetonate was mixed with the base printing ink to formulate the desired printing ink (~1 mg ml –1 ) using a water sonication bath at 45 °C for ZnSAC AC and CuSAC AC. Additionally, 0.4 ml of an aqueous suspension (10 mg ml –1 ) of the corresponding metal (Zn and Cu) phthalocyanine was mixed with the base printing ink to formulate the desired printing ink using a water sonication bath at 45 °C for ZnSAC PC and CuSAC PC. The subsequent preparation processes were similar to those of Fe3DSAC but without the post-treatment. Synthesis of FeSAC with other polymers For gelatin, 0.75 g gelatin was dissolved in 5.0 ml water to form the base printing ink with 15 wt% gelatin. Fe(acac) 3 aqueous suspension (0.5 ml of 10 mg ml –1 ) was mixed with the base printing ink to formulate the desired printing ink (~1 mg ml –1 Fe(acac) 3 ) using a water sonication bath at 45 °C. Additionally, 0.1 ml of 1 mM tris(2,2-bipyridyl) dichlororuthenium (II) hexahydrate and 0.1 ml of 10 mM sodium persulfate were added as photoinitiators. The subsequent preparation processes were similar to those for Fe3DSAC but without the post-treatment. For agarose, 50 mg agarose was dissolved in 5.0 ml water using a water bath at 75 °C. 
Fe(acac) 3 aqueous suspension (0.5 ml of 10 mg ml –1 ) was mixed with the base printing ink to formulate the desired printing ink (~1 mg ml –1 Fe(acac) 3 ) using a water sonication bath at 75 °C. Upon cooling to room temperature, the hydrogel formed and was then lyophilized. The subsequent preparation processes were similar to those of Fe3DSAC but without the post-treatment. For alginate, 5.2 ml of 1 wt% sodium alginate aqueous solution was mixed with 0.5 ml of Fe(acac) 3 aqueous suspension (10 mg ml –1 ). Afterwards, 3.6 ml of 0.2 M CaCl 2 aqueous solution was poured into the mixed solution of sodium alginate and Fe(acac) 3 to formulate the desired printing ink, followed by lyophilization. The subsequent preparation processes were similar to those for Fe3DSAC but without the post-treatment. Post-treatment process for chloride coordination After the calcination, the as-prepared products were directly soaked in 1.0 M hydrochloric acid at 60 °C for 18 h. After washing three times with distilled water, the treated samples were placed in an oven at 60 °C overnight to dry. It is worth noting that all Fe3DSAC samples went through this post-treatment process except for the untreated one. Materials characterization The morphology of the samples was characterized using an FEI Quanta 450 SEM operated at 20.0 kV and an FEI Titan Themis 80-200 TEM operated at 200.0 kV. Absorbance data were collected on a SHIMADZU UV-2600 UV-Vis spectrophotometer. XRD patterns were obtained using a Bruker D8 ADVANCE ECO X-ray diffractometer with Cu Kα radiation. X-ray absorption spectra were collected on the X-ray Absorption Spectroscopy and Soft X-ray Spectroscopy beamline at the Australian Synchrotron. All the as-prepared SACs were granulated and measured at room temperature in fluorescence excitation mode. The corresponding reference samples were mixed with cellulose and measured in transmission mode using standard He-filled chambers. The raw EXAFS data were background-subtracted, normalized and Fourier transformed by standard procedures using the ATHENA module implemented in the IFEFFIT software package 60 . Least-squares curve-fitting analyses of the EXAFS χ ( k ) data were carried out using the ARTEMIS program 60 . The WT of the raw EXAFS data was conducted with a suite of software modules developed by Marina Chukalina (IMT-RAS) and Harald Funke (IRC-FZR) 49 , 50 . For the metal-loading tests, each Fe, Co and Ni sample was placed in a 50-ml autoclave with a mixed solution of 7.5 ml distilled water and 2.5 ml of nitric acid (70%). After sealing, the autoclave was kept at 150 °C for 8 h to dissolve the carbon. The obtained solutions were used to determine the contents of Fe, Co and Ni with an inductively coupled plasma mass spectrometer (Agilent 8900x QQQ-ICP-MS). Electrochemical measurements Electrochemical data were collected with a CHI-760D electrochemical workstation (CHI Instrument, Inc.). An H-type cell with a three-electrode system was used in the electrochemical measurements, in which a graphite rod (φ6 mm × 65 mm) was used as the counter electrode and an Ag/AgCl electrode (filled with saturated KCl) as the reference electrode. The cathodic chamber was separated from the anodic chamber by an anion exchange membrane (Fumasep FAA-3-50). For 3D-printed samples, the as-prepared sample was fixed by a polytetrafluoroethylene (PTFE) replaceable electrode holder with a platinum sheet as the conductive substrate and served directly as the working electrode. 
All experiments were carried out at room temperature and all potentials were referenced against the RHE based on the Nernst equation ( E RHE = E Ag/AgCl + 0.0592 × pH + 0.2). The electrolyte used was 0.10 M KOH with various concentrations of nitrate/nitrite, which was purged with ultra-high purity Ar before the electrolysis process. Quantification of NH 3 by the indophenol blue method 61 Absorbance at 650 nm for each solution was collected with a UV-Vis spectrophotometer (SHIMADZU, UV-2600). A series of standard solutions with suitable NH 4 Cl concentrations diluted with 0.10 M KOH were prepared and tested to plot a calibration curve. The concentrations of ammonia in the test solutions were calculated directly from the calibration curve. The indophenol blue indicator was prepared by mixing three reagents. Chromogenic reagent (A): 5.0 g of sodium salicylate and 5.0 g of potassium sodium tartrate were dissolved in 100 ml of 1.0 M NaOH; oxidizing solution (B): 3.5 ml of sodium hypochlorite (available chlorine 10–15%) was added to 100 ml of deionized water; catalysing reagent (C): 0.2 g of sodium nitroferricyanide was dissolved in 20 ml of deionized water. For the UV-Vis absorbance measurement, 2.0 ml of chromogenic reagent (A), 1.0 ml of oxidizing solution (B) and 0.2 ml of catalysing reagent (C) were added to a vial containing 2.0 ml of the test solution. After mixing and incubation for 1 h, the concentration of the produced indophenol blue was measured with the UV-Vis spectrophotometer. Computational details All calculations were conducted using DFT with the Perdew–Burke–Ernzerhof (PBE) exchange-correlation functional in the VASP code 62 , 63 , 64 . The ionic cores were described by the projector-augmented wave method. A cut-off energy of 500 eV was used for the plane wave expansion. During geometry optimization, the force convergence on each atom was set to be smaller than 0.02 eV Å −1 . A (4 × 4 × 1) Monkhorst-Pack k-point mesh was applied to sample the Brillouin zone. The DFT-D3 method of Grimme was used in all calculations to address van der Waals interactions between atoms. A one-layer 5 × 5 graphene supercell with 15 Å of vacuum space was used to build the FeSAC model. The computational hydrogen electrode model was employed for free energy calculations 65 , 66 . In this model, the free energy of an electron–proton pair at 0 V versus RHE is by definition equal to half of the free energy of gaseous hydrogen at 101,325 Pa. The free energies of intermediates were calculated by $$G = E_{\mathrm{DFT}} + \mathrm{ZPE} - TS,$$ where E DFT , ZPE, T and S are the adsorption energy, zero-point energy, temperature and entropy, respectively. The adsorption energy of *NO 3 was calculated with respect to solvated nitrate ions (NO 3 − ) (ref. 67 ). A 1.12 eV correction was applied to compensate for the DFT-calculated error in the formation energy of HNO 3 (ref. 68 ). Data availability All data supporting the findings of this study are available in the article and its Supplementary Information. Source data are provided with this paper.
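The three quantitative recipes above (the Nernst potential conversion, the calibration-based NH 3 quantification and the free energy expression) reduce to a few lines of code. The following minimal Python sketch reproduces that arithmetic; the standard concentrations and absorbance values are illustrative placeholders, not data from this work.

import numpy as np

def ag_agcl_to_rhe(e_ag_agcl, pH):
    # Nernst conversion quoted above: E_RHE = E_Ag/AgCl + 0.0592 * pH + 0.2 (volts).
    return e_ag_agcl + 0.0592 * pH + 0.2

def nh3_from_absorbance(abs_unknown, conc_std, abs_std):
    # Linear indophenol-blue calibration: fit absorbance at 650 nm against
    # NH4Cl standards of known concentration, then invert the fitted line.
    slope, intercept = np.polyfit(conc_std, abs_std, 1)
    return (abs_unknown - intercept) / slope

def free_energy(e_dft, zpe, T, S):
    # G = E_DFT + ZPE - T*S, as used in the computational hydrogen electrode scheme.
    return e_dft + zpe - T * S

# Placeholder numbers for illustration only:
print(ag_agcl_to_rhe(-1.37, pH=13.0))  # ~ -0.40 V versus RHE in 0.10 M KOH
print(nh3_from_absorbance(0.35,
                          np.array([0.0, 0.1, 0.2, 0.4]),       # mM NH3 standards
                          np.array([0.02, 0.11, 0.20, 0.39])))  # absorbance at 650 nm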
A large international collaboration led by Prof. Shizhang Qiao, an Australian Laureate Fellow at the University of Adelaide, has developed a straightforward and cost-effective synthesis approach using a 3D-printing technique to produce single-atom catalysts (SACs), potentially paving the way for large-scale commercial production with broad industrial applications. The research has been published in Nature Synthesis. The team mailed samples to the Australian Synchrotron during the COVID lockdown for materials characterization using the X-ray absorption spectroscopy (XAS) beamline. A catalyst is a substance designed to drive a specific chemical reaction, converting chemicals into other, less harmful or more valuable industrial products. The efficiency with which a given catalyst aids the reaction is often determined by its surface area. For example, a bulk metallic cobalt foil may aid in chemical reductions, but the same number of cobalt atoms in the form of nanoparticles would be significantly more efficient given the greater surface area available for the reaction to take place. Taken to its extreme, single-atom catalysts (SACs) consist of individual metal atoms, not bonded to other metal atoms but dispersed uniformly on a fixed substrate (such as carbon), offering the highest possible atom economy. The ideal, 100% atom economy corresponds to a chemical reaction in which all reactant atoms end up in the desired product. Synthesis procedure for 3D-printed SACs. Credit: Xie, F., Cui, X., Zhi, X. et al. A general approach to 3D-printed single-atom catalysts. Nat. Synth (2023) The isolated metal atoms have unique and novel physical and chemical properties, driving efficient and tailored catalytic reactions with extremely high catalytic activity. However, current production methods, such as wet-chemical processes, mechano-chemical abrasion, thermal shockwave and laser irradiation, are considered complex, costly and impractical for mass production. "We have developed a synthesis approach that allows the use of 3D printing to fabricate single-atom catalysts. Our method has the potential to be more cost-effective and simpler than current approaches," explained Prof. Qiao. 3D printing allows the customization of geometric designs from millimeters to meters, which is important for industrial applications. The combination of 3D printing and single-atom catalysts provides a promising but simplified way to manufacture SACs at different scales. "This novel combination has the potential to advance Australia's status as a global leader in tackling the effects of climate change and help us take the lead in new techniques to make chemicals that benefit society," said Prof. Qiao. Dr. Bernt Johannessen at the X-ray absorption spectroscopy beamline at ANSTO's Australian Synchrotron. Credit: Australian Nuclear Science and Technology Organisation (ANSTO) Senior scientist Dr. Bernt Johannessen, also a co-author on the paper and a long-time collaborator, carried out measurements on the XAS beamline for the research team across multiple beamtime allocations (and multiple COVID lockdowns). "Pleasingly, we were able to confirm that the 3D printing technique had produced a material consisting of isolated single-atom sites as opposed to nanoparticles or clusters of atoms. The instrument allows us to differentiate between cobalt bonding to light elements, like carbon, or cobalt bonding to other cobalt to form nanoparticles," confirmed Dr. Johannessen. 
"The larger clusters you have, the less effective they will be as single-atom catalysts, so the confirmation of the isolated nature of single-atom sites is crucial to the project conclusions and potential industrial applications. "The XAS Beamline at ANSTO has been integral to a number of high-profile studies in this field over the past several years now, and we are looking forward to seeing how our user community continues to grow over the years ahead."
10.1038/s44160-022-00193-3
Chemistry
How AI might speed up the discovery of new drugs
Anastasiia V. Sadybekov et al, Computational approaches streamlining drug discovery, Nature (2023). DOI: 10.1038/s41586-023-05905-z Journal information: Nature
https://dx.doi.org/10.1038/s41586-023-05905-z
https://phys.org/news/2023-04-ai-discovery-drugs.html
Abstract Computer-aided drug discovery has been around for decades, although the past few years have seen a tectonic shift towards embracing computational technologies in both academia and pharma. This shift is largely defined by the flood of data on ligand properties and binding to therapeutic targets and their 3D structures, abundant computing capacities and the advent of on-demand virtual libraries of drug-like small molecules in their billions. Taking full advantage of these resources requires fast computational methods for effective ligand screening. This includes structure-based virtual screening of gigascale chemical spaces, further facilitated by fast iterative screening approaches. Highly synergistic are developments in deep learning predictions of ligand properties and target activities in lieu of receptor structure. Here we review recent advances in ligand discovery technologies, their potential for reshaping the whole process of drug discovery and development, as well as the challenges they encounter. We also discuss how the rapid identification of highly diverse, potent, target-selective and drug-like ligands to protein targets can democratize the drug discovery process, presenting new opportunities for the cost-effective development of safer and more effective small-molecule treatments. Main Despite amazing progress in basic life sciences and biotechnology, drug discovery and development (DDD) remain slow and expensive, taking on average approximately 15 years and approximately US$2 billion to make a small-molecule drug 1 . Although it is accepted that clinical studies are the priciest part of the development of each drug, most time-saving and cost-saving opportunities reside in the earlier discovery and preclinical stages. Preclinical efforts themselves account for more than 43% of expenses in pharma, in addition to major public funding 1 , driven by the high attrition rate at every step from target selection to hit identification and lead optimization to the selection of clinical candidates. Moreover, the high failure rate in clinical trials (currently 90%) 2 is largely explained by issues rooted in early discovery such as inadequate target validation or suboptimal ligand properties. Finding fast and accessible ways to discover more diverse pools of higher-quality chemical probes, hits and leads with optimal absorption, distribution, metabolism, excretion and toxicology (ADMET) and pharmacokinetics (PK) profiles at the early stages of DDD would improve outcomes in preclinical and clinical studies and facilitate more effective, accessible and safer drugs. The concept of computer-aided drug discovery 3 was developed in the 1970s and popularized by Fortune magazine in 1981, and has since been through several cycles of hype and disillusionment 4 . There have been success stories along the way 5 and, in general, computer-assisted approaches have become an integral, yet modest, part of the drug discovery process 6 , 7 . In the past few years, however, several scientific and technological breakthroughs resulted in a tectonic shift towards embracing computational approaches as a key driving force for drug discovery in both academia and industry. Pharmaceutical and biotech companies are expanding their computational drug discovery efforts or hiring their first computational chemists. 
Numerous new and established drug discovery companies have raised billions in the past few years with business models that heavily rely on a combination of advanced physics-based molecular modelling with deep learning (DL) and artificial intelligence (AI) 8 . Although it is too early yet to expect approved drugs from the most recent computationally driven discovery efforts, they are producing a growing number of clinical candidates, with some campaigns specifically claiming target-to-lead times as low as 1–2 months 9 , 10 , or target-to-clinic time under 1 year 11 . Are these the signs of a major shift in the role that computational approaches have in drug discovery or just another round of the hype cycle? Let us look at the key factors defining the recent changes (Fig. 1 ). First, the structural revolution—from automation in crystallography 12 to microcrystallography 13 , 14 and most recently cryo-electron microscopy technology 15 , 16 —has made it possible to reveal 3D structures for the majority of clinically relevant targets, often in a state or molecular complex relevant to its biological function. Especially impressive has been the recent structural turnaround for G protein-coupled receptors (GPCRs) 17 and other membrane proteins that mediate the action of more than 50% of drugs 18 , providing 3D templates for ligand screening and lead optimization. The second factor is a rapid and marked expansion of drug-like chemical space, easily accessible for hit and lead discovery. Just a few years ago, this space was limited to several million on-shelf compounds from vendors and in-house screening libraries in pharma. Now, screening can be done with ultra-large virtual libraries and chemical spaces of drug-like compounds, which can be readily made on-demand, rapidly growing beyond billions of compounds 19 , and even larger generative spaces with theoretically predicted synthesizability (Box 1 ). The third factor involves emerging computational approaches that strive to take full advantage of the abundance of 3D structures and ligand data, supported by the broad availability of cloud and graphics processing unit (GPU) computing resources to support these methods at scale. This includes structure-based virtual screening of ultra-large libraries 20 , 21 , 22 , using accelerated 23 , 24 , 25 and modular 26 screening approaches, as well as recent growth of data-driven machine learning (ML) and DL methods for predicting ADMET and PK properties and activities 27 . Fig. 1: Key factors driving VLS technology breakthroughs for generation of high-quality hits and leads. a , More than 200,000 protein structures in the PDB, plus private collections, have more than 90% of protein families covered with high-resolution X-ray and more recently cryo-electron microscopy structures, often in distinct functional states, with remaining gaps also filled by homology or AlphaFold2 models. b , The chemical space available for screening and fast synthesis has grown from about 10 7 on-shelf compounds in 2015 to more than 3 × 10 10 on-demand compounds in 2022, and can be rapidly expanded beyond 10 15 diverse and novel compounds. c , Computational methods for VLS include advances in fast flexible docking, modular fragment-based algorithms, DL models and hybrid approaches. d , Computational tools are supported by rapid growth of affordable cloud computing, GPU acceleration and specialized chips. 
Although the impacts of the recent structural revolution 17 and computing hardware in drug discovery 28 are comprehensively reviewed elsewhere, here we focus on the ongoing expansion of accessible drug-like chemical spaces as well as current developments in computational methods for ligand discovery and optimization. We detail how emerging computational tools applied in gigaspace can facilitate the cost-effective discovery of hundreds or even thousands of highly diverse, potent, target-selective and drug-like ligands for a desired target, and put them in the context of experimental approaches (Table 1 ). Although the full impact of new computational technologies is only starting to affect clinical development, we suggest that their synergistic combination with experimental testing and validation in the drug discovery ecosystem can markedly improve its efficiency in producing better therapeutics. Table 1 Comparison of experimentally driven HTS, fragment-based ligand discovery, gigascale DEL screening and gigascale VLS Box 1 Types of chemical libraries and spaces for drug discovery Pharma companies amass collections of compounds for screening in-house, whereas in-stock collections from vendors (see the figure, part a ) allow fast (less than 1 week) delivery, contain unique and advanced chemical scaffolds, are easily searchable and are HTS compatible. However, the high cost of handling physical libraries, their slow linear growth, limited size and novelty constrain their applications. More recently, virtual on-demand chemical databases (fully enumerated) and spaces (not enumerated) allow fast parallel synthesis from available building blocks, using validated or optimized protocols, with synthetic success of more than 80% and delivery in 2–3 weeks (see the figure, part b ). The virtual chemical spaces assure high chemical novelty and allow fast polynomial growth with the addition of new synthons and reaction scaffolds, including 4+ component reactions. Examples include Enamine REAL, Galaxy by WuXi, CHEMriya by Otava and private databases and spaces at pharmaceutical companies. Generative spaces, unlike on-demand spaces, comprise theoretically possible molecules and collectively could comprise all chemical space (see the figure, part c ). Such spaces are limited only by theoretical plausibility, estimated at 10 23 –10 60 drug-like compounds. Although allowing comprehensive space coverage, the reaction path and success rate of generated compounds are unknown, and thus require computational prediction of their practical synthesizability. Examples of generative spaces and their subsets include GDB-13, GDB-17, GDB-18 and GDBChEMBL. Expansion of accessible chemical space Why bigger is better The limited size and diversity of screening libraries have long been a bottleneck for detection of novel potent ligands and for the whole process of drug discovery. An average ‘affordable’ high-throughput screening (HTS) campaign 29 uses screening libraries of about 50,000–500,000 compounds and is expected to yield only a few true hits after secondary validation. Those hits, if any, are usually rather weak, non-selective, have suboptimal ADMET and PK properties and unknown binding mode, so their discovery entails years of painstaking trial-and-error optimization efforts to produce a lead molecule with satisfying potency and all the other requirements for preclinical development. 
Scaling of HTS to a few million compounds can be afforded only in big pharma, and it still does not make that much difference in terms of the quality of resulting hits. Likewise, virtual libraries that use in silico screening were traditionally limited to a collection of compounds available in stock from vendors, usually comprising fewer than 10 million unique compounds, therefore the scale advantage over HTS was marginal. Although chasing the full coverage of the enormous drug-like chemical space (estimated at more than 10 63 compounds) 30 is a futile endeavour, expanding the screening of on-demand libraries by several orders of magnitude to billions and more of previously unexplored drug-like compounds, either physical or virtual, is expected to change the drug discovery model in several ways. First, it can proportionally increase the number of potential hits in the initial screening 31 (Fig. 2 ). This abundance of ligands in the library also increases the chances of identification of more potent or selective ligands, as well as ligands with better physicochemical properties. This has been demonstrated in ultra-large virtual screening campaigns for several targets, revealing highly potent ligands with affinities often in the mid-nanomolar to sub-nanomolar range 20 , 21 , 22 , 23 , 26 . Second, the accessibility of hit analogues in the same on-demand spaces streamlines a generation of meaningful structure–activity relationship (SAR)-by-catalogue and further optimization steps, reducing the amount of elaborate custom synthesis. Last, although the library scale is important, properly constructed gigascale libraries can expand chemical diversity (even with a few chemical reactions 32 ), chemical novelty and patentability of the hits, as almost all on-demand compounds have never been synthesized before. Fig. 2: Benefits of a bigger chemical space. The red curves in log scale illustrate the distribution of screening hits with binding scores better than X for libraries of 10 billion, 100 million and 1 million compounds, as estimated from previous VLS and V-SYNTHES screening campaigns. The blue curves illustrate the approximate dependence of the experimental hit rate on the predicted docking score for 10-µM, 1-µM and 100-nM thresholds 20 . This analysis (semi-quantitative, as it varies from target to target) suggests that screening of more than 100 million compounds lifts the limitations of smaller libraries, extending the tail of the hit distribution towards better binding scores with high hit rates, and allowing for identification of proportionally more experimental hits with higher affinity. Note also two important factors justifying further growth of screening libraries to 10 billion and more: (1) the candidate hits for synthesis and experimental testing are usually picked as a result of target-dependent post-processing of several thousands of top-scoring compounds, which selects for novelty, diversity, drug likeness and often interactions with specific receptor residues. Thus, the more good-scoring compounds that are identified, the better overall selection can be made. (2) Saturation of the hit rate curves at best scores is not a universal rule but a result of the limited accuracy of fast scoring functions used in screening. 
Using more accurate docking or scoring approaches (flexible docking, quantum mechanical and free energy perturbation) in the post-processing step can extend a meaningful correlation of binding score with affinity further left (grey dashed curves), potentially bringing even more high-affinity hits for gigascale chemical spaces. Physical libraries Several approaches have been developed recently to push the library size limits in HTS, including combinatorial chemistry and large-scale pooling of the compounds for parallel assays. For example, affinity-selection mass spectrometry techniques can be applied to identify binders directly in pools of thousands of compounds 33 without the need for labelling. DNA-encoded libraries (DELs) and cost-effective approaches to generate and screen them have also been developed 34 , making it possible to work with as many as approximately 10 10 compounds in a single test tube 35 . These methods have their own limitations; as DELs are created by tagging ligands with unique DNA sequences through a linker, DNA conjugation limits the chemistries possible for the combinatorial assembly of the library. Screening of DELs may also yield a large number of false negatives by blocking important moieties for binding and, more importantly, false positives by nonspecific binding of DNA labels, so expensive off-DNA resynthesis of hit compounds is needed for their validation. To avoid this resynthesis, it has been suggested to use ML models trained on DEL results for each target to predict drug-like ligands from on-demand chemical spaces, as described in ref. 36 . Virtual on-demand libraries In silico screening of virtual libraries by fast computational approaches has long been touted as a cost-effective way to overcome the limitations of physical libraries. Only recently, however, have synthetic chemistry and cheminformatics approaches been developed to break out of these limits and construct virtual on-demand libraries that explore much larger chemical space, as reviewed in refs. 37 , 38 . In 2017, the readily accessible (REAL) database by Enamine 19 , 39 became the first commercially available on-demand library based on the robust reaction principle 40 , whereas the US National Institutes of Health developed the synthetically accessible virtual inventory (SAVI) 41 , which also uses Enamine building blocks. The REAL database uses carefully selected and optimized parallel synthesis protocols and a curated collection of in-stock building blocks, making it possible to guarantee the fast (less than 4 weeks), reliable (80% success rate) and affordable synthesis of a set of compounds 21 . Driven by new reactions and diverse building blocks, the fully enumerated REAL database has grown from approximately 170 million compounds in 2017 to more than 5.5 billion compounds in 2022 and comprises the bulk of the popular ZINC20 virtual screening database 42 . The practical utility of the REAL database has been recently demonstrated in several major prospective screening campaigns 20 , 21 , 23 , 24 , some of them taking further hit optimization steps in the same chemical space, yielding selective nanomolar and even sub-nanomolar ligands without any custom synthesis 20 , 21 . Similar ultra-large virtual libraries (that is, GalaXi and CHEMriya) are available commercially, although their synthetic success rates are yet to be published. 
Virtual chemical spaces The modular nature of on-demand virtual libraries supports further growth by the addition of reactions and building blocks. However, building, maintaining and searching fully enumerated chemical libraries comprising more than a few billion compounds become slow and impractical. Such gigascale virtual libraries are therefore usually maintained as non-enumerated chemical spaces, defined by a specific set of building blocks and reactions (or transforms), as comprehensively reviewed in ref. 38 . Within pharma, one of the first published examples includes PGVL by Pfizer 37 , 43 , the most recent version of which uses a set of 1,244 reactions and in-house reagents to account for 10 14 compounds. Other biopharma companies have their own virtual chemical spaces 38 , 44 , although their details are often not in the public domain. Among commercially available chemical spaces, GalaXi Space by WuXi (approximately 8 billion compounds), CHEMriya by Otava (11.8 billion compounds) and Enamine REAL Space (36 billion compounds) 45 are among the largest and most established. In addition to their enormous sizes, these virtual spaces are highly novel and diverse, and have minimal overlap (less than 10%) with each other 46 . Currently, the largest commercial space, Enamine REAL Space, is an extension of the REAL database that maintains the same synthetic speed, rate and cost guarantees, covering more than 170 reactions and more than 137,000 building blocks (Box 1 ). Most of these reactions are two-component or three-component, but more four-component or even five-component reactions are being explored, enabling higher-order combinatorics. This space can be easily expanded to 10 15 compounds based on available reactions and extended building block sets, for example, 680 million make-on-demand (MADE) building blocks 47 , although synthesis of such compounds involves more steps and is more expensive. To represent and navigate combinatorial chemical spaces without their full enumeration, specialized cheminformatics tools have been developed, from fragment-based chemical similarity searches 48 to more elaborate 3D molecular similarity search methods based on atomic property fields, such as the rapid isostere discovery engine (RIDE) 38 . An alternative proposed approach to building chemical spaces generates hypothetically synthesizable compounds following simple rules of synthetic feasibility and chemical stability. Thus, the generated databases (GDB) enumerate compounds that can be made of a specific number of atoms; for example, GDB-17 contained 166.4 billion molecules of up to 17 atoms of C, N, O, S and halogens 49 , whereas GDB-18, with molecules of up to 18 atoms, would reach an estimated 10 13 compounds 38 . Other generative approaches based on narrower definitions of chemical spaces are now used in de novo ligand design with DL-based generative chemistry (for example, ref. 50 ), as discussed below. Although the synthetic success rates of some of the commercial on-demand chemical spaces (for example, Enamine REAL Space) have been thoroughly validated 20 , 21 , 22 , 23 , 24 , 26 , 42 , the synthetic accessibilities and success rates of other chemical spaces remain unpublished 38 . These are important metrics for the practical sustainability of on-demand synthesis because reduced success rates or unreasonable time and cost would diminish its advantage over custom synthesis. 
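To make the combinatorics of such spaces concrete, a back-of-envelope Python sketch shows how a non-enumerated space size follows directly from its reaction and building-block counts; the reaction names and block counts below are hypothetical placeholders rather than any vendor's actual statistics.

# Each reaction contributes the product of the number of compatible
# building blocks at each of its positions; the space is never enumerated.
reactions = [
    {"name": "amide_coupling", "blocks_per_position": [40_000, 60_000]},       # 2-component
    {"name": "sulfonamide",    "blocks_per_position": [30_000, 50_000]},       # 2-component
    {"name": "triazine",       "blocks_per_position": [5_000, 5_000, 5_000]},  # 3-component
]

total = 0
for rxn in reactions:
    n = 1
    for blocks in rxn["blocks_per_position"]:
        n *= blocks
    print(f'{rxn["name"]}: {n:.2e} products')
    total += n

print(f"space size: {total:.2e} compounds")  # ~1.3e11 from just three reactions

Adding one new reaction or a few thousand new building blocks multiplies the accessible space, which is why such spaces grow polynomially while enumerated in-stock catalogues grow only linearly.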
Computational approaches to drug design Challenges of gigascale screening Chemical spaces of gigascale and terrascale, provided that they maintain high drug likeness and diversity, are expected to harbour millions of potential hits and thousands of potential lead series for any target. Moreover, their highly tractable robust synthesis simplifies any downstream medicinal chemistry efforts towards final drug candidates. Dealing with such virtual libraries, however, calls for new computational approaches that meet special requirements for both speed and accuracy. They have to be fast enough to handle gigascale libraries. If docking of a compound takes 10 s per CPU core, it would take more than 3,000 years to screen 10 10 compounds on a single CPU core, or cost approximately US$1 million on a computing cloud at the cheapest CPU rates. At the same time, gigascale screening must be extremely accurate, safeguarding against false-positive hits that effectively cheat the scoring function by exploiting its holes and approximations 31 . Even a one-in-a-million rate of false positives in a 10 10 compound library would comprise 10,000 false hits, which may flood out any hit candidate selection. The artefact rate and nature may depend on the target and screening algorithms and should be carefully addressed in screening and post-processing. Although there is no one simple solution for such artefacts, some practical and reasonably cost-effective remedies include: (1) selection based on the consensus of two different scoring functions, (2) selection of highly diverse hits (many artefacts cluster to similar compounds), (3) hedging the bets from several ranges of scores 31 and (4) manually curating the final list of compounds for any unusual interactions. Ultimately, it is highly desirable to fix as many remaining ‘holes in the scoring functions’ as possible, and reoptimize them for high selectivity in the range of scores where the top true hits of gigaspace are found. Missing some hits in screening (false negatives) would be well tolerated because of the huge number of potential hits in the 10 10 space (for example, losing 50% of a million potential hits is perfectly fine), so some trade-off in score sensitivity is acceptable. The major types of computational approaches to screening a protein target for potential ligands are summarized in Table 2 . Below, we discuss some emerging technologies and how they can best fit into the overall DDD pipeline to take full advantage of growing on-demand chemical spaces. Table 2 Major types of virtual screening algorithms Receptor structure-based screening In silico screening by docking molecules of the virtual library into a receptor structure and predicting its ‘binding score’ is a well-established approach to hit and lead discovery and had a key role in recent drug discovery success stories 11 , 17 , 51 . The docking procedure itself can use molecular mechanics, often in internal coordinate representation, for rapid conformational sampling of fully flexible ligands 52 , 53 , using empirical 3D shape-matching approaches 54 , 55 , or combining them in a hybrid docking funnel 56 , 57 . Special attention is devoted to ligand scoring functions, which are designed to reliably remove non-binders to minimize false-positive predictions, which is especially relevant with the growth of library size. 
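The screening-cost arithmetic quoted above can be reproduced in a few lines of Python; the cloud rate is an assumed figure chosen to be consistent with the approximate US$1 million estimate in the text.

library_size   = 10**10   # compounds
sec_per_ligand = 10       # docking time per compound on one CPU core
usd_core_hour  = 0.036    # assumed effective cloud rate, US$ per core-hour

core_seconds = library_size * sec_per_ligand
core_years   = core_seconds / (3600 * 24 * 365)
cloud_cost   = core_seconds / 3600 * usd_core_hour
print(f"{core_years:,.0f} core-years")       # ~3,171 core-years on a single core
print(f"US${cloud_cost:,.0f} on the cloud")  # ~US$1,000,000 at this assumed rate

# Even a tiny false-positive rate floods the hit list at this scale:
false_positive_rate = 1e-6
print(f"{library_size * false_positive_rate:,.0f} false hits")  # 10,000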
Blind assessments of the performance of structure-based algorithms have been routinely performed as a D3R Grand Challenge community effort 58 , 59 , showing continuous improvements in ligand pose and binding energy predictions for the best algorithms. Results of the many successful structure-based prospective screening campaigns have been published over the years covering all major classes of targets, most recently GPCRs, as reviewed in refs. 17 , 51 , 60 , whereas countless more have been used in industry. The focused candidate ligand sets, predicted by such screening, often show useful (10–40%) hit rates in experimental testing 60 , yielding novel hits for many targets with potencies in the 0.1–10-μM range (for those that are published, at least). Further steps in optimization of the initial hits obtained from standard screening libraries of less than 10 million compounds, however, usually require expensive custom synthesis of analogues, which has been afforded only in a few published cases 20 , 61 . Identification of hits directly in much larger chemical spaces such as REAL Space not only can bring more and better hits 31 but also supports their optimization, as any resulting hit has thousands of analogues and derivatives in the same on-demand space. This advantage was especially helpful for such challenging targets as SARS-CoV-2 main protease (M pro ), for which hundreds of standard virtual ligand screening (VLS) attempts came up empty-handed 62 (see discussion on M pro challenges in ‘Hybrid in vitro–in silico approaches’ below). Although the initial hit rates were low even in the ultra-large screens, VirtualFlow 24 of the REAL database with 1.4 billion compounds still identified hits in the 10–100-µM range, which were optimized via on-demand synthesis 63 to yield quality leads with the best compound Z222979552 (half maximal inhibitory concentration (IC 50 ) = 1.0 μM). Another ultra-large screen of 235 million compounds, based on a newer M pro structure with a non-covalent inhibitor (Protein Data Bank (PDB) ID: 6W63 ), also produced viable hits, fast optimization of which resulted in the discovery of nanomolar M pro inhibitors in just 4 months by a combination of on-demand and simple custom chemistry 64 . The best compound in this work had good in vitro ADMET properties, with an affinity of 38 nM and a cell-based antiviral potency of 77 nM, which are comparable to clinically used PF-07321332 (nirmatrelvir) 65 . With increasing library sizes, the computational time and cost of docking itself become the main bottleneck in screening, even with massively parallel cloud computing 60 . Iterative approaches have been recently suggested to tackle libraries of this size; for example, VirtualFlow used stepwise filtering of the whole library with docking algorithms of increasing accuracy to screen approximately 1.4 billion Enamine REAL compounds 23 , 24 . Although improving speed several-fold, the method still requires a fully enumerated library and its computational cost grows linearly with the number of compounds, limiting its applicability in rapidly expanding chemical spaces. Modular synthon-based approaches The idea of designing molecules from a limited set of fragments to optimally fill the receptor binding pocket has been entertained from the early years of drug discovery, implemented, for example, in the LUDI algorithm 66 . However, custom synthesis of the designed compounds remained the major bottleneck of such approaches. 
The recently developed virtual synthon hierarchical enumeration screening (V-SYNTHES) 26 technology applies fragment-based design to on-demand chemical spaces, thus avoiding the challenges of custom synthesis (Fig. 3 ). Starting with the catalogue of REAL Space reactions and building blocks (synthons), V-SYNTHES first prepares a minimal library of representative chemical fragments by fully enumerating synthons at one of the attachment points, capping the other position (or positions) with a methyl or phenyl group. Docking-based screening then allows selection of the top-scoring fragments (for example, the top 0.1%) that are predicted to bind well into the target pocket. This is repeated for a second position (and then third and fourth positions, if available), and the resulting focused libraries are screened at each iteration against the target pocket. At the final step, the top approximately 50,000 full compounds from REAL Space are docked with more elaborate and accurate docking parameters or methods, and the top-ranking candidates are filtered for novelty, diversity and a variety of desired drug-like properties. In post-processing, the best 50–500 compounds are selected for synthesis and testing. Our assessment suggests that combining synthons with the scaffolds and capping them with minimal dummy groups in the V-SYNTHES algorithm is a critical requirement for optimal fragment predictions because reactive groups of building blocks and scaffolds often create strong, yet false, interactions that are not present in the full molecule. Another important part of the algorithm is the evaluation of the fragment-binding pose in the target, which prioritizes those hits with minimal caps pointed into a region of the pocket where the fragment has space to grow. Fig. 3: Synthon-based hierarchical screening. An overview of the V-SYNTHES algorithm allowing effective screening of more than 31 billion compounds in REAL Space or even larger chemical spaces, while performing enumeration and docking of only small fractions of molecules. The algorithm, illustrated here using a two-component reaction based on a sulfonamide scaffold with R 1 and R 2 synthons, can be applied to hundreds of optimized two-component, three-component or more-component reactions by iteratively repeating steps 3 and 4 until fully enumerated molecules optimally fitting the target pocket are obtained. PAINS, pan assay interference compounds. Initially applied to discover new chemotypes for cannabinoid receptor CB 2 antagonists, V-SYNTHES has shown a hit rate of 23% for submicromolar ligands, which exceeded the hit rate of standard VLS by fivefold, while requiring about 100 times fewer computational resources 26 . A similar hit rate was found for the ROCK1 kinase screening in the same study, with one hit in the low nanomolar range 26 . V-SYNTHES is being applied to other therapeutically relevant targets with well-defined pocket structures. A similar approach, chemical space docking, has been implemented by BioSolveIT, so far for two-component reactions 67 . This method is even faster, as it docks individual building block fragments and then enumerates them with scaffolds and other synthons. However, there are trade-offs for the extra speed: docking of smaller fragments without scaffolds is less reliable, and their reactive groups often have dissimilar properties from the reaction product. This may introduce strong receptor interactions that are irrelevant to the final compound and can misguide the fragment selection. 
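Stepping back to the workflow of Fig. 3, the hierarchical logic of V-SYNTHES can be sketched schematically in Python. This is a toy illustration rather than the authors' implementation: dock_score is a random stand-in for a real docking engine and the synthon catalogues are tiny compared with REAL Space.

import random

random.seed(0)

def dock_score(fragment):
    # Stub for a structure-based docking engine; returns a pseudo binding
    # score (lower is better). A real engine would dock the 3D structure.
    return random.uniform(-12.0, 0.0)

def keep_best(candidates, fraction):
    # Dock every candidate once and keep the best-scoring fraction.
    ranked = sorted(candidates, key=dock_score)
    return ranked[: max(1, int(len(candidates) * fraction))]

scaffold = "SULFONAMIDE"
r1_synthons = [f"R1-{i}" for i in range(5000)]  # toy catalogues; REAL Space
r2_synthons = [f"R2-{i}" for i in range(5000)]  # uses ~137,000 building blocks

# Round 1: enumerate R1 with the R2 position capped by a methyl; keep top 1%.
round1 = [f"{scaffold}({r1},CH3)" for r1 in r1_synthons]
best_r1 = keep_best(round1, 0.01)

# Round 2: expand only the surviving R1 fragments at the R2 position.
round2 = [f"{scaffold}({frag.split('(')[1].split(',')[0]},{r2})"
          for frag in best_r1 for r2 in r2_synthons]
best_full = keep_best(round2, 0.001)  # final candidates for post-processing

print(len(round1) + len(round2), "dockings instead of",
      len(r1_synthons) * len(r2_synthons))  # 255,000 instead of 25,000,000

The roughly 100-fold saving printed at the end mirrors the scaling reported for V-SYNTHES, since only the top-ranked fragments are ever expanded into full molecules.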
Such misleading fragment poses are especially likely for cycloaddition reactions and three-component scaffolds, which need further validation in chemical space docking. Apart from supporting the abundance, chemical diversity and potential quality of hits, structure-based modular approaches are especially effective in identifying hits with robust chemical novelty, as they (1) do not rely on information for existing ligands and (2) identify ligands that have never been synthesized before. This is an important factor in assuring the patentability of the chemical matter for hit compounds and the lead series arising from gigascale screening. Moreover, thousands of easily synthesizable analogues assure extensive SAR-by-catalogue for the best hits, which, for example, enabled approximately 100-fold potency and selectivity improvement for the CB 2 V-SYNTHES hits 26 . Availability of the multilayer on-demand chemical space extensions (for example, supported by MADE building blocks 47 ) can also greatly streamline the next steps in lead optimization through ‘virtual MedChem’, thus reducing extensive custom synthesis. Data-driven approaches and DL In the era of AI-based face recognition, ChatGPT and AlphaFold 68 , there is enormous interest in applications of data-driven DL approaches across drug discovery, from target identification to lead optimization to translational medicine (as reviewed in refs. 69 , 70 , 71 ). Data-driven approaches have a long history in drug discovery, in which ML algorithms such as support vector machines, random forests and neural networks have been used extensively to predict ligand properties and on-target activities, albeit with mixed results. Accurate quantitative structure–property relationship (QSPR) models can predict physicochemical (for example, solubility and lipophilicity) and pharmacokinetic (for example, bioavailability and blood–brain barrier penetration) properties, for which large and broad experimental datasets for model training are available and continue to grow 72 , 73 , 74 . ML is also implemented in many quantitative SAR (QSAR) algorithms 75 , in which the training set and the resulting models are focused on a given target and a chemical scaffold, helping to guide lead affinity and potency optimization. Methods based on extensive ligand–target binding datasets, chemical similarity clustering and network-based approaches have also been suggested for drug repurposing 76 , 77 . The advent of DL takes data-driven models to the next level, allowing analysis of much larger and more diverse datasets while deriving more complicated non-linear relationships, with vast literature describing specific DL methodologies and applications to drug discovery 27 , 70 . By its ‘learning from examples’ nature, AI requires comprehensive ligand datasets for training the predictive models. For QSPR, large public and private databases have been accumulated, with various properties such as solubility, lipophilicity or in vitro proxies for oral bioavailability and brain permeability experimentally measured for many thousands of diverse compounds, allowing prediction of these properties in a broad range of new compounds. The quality of QSAR models, however, differs for different target classes depending on data availability, with the most advances achieved for the kinase superfamily and aminergic GPCRs. An unbiased benchmark of the best ML QSAR models was given by the recent IDG-DREAM Drug-Kinase Binding Prediction Challenge with the participation of more than 200 experts 78 . 
The top predictive models in this blind assessment included kernel learning, gradient boosting and DL-based algorithms. The top-performing model (from team Q.E.D) used kernel regression, protein sequence similarity and affinity values of more than 60,000 compound–kinase pairs between 13,608 compounds and 527 kinases from the ChEMBL 79 and Drug Target Commons 80 databases as the training data. The best DL model used as many as 900,000 experimental ligand-binding data points for training, but still trailed the much simpler kernel model in performance. The best models achieved a Spearman rank coefficient of 0.53 with a root-mean-square error of 0.95 for the predicted versus experimental p K d values in the challenge set. Such accuracy was found to be on par with the accuracy and recall of single-point experimental assays for kinase inhibition, and may be useful in screening for initial hits for less explored kinases and in guiding lead optimization. Note, however, that the kinase family is unique, as it is the largest class of more than 500 targets, all possessing similar orthosteric binding pockets and sharing high cross-selectivity. The distant second family with systematic cross-reactivity comprises about 50 aminergic GPCRs, whereas other GPCR families and other cross-reactive protein families are much smaller. The performance and generalizability of ML and DL methods for these and other targets remain to be tested. The development of broadly generalizable or even universal models is the key aspiration of AI-driven drug discovery. One of the directions here is to extract general models of binding affinities (binding score functions) from data on both known ligand activities and corresponding protein–ligand 3D structures, for example, collected in the PDBbind database 81 or obtained from docking. Such models explore various approaches to represent the data and network architectures, including spatial graph-convolutional models 82 , 83 , 3D deep convolutional neural networks 84 , 85 or their combinations 86 . A recent study, however, found that, regardless of neural network architecture, an explicit description of non-covalent intermolecular interactions in the PDBbind complexes does not provide any statistical advantage compared with simpler approximations of only ligand or only receptor that omit the interactions 87 . Therefore, the good performance of DL models based on PDBbind relies on memorizing similar ligands and receptors, rather than on capturing general information about their binding. One possible explanation for this phenomenon is that the PDBbind database does not have an adequate representation of ‘negative space’, that is, ligands with suboptimal interaction patterns to enforce the training. This mishap exemplifies the need for a better understanding of the behaviour of DL models and their dependence on the training data, which is widely recognized in the AI community. It has been shown that DL models, especially those based on limited datasets lacking negative data, are prone to overtraining and spurious performance, sometimes leading to whole classes of models being deemed ‘useless’ 88 or severely biased by subjective factors defining the training dataset 89 . Statistical tools are being developed to define the applicability range and carefully validate the performance of the models. One of the proposed concepts is the predictability, computability and stability framework for ‘veridical data science’ 90 . 
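Looping back to the kernel-based models that topped the IDG-DREAM benchmark, a toy analogue (not team Q.E.D's actual pipeline) can be written with scikit-learn: featurize compound–kinase pairs, fit a kernel ridge regression on pK d values and evaluate with the same Spearman/RMSE metrics. The data here are synthetic.

import numpy as np
from scipy.stats import spearmanr
from sklearn.kernel_ridge import KernelRidge
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic stand-ins: 2,000 compound-kinase pairs, each a concatenation of a
# 64-bit compound fingerprint and a 32-dimensional kinase similarity profile;
# the real challenge models trained on >60,000 ChEMBL/DTC pairs.
X = np.hstack([rng.integers(0, 2, (2000, 64)), rng.normal(size=(2000, 32))])
w = rng.normal(size=X.shape[1])
y = 7.0 + 0.05 * (X @ w) + rng.normal(scale=0.5, size=2000)  # pseudo pKd values

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = KernelRidge(kernel="rbf", alpha=1.0, gamma=0.01).fit(X_tr, y_tr)

pred = model.predict(X_te)
rho, _ = spearmanr(y_te, pred)
rmse = float(np.sqrt(np.mean((y_te - pred) ** 2)))
# Challenge-reported values on real data were rho = 0.53 and RMSE = 0.95.
print(f"Spearman rho = {rho:.2f}, RMSE = {rmse:.2f}")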
Adequate selection of quality data has been specifically identified by leaders of the AI community as the major requirement for closing the ‘production gap’, or the inability of ML models to succeed when they are deployed in the real world, thus calling for a data-centric approach to AI 91 , 92 . There have also been attempts to develop tools to make AI ‘explainable’, that is, able to formulate some general trends in the data, specifically in drug discovery applications 93 . Despite these challenges and limitations, AI is already starting to make a substantial impact on drug discovery, with the first AI-based drug candidates making it into preclinical and clinical studies. For kinases, AI-driven compounds were reported as potent and effective in vivo inhibitors of the receptor tyrosine kinase DDR1, which is involved in fibrosis 9 . Phase I clinical trials have been announced for ISM001-055 (also known as INS018_055) for the treatment of idiopathic pulmonary fibrosis 10 , although the identity of the compound and its target has not been disclosed. For GPCRs, AI-driven compounds targeting 5-HT 1A , dual 5-HT 1A –5-HT 2A and A 2A receptors have recently entered clinical trials, providing further support for the AI-driven drug discovery concept. These first success stories are coming from the kinase and GPCR families with already well-studied pharmacology, and the compounds show close chemical similarity to known high-affinity scaffolds 94 . It is important for the next generation of DL drug candidates to improve in novelty and applicability range. Hybrid computational approaches As discussed above, physics-based and data-driven approaches have distinct advantages and limitations in predicting ligand potency. Structure-based docking predictions are naturally generalizable to any target with 3D structures and can be more accurate, especially in eliminating false positives as the main challenge of screening. Conversely, data-driven methods may work in lieu of structures and can be faster, especially with GPU acceleration, although they struggle to generalize beyond data-rich classes of targets. Therefore, there are numerous ongoing efforts to combine physics-based and data-driven approaches in some synergistic ways in general 95 , and in drug discovery specifically 96 . In virtual screening approaches, a synergistic use of physics-based docking with data-based scoring functions may be highly beneficial. Moreover, if the physics-based and data-based scoring functions are relatively independent and both generate enrichment in the selected focused libraries, their combination can reduce the false-positive rates and improve the quality of the hits. This synergy is reflected in the latest D3R Grand Challenge 4 results for ligand IC 50 predictions 59 , in which the top methods that used a combination of both physics-based and ML scoring outperformed those that did not use ML. Going forward, thorough benchmarking of physics-based, ML and hybrid approaches will be a key focus of a new Critical Assessment of Computational Hit-finding Experiments (CACHE), which will assess five specific scenarios relevant to practical hit and lead discovery and optimization 97 . At a deeper level, the results of accurate physics-based docking (in addition to experimental data, for example, from PDBbind 81 ) can be used to train generalized graph or 3D DL models predicting ligand–receptor affinity. This would help to markedly expand the training dataset and balance positive and negative (suboptimal binding) examples, which is important to avoid the overtraining issues described in ref. 87 . Such DL-based 3D scoring functions for predicting molecular binding affinity from a docked protein−ligand complex are being developed and benchmarked, most recently RTCNN 98 , although their practical utility remains to be demonstrated. To expand the range of structure-based docking applicability to those targets lacking high-resolution structures, it is also tempting to use AI-derived AlphaFold2 (refs. 99 , 100 ) or RoseTTAFold 101 3D models, which already show utility in many applications, including protein–protein and protein–peptide docking 102 . Traditional homology models based on close protein similarity, especially when refined with known ligands 103 , have been used in small-molecule docking and virtual screening 104 , therefore AlphaFold2 is expected to further expand the scope of structural modelling and its accuracy. In a recent report, AlphaFold2 models, augmented by other AI approaches, helped to identify a cyclin-dependent kinase 20 (CDK20) small-molecule inhibitor, although at a modest affinity of 8.9 μM (ref. 105 ). More general benchmarking of the performance of AlphaFold2 models in virtual screening, however, gives mixed results. In a benchmark focused on targets with existing crystal structures, most AlphaFold2 models had to be cleaned of loops blocking the binding pocket and/or augmented with known ions or other cofactors to achieve reasonable enrichment of hits 106 . For the more practical cases of targets lacking experimental structures, especially for target classes with less obvious structural homologies in the ligand-binding pocket, the performance of AlphaFold2 models in small-molecule docking showed disappointing results in recent assessments for GPCR and antibacterial targets 107 , 108 . The recently developed AlphaFill approach 109 for ‘transplanting’ small-molecule cofactors and ligands from PDB structures to homologous AlphaFold2 models can potentially help to validate and optimize these models, although further assessment of their utility for docking and virtual screening is ongoing. To speed up virtual screening of ultra-large chemical libraries, several groups have suggested hybrid iterative approaches, in which the results of structure-based docking of a sparse library subset are used to train ML models, which are then used to filter the whole library to further reduce its size. These methods, including MolPal 25 , Active Learning 110 and DeepDocking 111 , report as much as a 14–100-fold reduction in the computational cost for libraries of 1.4 billion compounds, although it is not clear how they would scale to rapidly growing chemical spaces. We should emphasize here that scoring functions in fast-docking algorithms and ML models are primarily designed and trained to effectively separate potential target binders from non-binders, although they are not very accurate in predictions of binding affinities or potencies. For more accurate potency predictions, the smaller focused library of candidate binders selected by the initial AI or docking-based screening can be further analysed and ranked using more elaborate physics-based tools, including free energy perturbation methods for relative 112 and absolute 113 , 114 , 115 free energy of ligand binding. 
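A minimal sketch of the iterative docking-plus-surrogate idea behind methods such as MolPal and DeepDocking follows. It is schematic rather than any specific published implementation: a synthetic linear 'docking landscape' stands in for real calculations, and a random forest serves as the cheap surrogate.

import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(1)

N = 100_000                                   # stand-in for a gigascale library
features = rng.normal(size=(N, 32))           # placeholder ligand descriptors
true_scores = features @ rng.normal(size=32)  # hidden docking landscape

def dock(idx):
    # Stub for expensive docking: the true score plus some noise.
    return true_scores[idx] + rng.normal(scale=0.3, size=len(idx))

budget, n_rounds = 1000, 4
labelled_idx = rng.choice(N, budget, replace=False)
labelled_y = dock(labelled_idx)

for _ in range(n_rounds - 1):
    surrogate = RandomForestRegressor(n_estimators=100, random_state=0)
    surrogate.fit(features[labelled_idx], labelled_y)
    ranked = np.argsort(surrogate.predict(features))  # lower score = better
    seen = set(labelled_idx.tolist())
    new_idx = np.array([i for i in ranked if i not in seen][:budget])
    labelled_idx = np.concatenate([labelled_idx, new_idx])
    labelled_y = np.concatenate([labelled_y, dock(new_idx)])

# Fraction of the true top-1,000 found after docking only 4% of the library:
top_true = set(np.argsort(true_scores)[:1000].tolist())
hit = len(top_true & set(labelled_idx.tolist())) / len(top_true)
print(f"docked {len(labelled_idx):,} of {N:,}; recovered {hit:.0%} of top-1,000")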
Training on such docking-derived data would help to markedly expand the training dataset and balance positive and negative (suboptimal binding) examples, which is important to avoid the overtraining issues described in ref. 87. Such DL-based 3D scoring functions for predicting molecular binding affinity from a docked protein–ligand complex are being developed and benchmarked, most recently RTCNN 98, although their practical utility remains to be demonstrated. To expand the range of structure-based docking applicability to targets lacking high-resolution structures, it is also tempting to use AI-derived AlphaFold2 (refs. 99, 100) or RoseTTAFold 101 3D models, which already show utility in many applications, including protein–protein and protein–peptide docking 102. Traditional homology models based on close protein similarity, especially when refined with known ligands 103, have been used in small-molecule docking and virtual screening 104, so AlphaFold2 is expected to further expand the scope and accuracy of structural modelling. In a recent report, AlphaFold2 models, augmented by other AI approaches, helped to identify a cyclin-dependent kinase 20 (CDK20) small-molecule inhibitor, although at a modest affinity of 8.9 μM (ref. 105). More general benchmarking of the performance of AlphaFold2 models in virtual screening, however, gives mixed results. In a benchmark focused on targets with existing crystal structures, most AlphaFold2 models had to be cleaned of loops blocking the binding pocket and/or augmented with known ions or other cofactors to achieve reasonable enrichment of hits 106. For the more practical cases of targets lacking experimental structures, especially for target classes with less obvious structural homologies in the ligand-binding pocket, AlphaFold2 models showed disappointing small-molecule docking performance in recent assessments for GPCR and antibacterial targets 107, 108. The recently developed AlphaFill approach 109 for 'transplanting' small-molecule cofactors and ligands from PDB structures to homologous AlphaFold2 models can potentially help to validate and optimize these models, although further assessment of their utility for docking and virtual screening is ongoing. To speed up virtual screening of ultra-large chemical libraries, several groups have suggested hybrid iterative approaches, in which the results of structure-based docking of a sparse library subset are used to train ML models, which are then used to filter the whole library and further reduce its size. These methods, including MolPal 25, Active Learning 110 and DeepDocking 111, report as much as a 14- to 100-fold reduction in computational cost for libraries of 1.4 billion compounds, although it is not clear how they would scale to rapidly growing chemical spaces. We should emphasize here that scoring functions in fast-docking algorithms and ML models are primarily designed and trained to effectively separate potential target binders from non-binders; they are not very accurate in predicting binding affinities or potencies. For more accurate potency predictions, the smaller focused library of candidate binders selected by the initial AI- or docking-based screening can be further analysed and ranked using more elaborate physics-based tools, including free energy perturbation methods for relative 112 and absolute 113, 114, 115 free energies of ligand binding.
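The iterative docking–ML filtering loop behind such MolPal-style methods can be sketched as follows. This is a minimal illustration under stated assumptions: `dock` stands in for any docking oracle, a random forest on toy fingerprints replaces the published surrogate models, and lower scores are treated as better.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

def dock(fps):
    """Stand-in docking oracle: returns a synthetic score per compound
    (in real use this would be a structure-based docking program)."""
    rng = np.random.default_rng(42)
    return fps @ rng.normal(size=fps.shape[1]) + rng.normal(0, 0.3, len(fps))

rng = np.random.default_rng(0)
library = rng.integers(0, 2, size=(100_000, 128)).astype(float)  # toy fingerprints

# Round 0: dock a small random subset and record what has been docked.
idx = rng.choice(len(library), 1_000, replace=False)
scores = dock(library[idx])
seen = set(idx.tolist())

for _ in range(3):  # a few acquisition rounds
    # Train a surrogate on everything docked so far, then score the library.
    model = RandomForestRegressor(n_estimators=100, n_jobs=-1).fit(library[idx], scores)
    preds = model.predict(library)
    # Greedy acquisition: dock the top-predicted compounds not yet docked.
    ranked = [i for i in np.argsort(preds) if i not in seen][:1_000]
    new_scores = dock(library[ranked])
    idx = np.concatenate([idx, ranked])
    scores = np.concatenate([scores, new_scores])
    seen.update(ranked)

top_hits = idx[np.argsort(scores)[:100]]  # best docked compounds found so far
```

Because only a few thousand compounds ever reach the expensive docking step, the loop explains the reported order-of-magnitude cost reductions, at the price of trusting the surrogate not to discard unusual chemotypes.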
Although these free energy methods are much slower, GPU-accelerated calculations 28 hold the potential for their broader application in post-processing of virtual screening campaigns, to further enrich the hit rates for high-affinity candidates (Fig. 2), as well as in lead optimization stages. Future challenges Further growth of readily accessible chemical spaces The advent of fast and practical methods for screening gigascale chemical spaces for drug discovery stimulates further growth of these on-demand spaces, supporting better diversity and overall quality of identified hits and leads. Specifically developed for V-SYNTHES screening, the xREAL extension of Enamine REAL Space now comprises 173 billion compounds 116, and can be further expanded to 10^15 compounds and beyond by tapping into an even larger building block set (for example, the 680 million MADE building blocks 47), by including four-component or five-component scaffolds, and by using new click-like chemistries as they are discovered. Real-world testing of MADE-enhanced REAL Space and other commercial and proprietary chemical spaces will allow a broader assessment of their synthesizability and overall utility 38, 117, 118. In parallel, specialized ultra-large libraries can be built for important scaffolds underrepresented in general-purpose on-demand spaces; for example, screening of a virtual library of 75 million easily synthesizable tetrahydropyridines recently yielded potent agonists for the 5-HT2A receptor 119. Further growth of on-demand chemical space size and diversity is also supported by the recent development of new robust reactions for the click-like assembly of building blocks. As well as 'classical' azide-alkyne cycloaddition click chemistry 120, recognized by the 2022 Nobel Prize in chemistry 121, and optimized click-like reactions including SuFEx 122, more recent developments such as Ni-electrocatalysed doubly decarboxylative cross-coupling 123 show promise. Other carbon–carbon bond-forming reactions use methyliminodiacetic acid boronates for Csp2–Csp2 couplings 124 and, most recently, tetramethyl N-methyliminodiacetic acid boronates 125 for stereospecific Csp3–C bond formation. Each of these reactions, applied iteratively, can generate new on-demand chemical spaces of billions of diverse compounds while operating with a limited number of building blocks. Similar to the routinely used automatic assembly of amino acids in peptide synthesis, fully automated processes could be carried out with robots capable of producing a library of drug-like compounds on demand using combinations of a few thousand diverse building blocks 126, 127, 128. Such machines are already working, although scaling up the production of thousands of specialized building blocks remains the bottleneck. The development of more robust generative chemical spaces can also be supported by new computational approaches in synthetic chemistry, for example, predictions of new iterative reaction sequences 129 or of synthetic routes and feasibility from DL-based retrosynthetic analysis 130. In generative models, synthesizability predictions can be coupled with predictions of potency and other properties towards higher levels of automated chemical design 131.
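The combinatorial arithmetic behind such growth is easy to appreciate: multiplying the number of compatible building blocks per attachment point across scaffold positions quickly reaches the 10^15 scale. The snippet below uses made-up block counts, not Enamine's actual inventory.

```python
# Rough size of a combinatorial on-demand space: for each scaffold type,
# raise the number of compatible building blocks per position to the
# number of positions. The count below is illustrative only.
blocks_per_position = 150_000  # hypothetical compatible blocks per position

for n_components in (2, 3, 4, 5):
    size = blocks_per_position ** n_components
    print(f"{n_components}-component scaffold: ~{size:.1e} compounds")
# 2 -> ~2.3e10, 3 -> ~3.4e15, 4 -> ~5.1e20, 5 -> ~7.6e25
```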
Along these lines, generative adversarial networks combined with reinforcement learning (GAN-RL) were recently used to predict the synthetic feasibility, novelty and biological activity of compounds, enabling an iterative cycle of in silico optimization, synthesis and in vitro testing of the ligands 50, 132. When applied within a set of well-established reactions and pharmacologically explored classes of targets, these approaches already yield useful hits and leads, and have produced clinical candidates 50, 132. However, the wider potential of automated chemical design concepts and robotic synthesis in drug discovery remains to be seen. Hybrid in vitro–in silico approaches Although blind benchmarking and recent prospective screening success stories for a growing number of targets support the utility of modern computational tools, there are whole classes of challenging targets for which existing in silico screening approaches are not expected to fare very well by themselves. Some of the hardest cases are targets with cryptic or shallow pockets that have to open or undergo a substantial induced fit to engage a ligand, as often found when targeting allosteric sites, for example, in kinases or GPCRs, or protein–protein interactions in signalling pathways. Although bioinformatics and molecular dynamics approaches can help to detect and analyse allosteric and cryptic pockets 133, computational tools alone are often insufficient to support ligand discovery for such challenging sites. Cryptic and shallow pockets, however, have been handled rather successfully by fragment-based drug discovery approaches, which start with experimental screening for the binding of small fragments. The initial hits are found by very sensitive methods, such as Biacore, NMR, X-ray crystallography 134, 135 and potentially cryo-electron microscopy 136, which can reliably detect weak binding, usually in the 10–100-μM range. The initial screening of the target can also be performed with fragments decorated with a chemical warhead enabling proximity-driven covalent attachment of a low-affinity ligand 137. In either case, elaboration of initial fragment hits into full high-affinity ligands is the key bottleneck of fragment-based drug discovery, requiring a major effort to 'grow' the fragment or link two or more fragments together. This is usually an iterative process involving custom ligand design and synthesis that can take many years 134, 138. At the same time, structure-based virtual screening can help to computationally elaborate the fragments to match the experimentally identified conformation of the target binding pocket. This approach is most cost-effective when fragment hits are identified from on-demand space building blocks or their close analogues, allowing easy elaboration in the same on-demand space 139. Recent examples of hybrid fragment-based computational design approaches targeting SARS-CoV-2 inhibitors highlight the challenges presented by such targets and allow head-to-head comparisons with ultra-large VLS. One of the studies was aimed at the SARS-CoV-2 NSP3 conserved macrodomain enzyme (Mac1), a target critical for the pathogenesis and lethality of the virus. Building on the crystallographic detection of low-affinity (180 μM) fragments weakly binding Mac1 (ref. 139), merging of the fragments identified a 1-μM hit, quickly optimized by catalogue synthesis to a 0.4-μM lead 140.
In the same study, an ultra-large screening of the 400-million-compound REAL database identified more than 100 new diverse chemotypes of drug-like ligands, with follow-up SAR-by-catalogue optimization yielding a 1.7-μM lead 140. For the SARS-CoV-2 main protease Mpro, the COVID Moonshot initiative published the results of a crystallographic screening of 1,500 small fragments, with 71 hits bound in different subpockets of the shallow active site, although none of them showed in vitro inhibition of the protease even at 100 μM (ref. 141). Numerous groups crowdsourced the follow-up computational design and screening of merged and grown fragments, helping to discover several SAR series, including a non-covalent Mpro inhibitor with an enzymatic IC50 of 21 μM. Further optimization by both structure-based and AI-driven computational approaches, which used more than 10 million MADE Enamine building blocks, led to the discovery of preclinical candidates with cell-based IC50 values in the approximately 100-nM range, approaching the potency of nirmatrelvir 65. The enormous scale, urgency and complexity of this Moonshot effort, with more than 2,400 compounds synthesized on demand and measured in more than 10,000 assays, are unprecedented and highlight the challenges of de novo design of non-covalent inhibitors of Mpro. Beyond the Moonshot initiative, a flood of virtual screening efforts yielded mostly disappointing results 62; for example, ebselen, which was proposed in an early virtual screen 142, failed in clinical trials. Most of these studies, however, screened small ligand sets focused on repurposing existing drugs, lacked experimental support and used the first structure of Mpro, solved in a covalent ligand complex (PDB ID: 6LU7), which was suboptimal for docking non-covalent molecules 142. In comparison, several studies screening ultra-large libraries were able to identify de novo non-covalent Mpro inhibitors in the 10–100-μM range 24, 62, 63, 143, while experimentally testing only a few hundred compounds synthesized on demand. One of these studies further elaborated on these weak VLS hits by testing their Enamine on-demand analogues, revealing a lead with IC50 = 1 μM in cell-based assays and validating its non-covalent binding crystallographically 63. Another study, based on a later, more suitable non-covalent co-crystal structure of Mpro (PDB ID: 6W63), used an ultra-large docking and optimization strategy to discover even more potent 38-nM lead compounds 64. Note that, although the results of the initial ultra-large screenings for Mpro were modest, they were on par with the much more elaborate and expensive efforts of the Moonshot hybrid approach, with simple on-demand optimization leading to preclinical candidates of similar quality. These examples suggest that even for challenging shallow pockets, structure-based virtual screening can often provide a viable alternative when performed at gigascale and supported by accurate structures and sufficient testing and optimization effort. Outlook towards computer-driven drug discovery With all the challenges and caveats, the emerging capability of in silico tools to effectively tap into the enormous abundance and diversity of drug-like on-demand chemical spaces at the key target-to-hit-to-lead-to-clinic stages makes it tempting to call for the transformation of the DDD ecosystem from computer-aided to computer-driven 144 (Fig. 4).
At the early hit identification stage, ultra-scale virtual screening approaches, both structure-based and AI-based, are becoming mainstream in providing fast and cost-effective entry points into drug discovery campaigns. At the hit-to-lead stage, more elaborate potency prediction tools, such as free energy perturbation and AI-based QSAR, often guide the rational optimization of ligand potency. Beyond on-target potency and selectivity, various data-driven computational tools are routinely used in multiparameter optimization of the lead series, which includes ADMET and PK properties. Of note, chemical spaces of more than 10^10 diverse compounds are likely to contain millions of initial hits for each target 20 (Box 1), thousands of potent and selective leads and, with some limited medicinal chemistry in the same highly tractable chemical space, drug candidates ready for preclinical studies. To harness this potential, the computational tools need to become more robust and better integrated into the overall discovery pipeline to ensure their impact in translating initial hits into preclinical and clinical development. Fig. 4: Computationally driven drug discovery. Schematic comparison of the standard HTS plus custom synthesis-driven discovery pipeline versus the computationally driven pipeline. The latter is based on easily accessible on-demand or generative virtual chemical spaces, as well as structure-based and AI-based computational tools that streamline each step of the drug discovery process. Full size image One should not forget here that computational models, however useful or accurate, can never ensure that all predictions are correct. In practice, the best virtual screening campaigns result in 10–40% of candidate hits being confirmed in experimental validation, whereas the best affinity predictions used in optimization rarely have an accuracy better than a 1 kcal mol−1 root-mean-square error. Similar limitations apply to current computational models predicting ADMET and PK properties. Therefore, computational predictions always need experimental validation in robust in vitro and in vivo assays at each step of the pipeline. At the same time, experimental testing of predictions also provides data that can feed back into improving the quality of the models by expanding their training datasets, especially for ligand property predictions. Thus, DL-based QSPR models will greatly benefit from further accumulating data from cell-permeability assays such as Caco-2 and MDCK, as well as from new advanced technologies such as organs-on-a-chip or functional organoids, to provide better estimates of ADMET and PK properties without cumbersome in vivo experiments. The ability to train ADMET and PK models with in vitro assay data representing the most relevant species for drug development (typically mouse, rat and human) would also help to address species variability, a major challenge for successful translational studies. All of this creates a virtuous cycle for improving computational models to the point at which they can drive compound selection for most DDD end points. When combined with more accurate in vitro testing, this may reduce and eventually eliminate animal testing requirements (as recently indicated by the FDA) 145.
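To put the 1 kcal mol−1 figure in perspective, the standard relation ΔG = RT ln Kd implies that a free-energy error translates into a multiplicative uncertainty in affinity; the short calculation below is textbook thermodynamics rather than anything from the text.

```python
import math

R = 1.987e-3  # gas constant, kcal mol^-1 K^-1
T = 298.0     # temperature, K

def fold_error(rmse_kcal):
    """Multiplicative uncertainty in Kd implied by a free-energy RMSE:
    exp(RMSE / RT), from dG = RT * ln(Kd)."""
    return math.exp(rmse_kcal / (R * T))

for rmse in (0.5, 1.0, 1.5):
    print(f"{rmse} kcal/mol RMSE  ->  ~{fold_error(rmse):.1f}-fold in Kd")
# 1 kcal/mol corresponds to roughly a 5-fold uncertainty in binding affinity,
# which is why experimental validation remains essential at every step.
```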
Building hybrid in silico–in vitro pipelines with easy access to the enormous on-demand chemical space at all stages of the gene-to-lead process can help to generate abundant pools of diverse lead compounds with optimal potency, selectivity and ADMET and PK properties, resulting in less compromise in multiparameter optimization for clinical candidates. Running such data-rich, computationally driven pipelines requires overarching data management tools for drug discovery, many of which are being implemented in pharma and academic DDD centres 146, 147. Building computationally driven pipelines will also help to reveal weak or missing links at which new approaches and additional data may be needed to generate improved models, thus helping to fill the remaining computational gaps in the DDD pipeline. Provided this systematic integration continues, computer-driven ligand discovery has great potential to reduce the entry barriers to generating molecules for numerous lines of inquiry, whether in vivo probes for new and understudied targets 148, tools for polypharmacology and pluridimensional signalling, or drug candidates for rare diseases and personalized medicine.
Artificial intelligence can generate poems and essays, create responsive game characters, analyze vast amounts of data and detect patterns that the human eye might miss. Imagine what AI could do for drug discovery, traditionally a time-consuming, expensive process from the bench to the bedside. Experts see great promise in a complementary approach using AI and structure-based drug discovery, a computational method that relies on knowledge of the 3D structures of biological targets. We recently caught up with Vsevolod "Seva" Katritch, associate professor of quantitative and computational biology and chemistry at the USC Dornsife College of Letters, Arts and Sciences and the USC Michelson Center for Convergent Bioscience. Katritch is the co-director of the Center for New Technologies in Drug Discovery and Development (CNT3D) at the USC Michelson Center and the lead author of a new review paper published in Nature. The paper, co-authored by USC research scientist Anastasiia Sadybekov, describes how computational approaches will streamline drug discovery. We're on the cusp of major advances in drug discovery. What brings us to this moment? There has been a seismic shift in computational drug discovery in the last few years: an explosion of available data on clinically relevant human protein structures and the molecules that bind them, enormous chemical libraries of drug-like molecules, almost unlimited computing power and new, more efficient computational methods. The newest excitement is about AI-based drug discovery, but what's even more powerful is a combination of AI and structure-based drug discovery, with both approaches synergistically complementing each other. How has drug discovery been done in the past? Traditional drug discovery is mostly a trial-and-error venture. It's slow and expensive, taking an average of 15 years and $2 billion. There's a high attrition rate at every step, from target selection to lead optimization. The biggest opportunities for time and cost savings reside in the earlier discovery and preclinical stages. What takes place in the early stage? Let's use a lock-and-key analogy. The target receptor is the lock, and the drug that blocks or activates this receptor is a key for this lock. (Of course, the caveat is that in biology nothing is black or white, so some of the working keys switch the lock better than others, and the lock is a bit malleable too.) Here's an example. Lipitor, the bestselling drug of all time, targets an enzyme involved in the synthesis of cholesterol in the liver. A receptor on the enzyme is the lock. Lipitor is the key, fitting into the lock and blocking the activity of the enzyme, triggering a series of events that decrease blood levels of bad cholesterol. Now, computational approaches allow us to digitally model many billions and even trillions of virtual keys and predict which ones are likely to be good keys. Only a few dozen of the best candidate keys are chemically synthesized and tested. This sounds much more efficient. If the model is good, this process yields better results than traditional trial-and-error testing of millions of random keys. It reduces the physical requirements for synthesizing and testing compounds more than a thousandfold, while often arriving at better results, as demonstrated by our work and that of many other groups in this field. Can you explain the difference between the two main computational approaches, structure-based and AI-based?
Following the lock-and-key analogy, the structure-based approach takes advantage of our detailed understanding of the lock's structure. If the 3D, physical structure of the lock is known, we can use virtual methods to predict the structure of a key that matches the lock. The machine learning, or AI-based, approach works best when many keys are already known for our target lock or other similar locks. AI can then analyze this collection of similar locks and keys and predict the keys that are most likely to fit our target. It does not need exact knowledge of the lock structure, but it needs a large collection of relevant keys. Thus, the structure-based and AI-based approaches are applicable in different cases and complement each other. Are there any computational limits to this process? When testing billions and trillions of virtual compounds on cloud computers, computational costs themselves can become a bottleneck. A modular, giga-scale screening technology allows us to speed up the process and reduce costs dramatically by virtually predicting good parts of the key and combining them, in effect building the key from several parts. For a 10 billion-compound library, this drops the computational costs from millions of dollars to hundreds, and it allows further scale-ups to trillions of compounds.
10.1038/s41586-023-05905-z
Biology
Tierra del Fuego: Marine ecosystems from 6,000 to 5,000 years ago
Maria Bas et al. Predicting habitat use by the Argentine hake Merluccius hubbsi in a warmer world: inferences from the Middle Holocene, Oecologia (2020). DOI: 10.1007/s00442-020-04667-z Journal information: Oecologia
http://dx.doi.org/10.1007/s00442-020-04667-z
https://phys.org/news/2020-07-tierra-del-fuego-marine-ecosystems.html
Abstract Fish skeletal remains recovered from two archaeological sites dated to the Middle Holocene of Tierra del Fuego (Argentina) were analysed to describe habitat use patterns by hake in the past and predict changes in a warmer world. Mitochondrial DNA was successfully extracted and amplified from 42 of the 45 first vertebrae from ancient hake, and phylogenetic analysis assigned all haplotypes to Argentine hake ( Merluccius hubbsi ). According to osteometry, the Argentine hake recovered from the archaeological site were likely adults ranging 37.2–58.1 cm in standard length. C and N stable isotope analysis showed that Argentine hake currently use foraging grounds deeper than those of Patagonian blenny and pink cusk-eel. Argentine hake, however, had a much broader isotopic niche during the Middle Holocene, when a large part of the population foraged much shallower than contemporary pink cusk-eel. The overall evidence suggests the presence of large numbers of Argentine hake onshore Tierra del Fuego during the Middle Holocene, which allowed exploitation by hunter-gatherer-fisher groups devoid of fishing technology. Interestingly, average SST off Tierra del Fuego during the Middle Holocene was higher than currently (11 °C vs 7 °C) and matched SST in the current southernmost onshore spawning aggregations, at latitude 47 °S. This indicates that increasing SST resulting from global warming will likely result in an increased abundance of adult Argentine hake onshore Tierra del Fuego, as during the Middle Holocene. Furthermore, stable isotope ratios from mollusc shells confirmed a much higher marine primary productivity off Tierra del Fuego during the Middle Holocene. Introduction Global warming will modify fish distribution and abundance around the world, with tropical and subtropical species expanding poleward (Perry et al. 2005; Hiddink and Ter Hofstede 2008; Simpson et al. 2011), while those inhabiting colder regions are expected to change their depth range (Perry et al. 2005; Simpson et al. 2011). As a result, food web structure and dynamics are also expected to change (Hoegh-Guldberg and Bruno 2010; Simpson et al. 2011; Bas et al. 2019). Making precise predictions about the consequences of global warming, however, is challenging without a broad historical perspective (Swetnam et al. 1999; Jackson et al. 2001; Lotze et al. 2011; Friedlander et al. 2014). The South-Western Atlantic Ocean is inhabited by two species of hake: the Argentine hake Merluccius hubbsi (Marini, 1933) and the Southern hake Merluccius australis (Hutton, 1872), both of which support important commercial fisheries (Bezzi et al. 1995; Cousseau and Perrotta 1998; Bertolotti et al. 2001; Lloris et al. 2005). Commercial hake fishing began in Argentina in the 1960s and targeted mainly Argentine hake. Commercial fishing for Southern hake started in the 1990s, when shallow-water stocks of Argentine hake were declining (Lloris et al. 2005). Both species are similar in morphology and biology, but the Southern hake is tightly linked to the cold Malvinas current, whereas the Argentine hake prevails in warmer waters over the continental shelf (Cousseau and Perrotta 1998; Lloris et al. 2005). Furthermore, the abundance of Argentine hake declines sharply south of 52 °S and coastal summer spawning aggregations do not exist south of 47 °S (Bezzi et al. 1995; Díaz de Astarloa et al.
2011), where annual average SST is 11 °C (Rivas 2010). This suggests that in a warmer world, the Argentine hake could expand southward and replace the Southern hake off Tierra del Fuego. The zooarchaeological record of ancient fishing societies offers an opportunity to explore changes through time in issues such as age-length relationships (Leach and Davidson 2001; Bolle et al. 2004), geographic distribution (Enghoff et al. 2007; Scartascini and Volpedo 2013; Bas et al. 2019) and trophic position (Zenteno et al. 2015; Braje et al. 2017; Szpak et al. 2018; Bas et al. 2019). The fish remains left by societies living in the warmer periods of the Holocene are particularly interesting since they offer a glimpse of a plausible future in the context of global warming (Bas et al. 2019). The zooarchaeological record of Tierra del Fuego dates to the Early Holocene, and hake ( Merluccius sp.) have abounded in many archaeological sites since the Middle Holocene (Torres 2009; Santiago 2013; Zangrando et al. 2016). According to a diversity of proxies, climate and SST during the Middle Holocene in the southernmost tip of South America were warmer than today (Bujalesky 2007; Shevenell et al. 2011; Caniupán et al. 2014). Currently, annual average SST off Tierra del Fuego is 7 °C (Rivas 2010), but it was as high as 11–12 °C at 53 °S during the Middle Holocene (Caniupán et al. 2014). Therefore, data on the biology of hake during the Middle Holocene can inform predictions of the future distributions of both species. The recovery of 9000 skeletal elements from an unknown number of hake specimens at the Río Chico 1 archaeological site (Santiago 2013) clearly demonstrates the existence of a huge coastal population of at least one hake species off north-eastern Tierra del Fuego during the Middle Holocene. That population has since disappeared, and morphological analysis cannot distinguish the skeletal elements of Argentine and Southern hake, except for the hyomandibular and urohyal bones (Lloris et al. 2005). Therefore, the species of hake recovered from Río Chico 1 remains unknown, and nothing is known about the habitat where the Río Chico 1 hake were captured. The strong winds and currents in this region, coupled with the absence of sailing technology during the Middle Holocene, suggest that aboriginal hunter-gatherer-fisher groups likely captured hake onshore, but hard evidence is missing. Stable isotope analysis can be informative about the habitat used by ancient hake, but detailed studies comparing the stable isotope ratios of sympatric Argentine and Southern hakes have not yet been published, and comparison of the existing data is hindered by very small sample sizes (Table 1). Table 1 Stable isotope ratios (δ13C and δ15N) in the muscle and bone of modern specimens of marine fish species from the Patagonian Shelf (PS) and the Beagle Channel (BC) Full size table Here, we analyse mitochondrial DNA of a subsample of hake skeletal elements recovered from Río Chico 1 to separate the species, reconstruct the body size of those specimens through osteometry, and track changes in trophic position and habitat over time by analysing stable isotope ratios of C and N of modern and ancient hake. Materials and methods Study area and sample collection The study area is located on the Atlantic coast of Isla Grande de Tierra del Fuego, Argentina. The samples were collected from two distinct archaeological sites from the Middle Holocene (Online Resource 1).
Two radiocarbon dates are available for the shell midden called Río Chico 1: 5828 ± 46 BP and 5856 ± 44 BP, corresponding to 6558 cal BP and 6260 cal BP (lab codes AA75285 and AA65165, respectively; Santiago 2013). They are comparable to those for the nearby archaeological site known as La Arcillosa 2: 5508 ± 48 BP and 5068 ± 66 BP, corresponding to 5868 cal BP and 5776 cal BP (lab codes AA60934 and AA102166; Salemme et al. 2007, 2014). The faunal assemblage at the shell midden Río Chico 1 is largely dominated by fish elements, followed by bird and mammal bones (Santiago 2013). Conversely, La Arcillosa 2 is a shell midden dominated by mammal remains, followed by bird and fish bones (Salemme et al. 2014). Limpets [ Nacella magellanica (Gmelin, 1791)] and mussels [ Mytilus chilensis (Hupé, 1854)] also occur at both sites; the latter dominate the malacological samples by more than 90% at both archaeological sites (Santiago et al. 2014). Bones from ancient hake of unknown species identity, pink cusk-eel [ Genypterus blacodes (Forster, 1901)] and Patagonian blenny [ Eleginops maclovinus (Cuvier, 1830)], as well as shells of limpet and mussel, were recovered from Río Chico 1 and La Arcillosa 2 (Table 2); all these samples were stored dry until further analysis. To avoid pseudoreplication, only the first vertebra was selected for hake, and bones from the neurocranium with the same laterality were used for the other fish species. Modern adult Argentine hake, Southern hake, pink cusk-eel and Patagonian blenny specimens were captured in the adjoining Atlantic Ocean and their bones were collected for stable isotope analysis (Online Resource 1). Shells from modern limpets and mussels were collected from the open-water beach Punta María, on the adjoining Atlantic Ocean (Online Resource 1). All the specimens were sampled between 2016 and 2017 and stored at −20 °C until further analysis. Table 2 Stable isotope ratios (δ13C and δ15N) of ancient and modern fish from the Atlantic coast of Tierra del Fuego Full size table Pink cusk-eel and Patagonian blenny provide benchmarks for interpreting the stable isotope ratios of C in ancient hake. Previous research has demonstrated no significant differences in stable isotope ratios of C and N across skeletal elements in fish with acellular bone (Bas and Cardona 2018), so the stable isotope ratios of hake, pink cusk-eel and Patagonian blenny can be compared directly even though different skeletal elements were analysed. The trophic positions of the pink cusk-eel and the Patagonian blenny are similar to those of the two hake species (Table 1), but these two species differ in habitat selection. The pink cusk-eel is broadly distributed across the continental shelf and the upper slope (Villarino 1998), although it is rare in very shallow coastal habitats (Pequeño et al. 1995; Cousseau and Perrotta 1998). Conversely, the Patagonian blenny is a coastal species highly associated with shallow areas influenced by freshwater runoff (Pequeño et al. 1995; Cousseau and Perrotta 1998; Quiñones and Montes 2001; Licandeo et al. 2006; Riccialdelli et al. 2017). As a result, the δ13C value of the pink cusk-eel is much lower than that of the sympatric Patagonian blenny, and similar to that of both hake species (Table 1).
Thus, if the ancient hake recovered from Río Chico 1 inhabited more onshore areas than hake do currently, their δ13C values were expected to be significantly higher than those of contemporary pink cusk-eel and modern hake, and closer to those of contemporary Patagonian blenny. Hake identification and taxonomy The species identity of the ancient hake specimens was determined through ancient mitochondrial DNA (mtDNA) analysis. DNA was extracted in the Ancient DNA Laboratory at the University of York, which follows strict protocols for contamination control and detection, including positive pressure, the use of protective clothing, UV sources for workspace decontamination, and laminar flow hoods for extraction and PCR set-up. The 45 hake bones were subsampled prior to isotopic analysis, providing between 2.7 and 35 mg of bone for analysis (see Online Resource 2). The bone samples were crushed using a micropestle, and DNA was extracted using a silica spin column protocol (Yang et al. 1998) modified as reported in Yang et al. (2008); DNA was eluted in a 50 µL volume. Blank extractions and negative controls were included in all extraction batches and PCR amplifications. Primers were designed to target a 295 bp fragment of the mtDNA cytochrome b (cytb) gene, which can distinguish species within the genus Merluccius (Campo et al. 2007), bracketing positions 14,556–14,851 of the M. merluccius mitochondrial genome (GenBank accession NC015120): forward primer F14556 5′-ACCGCAAACGTCGAAATAGC-3′ and reverse primer R14851 5′-GGTACGGCAGACATTAAGTTTGTG-3′. PCR reactions were prepared and amplified following Speller et al. (2012) using an annealing temperature of 55 °C, and successfully amplified PCR products were sequenced using forward and/or reverse primers at Eurofins Genomics, Ebersberg, Germany. The obtained sequences were visually edited using the ChromasPro software and truncated to remove primer sequences, resulting in a final sequence length of 232 bp (positions 14,596–14,827); haplotypes were assigned using this longer fragment. Forty-two sequences were uploaded to the Genetic Sequence Database at the National Center for Biotechnology Information (NCBI) (GenBank ID: MK882658-MK882699). Edited sequences were initially compared with published references through the GenBank BLAST application to ensure that they matched Merluccius species. Multiple alignments of the ancient sequences with 98 published Merluccius sequences were conducted using ClustalW (Thompson et al. 1994) through BioEdit (Hall 2001). Species identifications were assigned through phylogenetic analysis of a 175 bp fragment comparable to the available reference sequences. The ModelTest (Version 2.3) software (Posada and Crandall 1998) was employed to determine the best-fit model (GTR + G, selected by AIC), implemented in MrBayes 3.2.5 (Ronquist and Huelsenbeck 2003). Ten million generations of analyses were performed to produce the phylogeny and clade credibility scores, with a burn-in of one million generations. Phylogenetic trees were created using FigTree 1.4.0 (Rambaut 2007). Size distribution analysis The caudal fin of most of the modern specimens of Argentine and Southern hake sampled for this study was damaged, so fish size was measured as standard length (SL). The overall sample size was larger and the size interval broader for Southern hake ( N = 30; SL = 43.3–69.6 cm) than for Argentine hake ( N = 20; SL = 55.0–67.0 cm). Fish were lightly boiled, the flesh was removed, and the skeleton was disarticulated and stored dry.
The morphometric analysis focused on vertebrae because they are the most common skeletal element recovered from Río Chico 1. The axial skeleton of hake can be differentiated into three types of vertebrae (Online Resource 3), but only the first thoracic vertebra can be individually recognized. Accordingly, the analysis focused on the first vertebra, and three measurements were considered: the dorso-ventral height of the centrum (M1), the medium-horizontal width of the centrum (M2) and the craniocaudal length of the centrum (M3) (Online Resource 3; Morales and Rosenlund 1979). The morphometric analysis was conducted on the two species, but only the equation relating the dorso-ventral height of the centrum to the SL of Southern hake was used to assess the size of the ancient hake (see below). This is because the dorso-ventral height of most of the centra recovered from Río Chico 1 was outside the range of the centra measured from modern Argentine hake, indicating that the specimens recovered from Río Chico 1 were much smaller than those in our sample of Argentine hake. Conversely, the range of the centra recovered from Río Chico 1 overlapped broadly with that of the centra from the sample of modern Southern hake. Furthermore, the equations derived for the two hake species yield the same results over the overlapping range of SL values (55.0–67.0 cm). Stable isotope analysis As noted above, all modern samples were stored in a freezer at −20 °C until analysis. Soft tissues were removed from limpets and mussels, and the shells were rinsed with water, dried at room temperature and lightly scraped with sandpaper to remove epibionts. Fishes were thawed at room temperature, boiled for 5–10 min and dissected to remove the selected bones. Shells and bones were later dried in an oven at 60 °C for 24 h. Once dry, each sample was ground to a fine powder with mortar and pestle and divided into two subsamples. This is because calcium carbonate and lipids have to be removed to obtain unbiased δ13C values (Newsome et al. 2006; Guiry et al. 2016; Bas and Cardona 2018), but demineralization consistently increases the δ15N values of the organic matrix (Bas and Cardona 2018). For one subsample ("bulk" hereafter), approximately 0.7 mg of bone powder was weighed into 3.3 × 5 mm tin cups, 7 mg of modern shell powder was weighed into 5 × 8 mm tin cups and 14 mg of ancient shell powder was weighed into 5 × 8 mm tin cups. The other bone subsample ("dml" hereafter) was dried again for 24 h at 60 °C and rinsed with a 2:1 chloroform:methanol solution to remove lipids (Folch et al. 1957). The chloroform:methanol solution was changed overnight until it was transparent. Bone dml subsamples were then dried again for 24 h at 60 °C and demineralized with 0.5 N hydrochloric acid (HCl) until no more CO2 bubbles were released (Newsome et al. 2006; Bas and Cardona 2018). Shell dml subsamples were first demineralized by soaking in 1 N HCl until no more CO2 was released (Saporiti et al. 2014a). After demineralization, shell dml subsamples were rinsed with distilled water for 24 h, dried again for 24 h at 60 °C and mixed with a 2:1 chloroform:methanol solution to remove lipids. The chloroform:methanol solution was changed overnight until it was transparent. Then, the samples were dried again for 24 h at 60 °C and 0.5 mg was weighed into 3.3 × 5 mm tin cups.
All tin cups were combusted at 900 °C and analysed in a continuous flow isotope ratio mass spectrometer (Flash 1112 IRMS Delta C Series EA, Thermo Finnigan) at the Centres Científics i Tecnològics de la Universitat de Barcelona, Spain. Gases from the combustion of bulk shell samples passed through a CO2-absorbent column containing CaO/NaOH before elemental analysis. This was done to avoid saturating the spectrometer with CO2, because CaCO3 constitutes over 90% of the shell samples and a large amount of shell had to be combusted to obtain enough N to measure δ15N values. The abundance of stable isotopes is expressed using the δ notation, in which relative variations of stable isotope ratios are expressed as per mil (‰) deviations from predefined reference scales: Vienna Pee Dee Belemnite (VPDB) calcium carbonate for δ13C and atmospheric nitrogen (AIR) for δ15N. Because international standards are available only in limited supply, laboratory isotopic reference materials with known isotopic compositions relative to the international measurement standards were analysed instead. These isotopic reference materials were used to recalibrate the system once every 12 samples and to compensate for any measurement drift over time. The raw data were recalculated using a linear regression previously calculated for the isotopic reference materials (Skrzypek 2013). Following Bas and Cardona (2018), only δ13Cdml and δ15Nbulk values were used for later analyses. Furthermore, the carbon to nitrogen (C:N) atomic ratio of each dml subsample was used to assess the efficiency of lipid extraction (DeNiro 1985). The stable isotope ratios and the C:N ratios of all the samples are available as Online Resource 4. Data analysis First, linear regression was used to identify which of the three dimensions of the vertebral centrum (M1, M2 and M3) best predicted the SL of modern hake specimens, and that dimension was used to reconstruct the size of the ancient hake. Second, the stable isotope ratios of modern and ancient organisms cannot be compared directly, because the isotopic baseline may vary temporally (Casey and Post 2011). Nonetheless, the proteins that make up the organic matrix of mollusc shells are preserved and unaffected by diagenetic changes (Misarti et al. 2017), hence offering material suitable for reconstructing changes in the isotopic baseline (Casey and Post 2011; Drago et al. 2017; Misarti et al. 2017; Vales et al. 2017). Accordingly, δ13C and δ15N values of ancient limpets and mussels from Río Chico 1 and La Arcillosa 2 and of modern conspecifics were compared independently using General Linear Models (GLM), as run in IBM SPSS Statistics (Version 23.0.0.2 for Mac), with two fixed factors (period and species), followed by Tukey's (HSD) post-hoc tests to assess temporal variation in the δ13C and δ15N values of shells. This approach was not possible for fish, because all the ancient Argentine hake and pink cusk-eel samples come from Río Chico 1 and all the ancient Patagonian blenny samples come from La Arcillosa 2. Third, Pearson correlation coefficients were computed to assess linear relationships between δ13C, δ15N and trophic position (TP) values and fish size for both hake species and both periods.
According to the correlation results, GLM was run with one fixed factor (species or period) and SL as a covariate to compare the average δ13C, δ15N and TP values of each species in the two periods considered, after correcting for any baseline shift following the mollusc stable isotope ratios. In cases in which correlations were not significant, a Student's t-test was performed instead of a GLM with SL as a covariate. Normality and homoscedasticity assumptions were checked by means of the Lilliefors and Levene tests, respectively. The trophic position of each species (TPp) was calculated as: $$\mathrm{TP}_p = \left[\left(\delta^{15}\mathrm{N}_p - \delta^{15}\mathrm{N}_m\right)/3\right] + 2$$ where δ15Np is the average δ15N value of each species, δ15Nm is the average δ15N value of molluscs, 3 is the trophic discrimination factor (TDF), and mussels and limpets were considered herbivores at TP = 2 (Caut et al. 2009). Baseline corrections are necessary to compare the stable isotope ratios of consumers from different systems, as differences in the stable isotope ratios of primary producers propagate to consumers. On the contrary, no correction is needed to compare the TP of consumers from different systems, because TP is calculated independently for each food web and hence accounts for any difference in the isotopic baseline. Fourth, SIBER (Stable Isotope Bayesian Ellipses in R; Jackson et al. 2011) was used to calculate standard ellipses and compare the size of the isotopic niche of the Argentine hake population in each period (Layman et al. 2007). The areas of the convex hull and the standard ellipse are independent of any difference in the isotopic baseline. For Argentine hake, in both the ancient and the modern specimens, the total area of the convex hull (TA) and two estimates of the ellipse area (SEAc and SEAB) were calculated. SEAc is the area of the standard ellipse corrected for small sample size, but carries no information about the associated error (calculated with p.interval = 0.95). SEAB is the Bayesian estimate of the standard ellipse area and is reported as median values and 95% credible intervals, as calculated by SIBER. All code for the SIBER analyses is contained in the SIBER package (Jackson et al. 2011). Results Hake identification and taxonomy Mitochondrial DNA was successfully extracted and amplified from 42 of the 45 samples, identifying six cytb haplotypes (see Online Resource 2); phylogenetic analysis assigned all six haplotypes to M. hubbsi (Fig. 1). The majority ( N = 30) of the samples carried the same cytb haplotype (Mhub2). Seven samples were assigned to haplotype Mhub1, two samples to Mhub5, and three samples carried unique cytb haplotypes (Mhub3, Mhub4 and Mhub6). No amplifications were observed within the blank extractions and negative controls; all unique haplotypes underwent repeat amplification and sequencing to ensure that the polymorphisms were not the result of DNA damage or sequencing error. Fig. 1 Phylogenetic tree displaying the relationships between the ancient haplotypes obtained here (denoted by bold type) and published Merluccius cytochrome b sequences from GenBank. The Bayesian (Markov chain Monte Carlo) consensus tree was composed using MrBayes 3.2.5 with Atlantic cod ( Gadus morhua NC002081) as the outgroup. Model parameters (GTR + G) were identified through MrModelTest 2.3 and consensus trees were generated from two runs of MrBayes using 10 million generations each.
Posterior probabilities of the major nodes are listed for each of the branches Full size image Size distribution analysis The dorso-ventral height of the centrum (M1) was the best predictor of fish body size (F1,28 = 422.13; P < 0.001; r2 = 0.938) and hence was used to calculate the size of the ancient hake recovered from Río Chico 1 following this equation: $$\mathrm{SL} = \left(2.13 + \mathrm{M1}\right)/0.016$$ The recovered hake ranged from 37.2 to 58.1 cm SL. Stable isotope analysis Collagen yield from fish bones ranged 12.3–22.5% and the C:N ratio ranged 2.3–3.7 (Online Resource 4). Body size (SL) was weakly correlated with δ15N and TP values in modern Southern hake (r = 0.404, P = 0.013 in both cases), but was uncorrelated with δ13C values (r = 0.197, P = 0.149). On the other hand, body size (SL) of modern Argentine hake was uncorrelated with δ13C, δ15N or TP (r = 0.011, P = 0.483; r = 0.264, P = 0.145; r = 0.263, P = 0.146, respectively), but there was a weak, positive correlation between δ13C values and SL in ancient Argentine hake (r = 0.417, P = 0.003). There was no correlation between δ15N or TP values and SL in ancient Argentine hake (r = 0.197, P = 0.109; r = 0.195, P = 0.111, respectively). The stable isotope ratios of C and N from mollusc shells of each period were normally distributed and homoscedastic. Statistically significant differences existed between the δ13C values of limpets and mussels, as well as through time, but there was a statistically significant period × species interaction term (Table 3). Tukey's post-hoc tests revealed that the significant interaction term emerged because the δ13C values of modern limpets were higher than those of conspecifics from both ancient sites; this was also true for the δ13C values of modern mussels compared with their conspecifics from La Arcillosa 2, but modern mussels did not differ from the ancient mussels from Río Chico 1 (Fig. 2). Conversely, differences in the δ15N values of contemporary mussels and limpets were not statistically significant, but they changed over time, although there was a significant period × species interaction term (Table 3). Tukey's post-hoc tests revealed that the significant interaction term resulted because the δ15N values of modern mussels were lower than those of conspecifics from both ancient sites, and the same was true for the δ15N values of modern limpets compared with ancient conspecifics from La Arcillosa 2, but not compared with ancient limpets from Río Chico 1 (Fig. 2). Nevertheless, it should be noted that the mean δ13C and δ15N values of both limpets and mussels from Río Chico 1 were always in between those of La Arcillosa 2 and Punta María (Fig. 2). This suggests that a baseline shift certainly existed between ancient (Río Chico 1 and La Arcillosa 2) and modern (Punta María) δ13C and δ15N values, and that the inconsistent differences between Río Chico 1 and Punta María likely result from a small sample size and low statistical power. Because of this, we calculated correction factors to account for baseline shifts and allow the comparison of ancient and modern fish stable isotope ratios. To do so, we averaged the stable isotope ratios of ancient limpets and mussels from each site and calculated the offset from the average stable isotope ratios of modern limpets and mussels (Río Chico 1: δ13C = 2.64‰ and δ15N = 0.93‰; La Arcillosa 2: δ13C = 4.65‰ and δ15N = 2.61‰).
Later, we subtracted the offset from the δ13C and δ15N values of the ancient fish samples to correct for the baseline shift and allow comparison with modern values. Baseline-corrected values are shown in Table 2 as δ13Ccorr and δ15Ncorr. The δ13C and δ15N values of the Argentine hake and pink cusk-eel from Río Chico 1 were corrected using the correction factor calculated for that archaeological site (see above), and the same was done for the δ13C and δ15N values of the Patagonian blenny from La Arcillosa 2. Table 3 Summary statistics of GLMs to assess the effect of sampling period and species identity (fixed factors) on the temporal variation of the δ13C and δ15N values in shells from ancient (Río Chico 1 and La Arcillosa 2; Online Resource 1) and modern (Punta María; Online Resource 1) samples Full size table Fig. 2 Stable isotope ratios (mean ± standard deviation) of ancient and modern mollusc shells from the Atlantic coast of Tierra del Fuego. Modern mollusc shells were collected in Punta María (PM; Online Resource 1). Ancient samples were recovered from the Río Chico 1 (RC1) and La Arcillosa 2 (LA2) archaeological sites (Online Resource 1). Homogeneous subsets were identified according to the results of post-hoc Tukey's tests. Letters denote statistically significant differences between δ13C and δ15N values from different periods and species, respectively. Sample size is N = 5 for each species and location. Consult Online Resource 4 for the raw stable isotope data Full size image Currently, Argentine and Southern hake differ significantly in δ15N, TP (Table 4) and δ13C values (t48 = −2.317, P = 0.025). On the other hand, the δ13Ccorr values of ancient Argentine hake were significantly higher than those of modern ones (Table 4). Nonetheless, there were no significant differences between modern and ancient Argentine hake in δ15Ncorr or TP (t59 = 0.887, P = 0.379; t59 = 0.077, P = 0.939, respectively). This result suggests that adult Argentine hake have not changed their trophic position over time, but have moved offshore. In line with these results, the credible intervals of the SEAB of ancient and modern Argentine hake overlap (Table 2), and so do the SEAc ellipses, although asymmetrically (Fig. 3): the overlapping area corresponds to 58.6% of the modern ellipse but only 29.7% of the ancient one. In general, the SEAc, SEAB and TA values are higher in ancient Argentine hake than in modern ones, suggesting a higher diversity of individual foraging strategies in the past. Table 4 Summary statistics of GLMs to assess the effect of species identity (fixed factor) and body size (covariate) on δ15N and TP of modern Argentine and Southern hake, and the effect of period (fixed factor) and body size (covariate) on the δ13C values of modern and ancient Argentine hake Full size table Fig. 3 Isotopic niches of modern Argentine hake from the Atlantic coast of Tierra del Fuego ( N = 20) and ancient Argentine hake recovered from Río Chico 1 ( N = 42), as revealed by standard ellipse areas corrected for small sample size (SEAc) Full size image The topologies of ancient and modern fish in the δ13C–δ15N isospace revealed two major differences at the species level (Fig. 4).
The first and most relevant was the dissimilar position of ancient Argentine hake relative to the pink cusk-eel and the Patagonian blenny, suggesting a more coastal habitat for Argentine hake during the Middle Holocene. Second, Patagonian blenny had lower δ15N values in the past, revealing differences in trophic position between ancient and modern samples. Fig. 4 Scatterplot of pelagic and benthic fishes in the δ13C–δ15N space. Top panel: ancient samples (6000–5000 cal BP). Bottom panel: modern samples. Error bars show standard deviation Full size image Discussion The results of this study demonstrate that ancient Argentine hake inhabiting the Atlantic coast of Tierra del Fuego during the Middle Holocene had a broader isotopic niche and foraged, on average, in more coastal habitats than currently. This suggests that ancient Argentine hake might have gathered, at least seasonally, close enough to the shore to be captured by ancient hunter-gatherer-fishers devoid of sailing technology. Currently, dense coastal aggregations of hake occur only north of 47 °S during summer spawning (Bezzi et al. 1995; Díaz de Astarloa et al. 2011), and annual average SST is at least 11 °C in those areas (Rivas 2010). On the other hand, SST off north-eastern Tierra del Fuego is 7 °C (Rivas 2010) and spawning aggregations do not exist there (Bezzi et al. 1995; Díaz de Astarloa et al. 2011). If environmental conditions in a warmer world were to match those prevailing in the Middle Holocene, the results reported here indicate that Argentine hake might become more abundant in the foreseeable future and form summer spawning aggregations off north-eastern Tierra del Fuego. Nevertheless, this interpretation relies on our capacity for accurate hake species identification, body length assessment and habitat use reconstruction. Body size can be inferred confidently from fish skeletal elements (Casteel 1976; Smith 1995; Leach and Davidson 2001; Gabriel et al. 2012; Lernau and Ben-Horin 2016), and our results confirm that the dorso-ventral height of the centrum of the first vertebra of hake allows an accurate estimation of body size. On the other hand, the morphology of the first vertebra is of little use for telling apart closely related species of hake, which is not surprising, as the skeletal elements of species from the same genus are often rather similar and have little diagnostic value (Cannon 1988; Yang et al. 2004; Lloris et al. 2005). In this scenario, only molecular methods allow species identification, as long as DNA is preserved in the bone tissue. This is not a problem in the cold environment of Tierra del Fuego, as demonstrated by the high recovery rate for mtDNA reported here and in Evans et al. (2016), but it could be more difficult in warmer regions. Interpreting changes in stable isotope ratios is more challenging because of potential confounding factors. The most important is the likely historic change in the isotopic baseline, which can be addressed only through adequate correction (Casey and Post 2011). There is growing evidence that the organic matrix of mollusc shells offers a good record of changes in the isotopic baseline (Hill et al. 2006; Casey and Post 2011; Misarti et al. 2017) and is not affected by diagenetic changes (Misarti et al. 2017). Thus, the stable isotope ratios of samples from different periods can be compared directly after an appropriate correction for baseline shifts.
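For concreteness, the quantitative chain described in the Methods, from vertebra-to-length regression through shell-based baseline correction to trophic position, can be sketched as below. The regression constants and the Río Chico 1 offset are taken from the text; the measurement values are illustrative placeholders, the offset sign convention follows the subtraction described above, and the assumption that M1 and SL are both in millimetres (which reproduces the reported 37–58 cm size range) should be checked against the original osteometric protocol.

```python
import numpy as np

def standard_length_mm(m1_mm):
    """Reconstruct hake standard length from the dorso-ventral height of
    the first vertebra centrum (M1), using SL = (2.13 + M1) / 0.016 from
    the text; units assumed to be mm for both quantities."""
    return (2.13 + m1_mm) / 0.016

def baseline_correct(delta_ancient, offset):
    """Subtract the site-specific, shell-derived offset from ancient fish
    isotope values, as described for the d13Ccorr and d15Ncorr columns."""
    return delta_ancient - offset

def trophic_position(d15n_species, d15n_mollusc, tdf=3.0):
    """TP = [(d15N_species - d15N_mollusc) / TDF] + 2, with molluscs
    (mussels and limpets) treated as herbivores at TP = 2."""
    return (d15n_species - d15n_mollusc) / tdf + 2.0

# Illustrative values only, not the study's measurements.
m1 = np.array([3.8, 5.5, 7.2])               # centrum heights, mm
print(standard_length_mm(m1))                # ~371, 477, 583 mm (37-58 cm)

d13c_ancient = np.array([-13.2, -14.0])      # ancient hake d13C, per mil
print(baseline_correct(d13c_ancient, 2.64))  # Rio Chico 1 d13C offset from the text

print(trophic_position(d15n_species=17.5, d15n_mollusc=8.5))  # TP = 5.0
```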
Nevertheless, it should be noted that the organic matrix of mollusc shells is a mixture of proteins and chitin, a polysaccharide containing N (Furuhashi et al. 2009). As a result, the C:N ratio of the organic matrix of mollusc shells containing equal amounts of protein and chitin is close to 5.5 and hence differs from that of collagen. This does not reduce the suitability of mollusc shells for correcting changes in the isotopic values, although a different benchmark is needed to correctly assess sample preservation and the removal of carbonates and lipids. Here we used the stable isotope ratios of C and N in the organic matter of mollusc shells to detect temporal shifts in the isotopic baseline. We averaged the stable isotope ratios in the shells of one species of suspension feeder (mussel) and one species of grazer (limpet), as in previous studies (Saporiti et al. 2014a, b; Zenteno et al. 2015; Bas et al. 2019). The coastal habitat of mussels and limpets is an obvious limitation, because the baseline changes recorded in such coastal species do not necessarily parallel those operating in the offshore habitats used by hake. Unfortunately, no offshore molluscs occur in the middens from Tierra del Fuego, so we assessed changes in the habitat use of hake using two different approaches, one of them dependent on and the other independent of any baseline correction (see below). The δ15N values of contemporary limpets and mussels did not differ, as they have the same trophic position (herbivores). Nevertheless, their δ15N values changed through time, except those of limpets from Río Chico 1, which is likely the consequence of small sample size and low statistical power. A declining pattern in the δ15N values of coastal molluscs had been previously reported for the South-Western Atlantic Ocean since the Middle Holocene and interpreted as indicative of a steady decline in marine primary productivity (Saporiti et al. 2014b; Bas et al. 2019). The reason for such a decline is unknown, but more intense upwelling has been predicted in eastern boundary currents at high latitudes as a result of global warming (Bakun 1990; Sydeman et al. 2014). The δ13C values of mollusc shells also revealed a baseline change over time, but the reasons remain poorly understood. Again, one species from Río Chico 1 did not differ from modern conspecifics, which is likely the consequence of small sample size and low statistical power, but the overall evidence strongly suggests an increase in δ13C values. The massive burning of fossil fuels since the Industrial Revolution has resulted in a drop in the δ13C values of the atmosphere and the ocean during the past 150 years. This drop, known as the Suess effect, has been about −0.5‰ in the top 100 m of the water column off Tierra del Fuego (Eide et al. 2017) and should be recorded in the mollusc shells. However, the δ13C values of limpets and mussels are currently higher than in the Middle Holocene, not lower. This means that other processes have operated to alter the isotopic baseline, obscuring the Suess effect. Changes in the δ13C values of the primary producers associated with declining marine primary productivity, changes in the relative contribution of different types of primary producers to the organic carbon pool fuelling the food web, or both might have operated. Those changes propagated to fish, as the δ13C values of all the fish species analysed are currently higher than during the Middle Holocene.
Interestingly, changes in the δ13C values of benthic consumers (limpets, pink cusk-eel and Patagonian blenny; 4.2‰, 3.5‰ and 5.9‰, respectively) were more intense than those in species supported ultimately by phytoplankton (mussels and hake; 1.0‰ and 0.6‰, respectively). The δ13C values of another pelagic forager, the Fuegian sprat (Sprattus fuegensis) from the nearby Beagle Channel, have also increased by approximately 0.4‰ since 1100 years BP (Bas et al. 2019), thus reinforcing the pattern of a less intense enrichment in 13C in phytoplankton-dependent species. Results reported in this study also revealed significantly lower average δ13C values in modern Southern hake than in contemporary Argentine hake, although their ranges overlap. Previous studies also reported large, overlapping ranges of δ13C values in the muscle and bone tissue of these species, but formal statistical comparisons were hindered by very small sample sizes (Ciancio et al. 2008; Zangrando et al. 2016). For δ15N and TP, the overall stable isotope evidence (Ciancio et al. 2008; this study) confirms that adult Southern hake have a lower TP than Argentine hake of similar size. Crustaceans dominate the diet of Argentine hake smaller than 30 cm, and fish and cephalopods become more important as hake grow larger, although little change in diet is observed for specimens larger than 50 cm (Angelescu and Prensky 1987; Belleggia et al. 2014; Botto et al. 2019). This explains why the δ15N and TP values of the adult Argentine hake studied here are uncorrelated with size. The diet of the Southern hake is poorly studied, but a delayed ontogenetic shift compared with Argentine hake might explain why δ15N and TP values are correlated with size even in adult fish. Further research is required on this topic. The most important finding of this study is the variation in the δ13C values of Argentine hake since the Middle Holocene and the resulting shift in the topology of the fish community. Ancient and modern Argentine hake differ not only in average δ13Ccorr values: modern Argentine hake also have a smaller standard ellipse area than ancient ones, which is independent of any correction and thus reveals a much narrower isotopic niche. Furthermore, a large fraction of the isotopic niche of modern Argentine hake is encompassed by that of ancient conspecifics. Differences between average δ13Ccorr values and the actual overlap between the standard ellipses of ancient and modern Argentine hake are sensitive to the correction factor used to account for baseline shifts, but differences in standard ellipse area are independent of any correction. Thus, there is little doubt that modern Argentine hake have a narrower isotopic niche than ancient ones. A similar reduction has been reported for other species from the South-Western Atlantic Ocean that have been intensely exploited since the arrival of European societies in South America (Drago et al. 2017; Bas et al. 2019). Changes in the topology of ancient and modern Argentine hake within the δ13C–δ15N biplot in relation to pink cusk-eel and Patagonian blenny are also independent of any baseline correction. Pink cusk-eel and Patagonian blenny differ largely in current habitat use (Pequeño et al. 1995; Cousseau and Perrotta 1998; Villarino 1998; Quiñones and Montes 2001; Licandeo et al. 2006; Riccialdelli et al. 2017) and δ13C values (Ciancio et al. 2008; this study), and they also differed largely in their δ13C values in the Middle Holocene (this study).
The δ13C values of ancient Argentine hake were between those of ancient pink cusk-eel and Patagonian blenny from the same water mass, but those of modern Argentine hake are more depleted in 13C than those of both contemporary pink cusk-eel and Patagonian blenny. This is strong evidence that a large fraction of the ancient hake population foraged in shallower areas than currently. Furthermore, the standard deviation of δ13C was broader in ancient Argentine hake, thus revealing a higher diversity of foraging strategies, consistent with the broader isotopic niche reported above. Conversely, there has been no change in the trophic position of Argentine hake off Tierra del Fuego since the Middle Holocene. Note that the differences in the trophic position of ancient and modern Patagonian blenny likely reflect large differences in body size, as this species exhibits a strong ontogenetic dietary shift (Lloris and Rucabado 1991; Cousseau and Perrotta 1998; Martin and Bastida 2008; Bas and Cardona 2018) and the premaxillae of the Patagonian blennies recovered from La Arcillosa 2 were much larger than those of the modern blennies used for reference. Changes in SST might have contributed substantially to the habitat shift of Argentine hake reported here. Currently, the population density of Argentine hake is higher in the northern part of the Argentine shelf (Boltovskoy 1981; Cousseau and Perrotta 1998), and the southernmost coastal spawning aggregations are reported from areas less than 50 m deep off Comodoro Rivadavia (46°S) (Bezzi et al. 1995; Díaz de Astarloa et al. 2011; Botto et al. 2019), where the average SST is 11 °C (Rivas 2010). South of that latitude, Argentine hake are distributed in deeper and colder waters and no coastal spawning aggregation is known (Bezzi et al. 1995; Díaz de Astarloa et al. 2011). Currently, the average SST off north-eastern Tierra del Fuego is 7 °C (Rivas 2010), but the average SST at that latitude during the Middle Holocene has been reported to be 11–12 °C (Nielsen et al. 2004; Bentley et al. 2009; Caniupán et al. 2014). Hence, SST values off the Río Chico 1 archaeological site during the Middle Holocene likely matched those currently observed off Comodoro Rivadavia. This suggests that Argentine hake would have reached the coast off north-eastern Tierra del Fuego during Middle Holocene summers to spawn, thus becoming vulnerable to hunter-gatherer-fisher people devoid of sailing technology. In conclusion, increasing SST resulting from global warming could lead to an increase in the abundance of adult Argentine hake close to the shore of Tierra del Fuego and the development of onshore spawning aggregations. Combined with increased primary productivity, this could result in major changes in the fishing industry of the region.
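Two of the quantitative metrics used throughout the results above can be summarized compactly: trophic position is conventionally derived from δ15N via a Post-style formula, and the isotopic niche is summarized by the standard ellipse area with a small-sample correction (SEAc), normally obtained with the SIBER package in R. The Python sketch below illustrates both under conventional assumptions (a trophic enrichment factor of 3.4‰ per level and a baseline TP of 2 for herbivorous molluscs); it is not the authors' exact pipeline, and the example data are hypothetical.

```python
# Minimal sketches of two metrics used in the text; data are hypothetical.
import numpy as np

def trophic_position(d15n_consumer, d15n_baseline, tef=3.4, tp_base=2.0):
    """Post-style trophic position: TP = TP_base + (d15N - d15N_base) / TEF."""
    return tp_base + (d15n_consumer - d15n_baseline) / tef

def sea_c(d13c, d15n):
    """Standard ellipse area corrected for small sample size (SEAc)."""
    n = len(d13c)
    lam = np.linalg.eigvalsh(np.cov(d13c, d15n))  # squared semi-axis lengths
    sea = np.pi * np.sqrt(lam[0] * lam[1])        # ellipse area = pi * a * b
    return sea * (n - 1) / (n - 2)                # small-sample correction

print(trophic_position(17.8, 10.2))               # ~4.2 for a hypothetical hake

rng = np.random.default_rng(0)                    # hypothetical isospace samples
modern = rng.multivariate_normal([-17.0, 17.5], [[0.3, 0.1], [0.1, 0.4]], 20)
ancient = rng.multivariate_normal([-15.5, 17.4], [[0.9, 0.2], [0.2, 0.6]], 42)
print(sea_c(*modern.T), sea_c(*ancient.T))        # per mil squared; ancient larger
```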
Global warming will modify the distribution and abundance of fish worldwide, with effects on the structure and dynamics of food webs. However, making precise predictions about the consequences of this global phenomenon is hard without a broad historical perspective. A study carried out at the University of Barcelona and the Southern Centre for Scientific Research (CADIC-CONICET, Argentina) analyzed the potential implications of the warming of marine waters for the distribution of the Argentine hake (Merluccius hubbsi). The study is based on the analysis of the structure of marine ecosystems from 6,000 to 5,000 years ago, when temperatures were warmer than now. The results show this species could expand southwards and reach the coast of the extreme southern area of South America, as it did in the past. According to the researchers, this approach makes it possible to predict the transformations that climate change will cause in the marine environment, with important ecological and economic implications. The study, published in the journal Oecologia, is part of the doctoral thesis of the researcher Maria Bas, member of CADIC-CONICET and the Biodiversity Research Institute (IRBio) of the University of Barcelona, co-supervised by the tenure-track lecturer Lluís Cardona, from the Research Group on Large Marine Vertebrates at the Department of Evolutionary Biology, Ecology and Environmental Sciences of the Faculty of Biology and IRBio, and by the expert Ivan Briz i Godino, from CADIC-CONICET. The University of York (United Kingdom) and the University of British Columbia (Canada) have also taken part in the study. The Middle Holocene, a plausible view of the future Researchers focused on the Atlantic coast of Isla Grande in Tierra del Fuego, in the extreme south of Argentina, where the hake is a key species for industrial fisheries. They collected samples from two archaeological sites dating from the Middle Holocene, that is, between 6,000 and 5,000 years ago, a period when, according to climate models, temperatures were analogous to those we are heading towards in the future. "Remains from fish that lived in the warmest periods of the Holocene are especially interesting, since they offer a plausible view of the future in the context of global warming. At the moment, the average annual temperature of the sea surface in Tierra del Fuego is about 7ºC, but during the Middle Holocene it reached 11 and 12ºC. Therefore, data on the biology of the hake during this period can provide information on the distribution of this species in the near future," note the authors. The presence of remains of hake specimens in the archaeological site Río Chico 1, in the north of Tierra del Fuego (Argentina), shows the existence of a large population of hake in the north-east of Tierra del Fuego during the Middle Holocene. Since then, this population has disappeared due to the cooling temperatures, and its habitat remained unknown. Changes in the distribution of the Argentine hake In order to discover the habitat of these fish, the first step in the study was to identify the remains through mitochondrial DNA analysis and to reconstruct the size of the ancient specimens. Then, researchers used carbon and nitrogen stable isotope analysis to study changes in trophic position and habitat use over time.
This technique enables researchers to obtain information on the diet and the environment of species that lived in the recent past, since this information is recorded in the isotopic signal of bone. Results show that the Argentine hake that lived on the Atlantic coast of Tierra del Fuego during the Middle Holocene had a broader isotopic niche and fed in more coastal habitats than their modern counterparts. "This information, combined with the strong winds and currents of the region and the lack of sailing technology during the Middle Holocene, suggests that groups of aboriginal hunter-fisher-gatherers were likely fishing from the shore," note the authors. If the environmental conditions of a warmer world match those that prevailed during the Middle Holocene, the Argentine hake could become more abundant on the Argentine continental shelf off Tierra del Fuego. "From a fishing perspective, this situation suggests a potential increase of resources in the shallow waters off Tierra del Fuego, with important changes in the fishing industry of this region," highlights Lluís Cardona. According to the researchers, this methodology can be used with other species and in other areas of the planet. "In the future, we would like to know the changes that have taken place in the distribution and ecological niche of hake and cod in European waters," concludes the researcher.
10.1007/s00442-020-04667-z
Chemistry
Wound-healing biomaterials activate immune system for stronger skin
Activating an adaptive immune response from a hydrogel scaffold imparts regenerative wound healing, Nature Materials (2020). DOI: 10.1038/s41563-020-00844-w , www.nature.com/articles/s41563-020-00844-w Journal information: Nature Materials
http://dx.doi.org/10.1038/s41563-020-00844-w
https://phys.org/news/2020-11-wound-healing-biomaterials-immune-stronger-skin.html
Abstract Microporous annealed particle (MAP) scaffolds are flowable, in situ crosslinked, microporous scaffolds composed of microgel building blocks and were previously shown to accelerate wound healing. To promote more extensive tissue ingrowth before scaffold degradation, we aimed to slow MAP degradation by switching the chirality of the crosslinking peptides from l- to d-amino acids. Unexpectedly, despite showing the predicted slower enzymatic degradation in vitro, d-peptide crosslinked MAP hydrogel (d-MAP) hastened material degradation in vivo and imparted significant tissue regeneration to healed cutaneous wounds, including increased tensile strength and hair neogenesis. MAP scaffolds recruit IL-33 type 2 myeloid cells, which is amplified in the presence of d-peptides. Remarkably, d-MAP elicited significant antigen-specific immunity against the d-chiral peptides, and an intact adaptive immune system was required for the hydrogel-induced skin regeneration. These findings demonstrate that the generation of an adaptive immune response from a biomaterial is sufficient to induce cutaneous regenerative healing despite faster scaffold degradation. Main The goal of regenerative medicine is to restore tissue function back to physiological activity. For biomaterial scaffolds, the optimal strategy to achieve this requires balancing material degradation with tissue regrowth. Clinical and patient factors contribute to a wide variation in chemical and physical parameters in situ, which makes striking a degradative–regenerative balance particularly difficult. Our recent development of a flowable, granular biomaterial, that is, a microporous annealed particle (MAP) gel, provides a new approach to make the balance more feasible 1 . The MAP gel is composed of randomly packed microsphere building blocks with a continuous network of interconnected micrometre-scale void spaces that allows for the infiltration of surrounding tissue without the prerequisite of material degradation 1,2 . This unique design resulted in improved tissue closure and improved vascularization relative to a nanoporous (but chemically equivalent) hydrogel formulation in a cutaneous wound model 1 . Mechanical support of the growing tissue by scaffolds is inherently impacted by the degradation rate of the scaffold 3 . For MAP scaffolds, degradation leads to a slow loss of porosity and reduced tissue ingrowth prior to dissolution. We hypothesized that slowing the degradation rate of MAP scaffolds would maintain the porosity and influence both the wound closure rate and the regenerated tissue quality. Changing the chirality of peptide moieties leads to a diminished degradation rate by endogenously present enzymes 4,5 . The use of chirality was made more attractive by the fact that polypeptides of d-enantiomeric amino acids do not typically elicit a robust immune response and are considered poorly immunogenic 5 . Previously, we used amino acid chirality to tune the proteolysis rate of peptide nanocapsules for the controlled release of encapsulated growth factors 4 . Therefore, we chose an analogous approach to slow the enzymatic degradation of our MAP scaffold by switching the chirality of the peptide crosslinker (for example, l- to d-chirality at the site of matrix metalloprotease (MMP)-mediated bond cleavage).
We hypothesized that this approach would maintain the hydrogel microenvironment (for example, charge-based interactions and hydrophobicity) while increasing the long-term hydrogel integrity to allow a full infiltration of cells, and thus provide a greater integration of the entire construct with the host tissue. In the current study, we investigated how MAP hydrogels crosslinked with either d- or l-amino acid crosslinking peptides affect wound healing and skin regenerative responses using murine wound models. We provide evidence that activation of specific immune responses by the d-amino acid crosslinked MAP hydrogels elicits skin regeneration. Although immunity undoubtedly activates the foreign body response and eventual fibrosis of some implanted biomaterials 6,7 , the activation of the correct immune responses may enhance the regenerative ability of a biomaterial 8,9 . d-chiral crosslinker peptides slow MAP degradation in vitro We first used enantiomeric peptides to change the degradation rate without changing the initial material properties (for example, hydrophobicity, mesh size and charge) of the hydrogel 4 . All amino acids at the site of enzymatic cleavage of the MMP-degradable peptide were changed to d-amino acids (Ac-GCRDGPQdGIdWdGQDRCG-NH2, d-peptide). We matched the stiffness (that is, storage modulus) by rheology of both the d-peptide MAP (d-MAP) and l-peptide (l-MAP) formulations to that used in our previous MAP-based cutaneous application (~500 Pa; Fig. 1a). After formulation optimization, we generated the microsphere particles using a previously published microfluidic technique 1 . Following the application of collagenase I to l-MAP, d-MAP or a 50% mixture of d-MAP and l-MAP (1:1 l/d-MAP), the l-MAP hydrogel degraded within minutes, whereas the degradation of the d-MAP by itself or within a mixture with l-MAP was minimal even after one hour (Fig. 1b and Supplementary Fig. 1). Fig. 1: d-MAP hydrogel degradation is enhanced in wounds of SKH1 hairless mice. a, Rheological characterization of MAP hydrogels composed of l- or d-peptide crosslinked microgels. The r ratio (ratio of sulfhydryl (SH) to vinyl sulfone (VS)) used to form the microgels was changed to arrive at the same storage modulus for both l- and d-MAP scaffolds. NS, no statistical significance between the l-MAP and d-MAP scaffolds by a two-tailed Student's t-test. b, Fabricated l- or d-hydrogels were tested for in vitro enzymolysis behaviour through exposure to a solution of collagenase I (5 U ml−1). c–f, Representative low-power views of H&E sections from healed skin 21 days after splinted excisional wounding in SKH1 mice treated with sham (c), l-MAP (d), d-MAP (e) or a 1:1 mixture of l-MAP and d-MAP (f). g–i, Histologic quantification of dermal thickness including gels (g) (mm), hair follicles (h) and sebaceous glands (i). Each point represents the average of two sections from two separate slides of one wound. Each data point represents one animal and all the analyses are by one-way analysis of variance (ANOVA) (F(3,12), 4.448 (g), 10.89 (h) and 5.074 (i); Tukey multiple comparisons tests, *P = 0.0460, **P = 0.0341 (g), *P = 0.0220, **P = 0.0133, ***P = 0.0007 (h), *P = 0.0110 (i)). j, Incisional, unsplinted wounds were created and, 28 days afterwards, the healed wounds treated with or without the different hydrogels were tested against unwounded skin in the same mouse.
The tensile strength was evaluated by tensiometry and reported as a percentage of the tensile strength of the scar tissue when compared with that of the normal skin of the same mouse. Each data point represents the average of two measurements from one wound, separate from the wounds used in b–i, with the analysis by one-way ANOVA (F(3, 20), 5.400; *P = 0.0273, **P = 0.0131). Data are plotted as a scatter plot showing the mean and s.d. Source data Full size image d-chiral crosslinker peptides enhance MAP degradation in vivo We next examined how d-MAP compares with l-MAP in vivo in a murine splinted excisional wound model 1,10 . We did not find any difference in the wound closure rate, or any increased erythema or gross signs of inflammation, in wounds treated with d-MAP, l-MAP or a 1:1 mixture of l/d-MAP at any time after treatment (days 3 and 6 after wounding are shown in Supplementary Fig. 1a). When comparing wound closure to sham treatment (no hydrogel), we found that a 1:1 mixture of l/d-MAP induced a more rapid wound closure (assessed on day 9 after wounding) than sham (Supplementary Fig. 2b), similar to previous results with l-MAP hydrogel 1 . As no differences in wound closure were noted, we next examined whether the degradation of hydrogels that contained d-amino acid crosslinkers was slowed in vivo by examining excised tissue 21 days after the wound had completely healed. Unexpectedly, histological sections of wounds treated with d-MAP or a 1:1 l/d-MAP hydrogel mixture displayed minimal to no hydrogel persistence 21 days after wounding, near the levels seen in mice not treated with hydrogel (sham), whereas wounds treated with l-MAP hydrogel displayed large amounts of remaining hydrogel (Fig. 1c–f). d-MAP hydrogels impart tissue regenerative properties Of note, the initial examination of histological sections of d-MAP and 1:1 l/d-MAP displayed a markedly different overall appearance from that of the healed sham- or l-MAP-treated wounds. Previous reports suggest that, unlike large excisional wounds in adult mice (wounds larger than 1 × 1 cm), which result in significant regenerative healing with wound-induced hair neogenesis (WIHN) 11,12,13 , wounds smaller than 1 × 1 cm in mice, like the punch biopsies performed in our studies, typically heal without regeneration of new hair and fat and, instead, form scars 12,14,15 . Despite these reports, when the correct regenerative cues are provided by wound fibroblasts, through transgenic activation of specific Hedgehog signals, small wounds can regenerate 16 . Consistent with these results, histological examination of 4 mm excisional splinted wounds in mice that did not receive hydrogel (sham) displayed the typical appearance of scar tissue with a flattened epidermis, a thinned dermis with horizontally oriented collagen bundles, vertically oriented blood vessels and a lack of hair follicles and sebaceous glands (Fig. 1c,g–i). Tissue from mice treated with the l-MAP hydrogel displayed a similar appearance, but with thicker overall tissue compared with that of sham wounds, due to the substantial residual l-MAP hydrogel (Fig. 1d,g). Within the dermis that surrounds the hydrogel, fibroblasts that secreted collagen and/or extracellular matrix and blood vessels formed between the hydrogel microparticles (Fig. 1d). Only rare hair follicles and associated sebaceous glands were observed in the wound areas (Fig. 1d,h,i).
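As a small illustration of how the tensile readout in Fig. 1j can be reduced to the reported percentage, the sketch below converts a force trace to engineering stress using the measured cross-section and expresses the scar's peak stress as a fraction of that of unwounded skin from the same mouse. This is a hedged reconstruction, not the authors' analysis: the exact yield criterion applied to the Instron data is not stated at this point in the text, so peak stress is used as a stand-in, and all numbers are hypothetical.

```python
# A minimal sketch: engineering stress from a force trace, then scar
# strength as a percentage of unwounded skin from the same mouse.
def peak_stress_kpa(force_N, width_mm, thick_mm):
    area_m2 = (width_mm * 1e-3) * (thick_mm * 1e-3)  # cross-section of the strip
    return max(force_N) / area_m2 / 1e3               # peak engineering stress, kPa

# Hypothetical force traces (N) for one mouse:
scar = [0.0, 0.4, 0.9, 1.3, 1.1]
normal_skin = [0.0, 1.2, 2.6, 3.9, 3.4]

scar_kpa = peak_stress_kpa(scar, width_mm=10, thick_mm=0.8)
skin_kpa = peak_stress_kpa(normal_skin, width_mm=10, thick_mm=0.8)
print(f"scar strength: {100 * scar_kpa / skin_kpa:.0f}% of unwounded skin")  # ~33%
```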
Remarkably, examination of histological sections of the d-MAP- or 1:1 l/d-MAP-treated tissue revealed a de novo regenerated appearance. The overlying epidermis often displayed physiological undulation, and numerous immature-appearing hair follicles were seen to span the length of the healed full-thickness injury (Fig. 1e–i). Samples treated with d-MAP or 1:1 l/d-MAP also displayed an increased skin thickness despite less hydrogel remaining in these samples (Fig. 1f). Many samples also displayed epidermal cyst formation. In samples that displayed residual hydrogel, hair follicles that directly overlaid the degrading MAP hydrogel particles were apparent (Supplementary Fig. 2c). The presence of hair follicles in SKH1 mice was suggestive of embryonic-like tissue regeneration, a phenomenon not often observed in the murine small-wound model. To further quantify tissue regeneration, we next performed tensile strength testing on unsplinted incisional wounds in SKH1 mice using a modified literature protocol 17 . We found that scar tissue from sham wounds revealed a tensile strength that was approximately 15% of that of unwounded skin from the same animal (Fig. 1i). Although the treatment of wounds with l-MAP hydrogel did not result in a significant increase in tissue tensile strength, treatment with either d- or l/d-MAP resulted in an ~80% improvement in tensile strength (Fig. 1j). Hair follicles in d-MAP-treated wounds are neogenic We next repeated wound-healing experiments in C57BL/6 (B6) mice to investigate whether the regenerative phenomenon observed in d-MAP-treated wounds was similar to WIHN. We chose sham as the control and d-MAP as the treatment, as d-MAP had shown evidence of regeneration in SKH1 mice. Similar to the sham- and l-MAP-treated wounds in SKH1 mice, the B6 mice wounds without hydrogel (sham) displayed a typical scar appearance with haematoxylin and eosin (H&E) and Masson's trichrome staining (Fig. 2a,c,e). In contrast, histological sections of the d-MAP-treated tissue revealed clear signs of WIHN. As in SKH1 mice, d-MAP-treated B6 mice wounds displayed undulations and numerous epidermal cysts under the epidermis, whereas the dermis was thicker. Importantly, many neogenic hair follicles developed in the wound (Fig. 2b,d,f). The neogenic hair follicles were in the early anagen phases with an immature appearance, yet many of them had already formed new sebaceous glands (Fig. 2b) and featured a prominent SOX9+ bulge stem cell region (Fig. 2j). In several instances, neogenic hair follicles were physically connected to epidermal cysts (a morphology not expected from pre-existing follicles). This suggests that, in d-MAP-treated wounds, epidermal cysts can be the initiation sites of de novo morphogenesis for at least some of the neogenic hair follicles (Fig. 2h). Masson's trichrome staining confirmed the presence of neogenic hair follicles within the collagen matrix of the wound bed (Fig. 2b,f). Furthermore, regenerating day 18 d-MAP-treated wounds with neogenic hair follicles lacked PLIN+ dermal adipocytes (Fig. 2h), which is consistent with the slower regeneration of neogenic adipocytes that occurs four weeks after wounding in large wound-induced WIHN 18,19 . Thus, the addition of d-MAP to normally non-regenerating 4 mm excisional wounds activates hair follicle neogenesis. Fig. 2: d-MAP hydrogel induces neogenesis of hair follicles in full-thickness skin wounds in B6 mice.
a–f, H&E (a, c, d) and Masson's trichrome staining (b, e, f) of healed 4 mm full-thickness splinted skin wounds on day 18. Control (sham-treated) wounds heal with scarring (a, c, e), whereas d-MAP-treated wounds form numerous epidermal cysts (asterisks) and, prominently, regenerate de novo hair follicles (green arrowheads) (b, d, f). In some instances, neogenic hair follicles form in close association with epidermal cysts. As compared with the normal, pre-existing anagen hair follicles at the wound edges, neogenic hair follicles display early anagen stage morphology (the wound edges in c–f are outlined by dashed lines and the d-MAP hydrogel remnants in b are marked with red arrowheads). g,h, Immunostaining for the epithelial marker KRT5 (green) and the adipocyte marker PLIN (red) reveals normal KRT5+ anagen hair follicles and many mature PLIN+ dermal adipocytes (left panels in g and h). Regeneration of new KRT5+ hair follicles (blue arrowheads in h) along with KRT5+ epidermal cysts (yellow) was observed only in d-MAP-treated wounds (right panels in g and h). No neogenic adipocytes were observed in hair-forming d-MAP-treated wounds. Blue shows DAPI (4′,6-diamidino-2-phenylindole) staining. i,j, Immunostaining for SOX9 (green) and SMA (red) reveals many SOX9+ epithelial cells within the bulge region of neogenic hair follicles in day 18 d-MAP-treated wounds (blue arrowheads in j). In contrast, in control (sham-treated) wounds that undergo scarring, the dermal wound portion contains many SOX9+ cells, many of which also co-express the contractile marker SMA (i). Expression of SMA was also seen in blood vessels in both control and d-MAP-treated samples. Scale bars, 100 μm. The images are representative of slides from four animals per group. Source data Full size image d-MAP hydrogel implants enhance myeloid cell recruitment To determine whether an enhanced immune response led to the enhanced d-MAP or 1:1 l/d-MAP degradation in the wound microenvironment, we utilized a subcutaneous implantation model, which also allowed larger amounts of hydrogel to be implanted and thus to remain present for longer than in the small excisional wound model. To test whether subcutaneous implants of the d-MAP hydrogel resulted in enhanced immune cell recruitment, we utilized immunofluorescent microscopy with AlexaFluor488-labelled MAP hydrogel. We found that implants that contained only l-MAP displayed a background level of CD11b cells within the hydrogel, as previously observed 1 , whereas d-MAP or l/d-MAP resulted in the robust accumulation of CD11b-expressing myeloid cells within and around the scaffold (Fig. 3a,b). A standard histological analysis of a repeat experiment with different formulations of subcutaneously implanted MAP hydrogel confirmed the activation of type 2 immunity, with an atypical type 2 granulomatous response dominated by the accumulation of individual macrophages within and around the d-MAP hydrogel implants, but not the l-MAP hydrogel implants (Supplementary Fig. 3a and Supplementary Discussion). Immunofluorescent staining for F4/80 and CD11b confirmed the enhanced recruitment of macrophages, without giant cell formation, in d-MAP implants (Supplementary Fig. 3b,c and Supplementary Discussion). These results confirm that d-MAP elicits a more robust immune response, and degradation by the accumulated immune cells probably contributed to the enhanced degradation of d-MAP in our previous wound experiments.
Fig. 3: Peptide recognition by pattern recognition receptors is not required for myeloid cell recruitment. a, Representative confocal immunofluorescent images of stained myeloid cells (CD11b+) within healed wounds of B6 mice in the presence of the indicated hydrogel. Scale bar, 100 μm. b, Localized immune response. Quantification of the CD11b+ cellular infiltrate in healed tissue 21 days after wounding in the presence or absence of a hydrogel. Each point represents the average of three slides for each wound. All the analyses are by one-way ANOVA (F(3,21) 41.10; ****P < 0.0001). c,d, Representative high-resolution confocal immunofluorescence imaging for CD11b, F4/80, DAPI and IL-33 from subcutaneous implants of l- or d-MAP hydrogel (c) and quantification of IL-33-producing macrophages and other myeloid cells at the hydrogel edge and core (d). n = 5 B6 mice, mean ± s.e.m., multiple t-tests adjusted for multiple comparisons using the Holm–Sidak method; **P = 0.00014. Scale bar, 100 μm. e–h, For the uncleaved peptide, murine BMDMs from B6 mice were stimulated with 500 μg ml−1 of full-length l- or d-crosslinker peptide in the presence or absence of LPS (10 ng ml−1) for 6 h. Shown are the quantitative PCR results of four inflammatory genes (Cxcl1 (e), Tnf (f), Il1b (g) and Mx1 (h) expression) for two separate experiments performed with n = 6. All the analyses are by one-way ANOVA (F(5, 30), 15.66, 17.62, 107.1 and 8.229, respectively; **P = 0.009, ****P < 0.0001). i–l, For the cleaved peptide, BMDMs were stimulated with LPS (10 ng ml−1) or cleaved d-crosslinker peptide (500 μg ml−1) that possessed an N-terminal d-amino acid (Cxcl1 (i), Tnf (j), Il1b (k) and Mx1 (l) expression). The experiment was performed in triplicate. All the analyses are by one-way ANOVA (F(2,6), 20.28, 30.86, 2.178 and 22.72, respectively). Data are plotted as a scatter plot showing the mean and s.d. a.u., arbitrary units. Source data Full size image Allergic responses and parasites can elicit a type 2 immune response, which includes atypical type 2 granulomatous responses, at least partially through interleukin (IL)-33 production by epithelial cells, recruited myeloid cells and resident macrophages 20,21,22,23 . Implanted, non-degradable microparticle-based materials elicit an IL-33-dependent type 2 innate immune response by circulating CD11b+ myeloid cells and macrophages 24 . It is possible that MAP particles could activate this same programme, especially given the atypical type 2 foreign body responses observed in d-MAP samples. Indeed, 21 days after implantation, we found similar numbers of IL-33-expressing F4/80+CD11b+ macrophages in the centre and/or core of both l- and d-MAP implants (Fig. 3c,d), consistent with both l- and d-MAP samples activating this type 2 pathway. However, there was a dramatic increase in IL-33+F4/80+ macrophages at the edges of only the d-MAP implants (Fig. 3c,d). These results confirm that the hydrogel possesses a type 2 innate 'adjuvant' effect, which may activate the adaptive immune system and contribute to the enhanced immune activation with the d-MAP hydrogel. When l-MAP scaffolds are used, the immune response remains mild as the hydrogel degrades slowly over time 25 , but the presence of d-peptide accelerates the immune-mediated degradation.
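The edge-versus-core quantification in Fig. 3c,d was performed with custom MATLAB code (described under Methods below, where the 300 μm edge band is defined). As that script is not reproduced here, the following is a rough Python analogue of the idea: split a binary hydrogel mask into an edge band and a core, then report positive-cell densities per region. The mask, cell positions and pixel scale are hypothetical placeholders, not the authors' parameters.

```python
# A simplified Python analogue of the edge/core density analysis.
import numpy as np
from scipy.ndimage import distance_transform_edt

def edge_core_density(gel_mask, cell_centroids, um_per_px, edge_um=300):
    """Positive cells per mm^2 in the edge band vs the core of the hydrogel."""
    dist_px = distance_transform_edt(gel_mask)        # distance to gel boundary
    edge = gel_mask & (dist_px * um_per_px <= edge_um)
    core = gel_mask & ~edge
    def density(region):
        area_mm2 = region.sum() * (um_per_px / 1000) ** 2
        n = sum(region[y, x] for y, x in cell_centroids)  # cells inside the region
        return n / area_mm2 if area_mm2 > 0 else float("nan")
    return density(edge), density(core)

# Hypothetical gel mask and IL-33+ cell positions (row, column):
mask = np.zeros((600, 600), bool)
mask[100:500, 100:500] = True
cells = [(120, 130), (150, 480), (300, 300), (480, 140)]
print(edge_core_density(mask, cells, um_per_px=2.0))
```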
Free d-chiral peptides avoid pattern recognition receptors We next tested whether d-peptides could directly activate innate immunity through a traditional PRR (pattern recognition receptor)-induced transcriptional response. We stimulated murine bone marrow derived macrophages (BMDMs) with l-peptide or d-peptide in the presence or absence of bacterial lipopolysaccharide (LPS), the Toll-like receptor 4 agonist that results in rapid macrophage transcriptional responses. We chose to examine genes reliably and potently induced downstream of the major signalling pathways activated by a variety of cellular insults (AP-1, MAPK, NF-κB and type I IFN) to simultaneously interrogate multiple PRR pathways 26,27,28 . To our surprise, neither l- nor d-amino acid-containing crosslinking peptides alone at high doses (1 mg ml−1) induced the expression of the pro-inflammatory genes Tnf (NF-κB dependent), Il1b (NF-κB and MAPK dependent), Cxcl2 (AP-1 dependent early response) or Mx1 (type I IFN dependent) in murine BMDMs at six hours (tmax of the gene induction; Fig. 3e–l). Additionally, neither l- nor d-peptides enhanced the ability of LPS to induce the expression of these same genes (Fig. 3e–h). Previous studies showed that peptides that contain an N-terminal d-methionine can activate the innate immune receptor formyl peptide receptor 2 and formyl peptide-like receptor 2 29,30,31 . As the cleavage of a d-amino acid peptide can result in shorter peptides that contain a d-amino acid at the N-terminus, we next wished to examine whether a peptide that corresponded to the cleaved d-peptide could activate inflammatory responses in BMDMs. Similar to the results with the intact d-peptide, a high concentration of cleaved d-peptide (1 mg ml−1) did not induce the transcription of Tnf, Il1b, Cxcl2 or Mx1 at six hours (Fig. 3i–l). As there is a very low likelihood that the cleaved d-peptide will be present at such high local concentrations within the implanted hydrogel while it is being degraded in vivo, these results show that d-chiral peptides are poor activators of a traditional PRR-mediated inflammatory response in macrophages and suggest that d-peptides may act as antigens to enhance immunity, which leads to the enhanced degradation of d-MAP. d-MAP elicits antigen-specific humoral immunity We next evaluated whether d-MAP activated adaptive immunity. The adaptive immune system recognizes non-self peptide antigens to induce cell-mediated (T-cell) and humoral (B-cell) immunity. Peptides that contain d-amino acids have been reported to activate or suppress T-cell dependent and T-cell independent adaptive immune responses 5,32 . In the context of the MAP, crosslinking peptides that are non-native may be presented to the immune system until fully degraded. d-peptides could be presented by antigen-presenting cells directly to T cells, which elicits a T-cell dependent adaptive immune response; alternatively, the presence of d-amino acid-containing peptides on the surface of a large molecule of a MAP hydrogel could directly crosslink the B-cell receptor, which leads to a T-cell independent antibody response similar to that of T-cell independent antigens. To test this hypothesis, we examined whether mice that were wounded or received subcutaneous implants of l-MAP, d-MAP or 1:1 l/d-MAP were able to develop T-helper cell dependent (IgG1 or IgG2a) or T-cell independent (IgG3) antibodies against l- or d-amino acid-containing crosslinkers 33,34,35,36 .
Indeed, regardless of whether a d-containing MAP hydrogel was applied to wounded tissue or given via subcutaneous implants, mice developed a T-cell dependent IgG1 and IgG2a response against the d-amino acid-containing peptide, but not a T-cell independent IgG3 response. These results are most consistent with a T-cell dependent immune response against d-peptides (Fig. 4a,b). IgG1 is typically associated with a Th2 'tissue repair' type response, whereas IgG2a is typically associated with a Th1 'foreign body' response that typically requires strong adjuvants to develop, depending on the strain of the mice 37,38 . The fact that anti-d-peptide-specific IgG2a was induced when the hydrogel was given to mice in a wound environment, but not when the hydrogel was given in the subcutaneous implant model, suggests that, by itself, the hydrogel does not possess sufficient adjuvant effects to induce robust Th1 responses. However, the inflammation present in the wound environment may result in a mixed Th2/Th1 response to the d-MAP (Figs. 3e and 4b). Mice that were treated with l-MAP alone did not develop antibody responses to the l-peptide. Fig. 4: d-MAP induces antibody responses and the recruitment of myeloid cells via adaptive immunity. a–f, Wound-healing model. a–c, Measurement of anti-d-specific IgG subtype antibodies (anti-d peptide IgG1 (a, *P = 0.0384, ***P = 0.0004), anti-d peptide IgG2a (b, *P = 0.0351, ***P = 0.0262) and anti-d peptide IgG3 (c, *P = 0.0396)) by enzyme-linked immunosorbent assay (ELISA) 21 days after the wound-healing experiments in SKH1 mice treated with the indicated hydrogels. d–f, Measurement of anti-l-specific IgG subtype antibodies (anti-l peptide IgG1 (d, *P = 0.0137), anti-l peptide IgG2a (e, *P = 0.0115) and anti-l peptide IgG3 (f)) by ELISA 21 days after the wound-healing experiments in SKH1 mice treated with the indicated hydrogels. Each data point represents one animal, and all the analyses in a–f are by an unpaired two-tailed t-test that compared each condition to l only. g–i, Subcutaneous injection model. Measurement of anti-d-specific IgG subtype antibodies (anti-d peptide IgG1 (g, **P = 0.0022), anti-d peptide IgG2a (h) and anti-d peptide IgG3 (i)) in Balb/c or Balb/c.Rag2−/−γc−/− mice given a subcutaneous injection of d-MAP, 21 days after injection. Each data point represents one animal, and all the analyses in g–i are by an unpaired two-tailed t-test. j–l, Representative examples of confocal immunofluorescent imaging for CD11b, DAPI and hydrogel from subcutaneous implants of l- or d-MAP hydrogel in Balb/c or Balb/c.Rag2−/−γc−/− mice (j), and quantification of total DAPI+ cells (k, *P = 0.0455, ***P = 0.0006) and CD11b+ myeloid cells (l, ****P < 0.0001). Scale bar, 200 µm. Data are plotted as a scatter plot showing the mean and s.d. Each point represents the average of three slides for each wound. All the analyses are by an unpaired two-tailed Student's t-test for the comparison indicated. OD, optical density. Source data Full size image d-MAP recruits myeloid cells via adaptive immune response Our data suggest that the activation of adaptive immune responses to d-MAP contributes to the immune infiltration and degradation of d-MAP.
To test this hypothesis further, we examined whether Balb/c.Rag2−/−γc−/− mice, which are devoid of an adaptive immune system, innate lymphoid cells and IL-2/IL-15 signalling, but possess a fully functional myeloid system, would exhibit a reduced immune infiltration 39 . Indeed, the total cellularity and specific recruitment of CD11b+ myeloid cells to d-MAP hydrogel in Balb/c.Rag2−/−γc−/− mice decreased to levels comparable to those seen with l-MAP in wild-type mice (Fig. 4k,l). d-MAP-induced skin regeneration relies on adaptive immunity To determine whether the adaptive immune response was required for the development of neogenic hair follicles, we next performed excisional splinted wounds in B6 and B6.Rag1−/− mice and examined them 25 days after wounding, with wounds left untreated (sham) or treated with the 1:1 l/d-MAP gel. Of note, in preliminary studies 4-mm punch wounds healed with extremely small scars in B6 mice, so we used a 6 mm punch in this experiment. Sham wounds in B6 mice demonstrated obvious depigmented, irregularly shaped scars, whereas scars in B6 mice treated with 1:1 l/d-MAP gel were difficult to identify visually, as they displayed hair growth over the wounds and less atrophy and/or fewer surface changes typically seen in scars (a representative example is shown in Fig. 5a, and all the wound images in Supplementary Fig. 4). Scars in sham-treated or 1:1 l/d-MAP-treated B6.Rag1−/− mice were smaller than those in sham-treated B6 mice, but were identifiable in B6.Rag1−/− mice regardless of whether the wounds were sham treated or hydrogel treated (Fig. 5a). All wound areas of the injuries (including 1:1 l/d-MAP-treated B6 wound areas) were confirmed by examining the defect on the fascial side of the tissue after the excision of skin. Histological sections of the healed skin displayed significant numbers of neogenic hairs and sebaceous glands only in wounds of wild-type mice treated with 1:1 l/d-MAP (Fig. 5b–d and Supplementary Fig. 5). Sham wounds in B6 and B6.Rag1−/− mice, and 1:1 l/d-MAP-treated wounds in B6.Rag1−/− mice, displayed prominent scars, without hairs or sebaceous glands, which confirms the requirement of an adaptive immune system for skin regeneration by a MAP gel that contains a d-peptide (Fig. 5b–d). These studies highlight that hair follicle structures can be regenerated through adaptive immune activation from MAP hydrogel scaffolds. Fig. 5: d-MAP requires an intact adaptive immunity to induce hair follicle neogenesis. a, Representative examples of gross clinical images of healed splinted excisional wounds in B6 or B6.Rag1−/− mice, taken with a digital single-lens reflex camera 17 days after sham (no hydrogel) or 1:1 l/d-MAP treatment. Scale bar, 5 mm. b, Histologic sections of healed tissue from B6 or B6.Rag1−/− mice. Scale bar, 200 μm. The white dashed lines denote the wounded area. c,d, Quantification of the average numbers of hair follicles (c) and sebaceous glands (d) from three histological sections per sample from B6 mice and B6.Rag1−/− mice. Data are plotted as a scatter plot showing the mean and s.e.m. *Two-tailed P = 0.002 by Mann–Whitney test for an interstrain/identical treatment comparison; **P = 0.0039 by a Wilcoxon test for an intrastrain/different treatment comparison. Source data Full size image Discussion In most mammals, the natural process of scar formation and tissue fibrosis is highly evolved and represents a tissue-scale attempt to restore critical barrier functions for survival.
This process, however, is ultimately a biological 'triage' that favours the rapid deposition of a fibrotic matrix to restore the barrier at the expense of the loss of function of complex tissue. In the skin, this fibrotic response results not only in a loss of functioning adnexal structures, but also in skin tissue that is more fragile and prone to reinjury. A major goal when engineering skin regeneration is to allow for the rapid restoration of barrier function while providing increased tissue tensile strength and higher tissue function. Many biomaterial-based approaches, which include the addition of growth factors and decellularized extracellular matrix constructs, display limited success in restoring function in wounds. We previously showed that the MAP scaffold can accelerate wound closure in murine wounds 1 . Our findings reported here further highlight that a modest adaptation of MAP that enhanced a type 2 innate and adaptive immune response induced skin regeneration, namely hair neogenesis and improved tensile strength (Fig. 6). This response was dependent on the generation of an adaptive immune response to d-enantiomeric peptides and occurred without the addition of stem cells, growth factors or adjuvants. Importantly, this regenerative response was decoupled from wound closure, which begins immediately, consistent with the time needed to generate an antigen-specific immune response. Fig. 6: d-MAP changes the wound fate from scar formation to regeneration by type 2 immune activation. a, Representation of the MMP cleavage sequences, amino acid chirality within the crosslinking peptides and microfluidic formation of the hydrogel microbeads that incorporate l- or d-chirality peptides. b, The use of l- or d-MAP in a wound-healing model demonstrates that both the l-MAP (green) and d-MAP (red) hydrogels fill the wound defect. Wounds that heal in the absence of a hydrogel heal with an atrophic scar and loss of tissue (top row), whereas the epidermis forms over the scaffold with both l- and d-MAP, allowing an increased dermal thickness (middle two rows). However, in the case of d-MAP, the hydrogel activates the adaptive immune system over time, which results in tissue remodelling and skin regeneration as the adaptive immune system degrades the d-MAP scaffold (bottom row). PEG, polyethylene glycol. Full size image Although adaptive immunity can contribute to fibrosis, foreign body formation and the rejection of biomaterial implants 6,7,8 , adaptive immune activation from growth factor-containing extracellular matrices can enhance muscle regeneration 8,9 . Further, other biomaterials have been created to directly activate specific components of the immune system to treat cancer as immunotherapy platforms 40,41 . In concert, these studies suggest that the role of the adaptive immune system in tissue repair is substantially more complex than previously realized. Our findings suggest that an engineered type 2 immune response to sterile, degradable microparticle-based materials can trigger regeneration rather than fibrosis, and further support a role for adaptive immune cells in restoring tissue function. Finally, we demonstrate the potential of the MAP scaffold as a potent immunomodulatory platform. Future identification of immune factors that tip the balance towards regeneration, rather than eliciting scarring or a foreign body response, may lead to improved biomaterials.
Methods l-MMP and d-MMP MAP hydrogel formation Microfluidic water-in-oil droplet generators were fabricated using soft lithography, as previously described 1 . To enable microgel formation, two aqueous solutions were prepared. One solution contained 10% w/v four-arm polyethylene glycol–vinyl sulfone (20 kDa, JenKem) in 300 mM triethanolamine (Sigma), pH 8.25, prefunctionalized with 500 µM K-peptide (Ac-FKGGERCG-NH2) (GenScript), 500 µM Q-peptide (Ac-NQEQVSPLGGERCG-NH2) and 1 mM RGD (Ac-RGDSPGERCG-NH2) (GenScript). The other solution contained either (1) an 8 mM dicysteine-modified MMP substrate (Ac-GCRDGPQGIWGQDRCG-NH2) (GenScript) with all l-chirality amino acid residues for l-MMP microgels, or (2) the same substrate with d-chirality substitution of the amino acids at the site of MMP-mediated recognition and cleavage (Ac-GCRDGPQdGIdWdGQDRCG-NH2) for d-MMP microgels. We matched the stiffness of the two hydrogels, which required minimal changes to the peptide crosslinker solution (l-MAP, 8 mM; d-MAP, 8.2 mM). The oil phase was a heavy mineral oil (Fisher) that contained 0.25% v/v Span-80 (Sigma). The two solutions were mixed in the droplet generator and pinched immediately into monodisperse droplets. Downstream of the pinching region, a second oil inlet with a high concentration of Span-80 (5% v/v) was mixed with the flowing droplet emulsion. Both aqueous solution flow rates were 0.75 µl min−1, whereas both oil solutions flowed at 4 µl min−1. The mixture was allowed to react overnight at room temperature and purified by repeated washes with an aqueous buffer of HEPES-buffered saline pH 7.4 and pelleting in a tabletop centrifuge at 18,000g for 5 min. Raw materials were purchased endotoxin free, and the final hydrogels were tested for endotoxin levels prior to implantation. Generation of MAP scaffolds from building block microgels Fully swollen and equilibrated building block microgels were pelleted at 18,000g for 5 min and the excess buffer (HEPES pH 7.4 + 10 mM CaCl2) was removed by aspiration. Subsequently, the building blocks were split into aliquots, each of which contained 50 μl of the concentrated building blocks. An equal volume of HEPES pH 7.4 + 10 mM CaCl2 was added to the concentrated building block solutions. Half of these were spiked with thrombin (Sigma) to a final concentration of 2 U ml−1 and the other half were spiked with FXIII (CSL Behring) to a final concentration of 10 U ml−1. These solutions were then well mixed and spun down at 18,000g, followed by the removal of excess liquid with a cleanroom wipe (American Cleanstat). Annealing was initiated by mixing equal volumes of the building block solutions that contained thrombin and FXIII using a positive displacement pipette (Gilson). These solutions were well mixed by pipetting up and down, repeatedly, in conjunction with stirring using the pipette tip. The mixed solution was then pipetted into the desired location (mould, well plate, mouse wound and so on) or loaded into a syringe for subcutaneous injection. The microgel fabrication was performed under sterile conditions. After particle fabrication, 20 µl of dry particles were digested in 200 µl of digestion solution (collagenase IV 200 U ml−1 + DNase I 125 U ml−1) and incubated at 37 °C for 30 min before testing. Endotoxin concentrations were determined with the Pierce LAL Chromogenic Endotoxin Quantitation Kit (Thermo Fisher Scientific) following the manufacturer's instructions.
Particle endotoxin levels were consistently below 0.2 endotoxin U ml−1. Degradation with collagenase Microgel degradability was confirmed with collagenase I. A 1:1 v/v mixture of microgels formed with the d-MMP- or l-MMP-sensitive crosslinker was diluted in collagenase I to a final concentration of 5 U ml−1 collagenase. This mixture was added to a 1 mm polydimethylsiloxane well and briefly allowed to settle. Images of the microgels were taken near the bottom of the well every 30 s for 2 h with a confocal microscope. Image analysis was carried out through a custom MATLAB script (script provided by S. C. Lesher-Perez) and ImageJ. MATLAB was used to determine the number of intact microgel spheres in each image. The previously mentioned script was applied with a minimum droplet radius of 30 pixels, a maximum droplet radius of 50 pixels and a sensitivity factor of 0.98 for the channel-separated images. Then, ImageJ was used to determine the area fraction that fluoresced for each channel and each image. The thresholding for each image was set to a minimum of 50 and a maximum of 255, and the fluorescing area fraction was recorded. Mouse excisional wound-healing model All the experiments that involved animals, animal cells or tissues were performed in accordance with the Chancellor's Animal Research Committee ethical guidelines at the University of California Los Angeles under protocol no. 10-011 (in vivo wound healing and subcutaneous implants) or no. 1999-073 (in vitro BMDM cultures). Mouse excisional wound-healing experiments were performed as previously described 1,10 . Briefly, 10-week-old female SKH1 mice (n = 6, Charles River Laboratories) or 10-week-old female B6 or B6.Rag1−/− mice (n = 4 twice, Jackson Laboratories) were anaesthetized using a continuous application of aerosolized isoflurane (1.5 vol%) throughout the duration of the procedure and disinfected with serial washes of povidone–iodine and 70% ethanol. The nails were trimmed and buprenorphine (0.05 mg ml−1) was injected intramuscularly. The mice were placed on their side and the dorsal skin was pinched along the midline. A sterile 4 mm biopsy punch was then used to create two through-and-through wounds, which resulted in four clean-cut, symmetrical, full-thickness excisional wounds on either side of the dorsal midline. A small amount of adhesive (VetBond, 3M, Inc.) was then applied to one side of a rubber splint (outer diameter, ~12 mm; inner diameter, ~8 mm) and the splint was placed centred around the wound (adhesive side down). The splint was secured with eight interrupted sutures of 5-0 non-absorbable Prolene. A second splint wrapped in Tegaderm (3M, Inc.) was attached to the initial splint via a single suture to act as a hinged cover, allowing wound imaging while acting as a physical barrier above the wound bed. After the addition of thrombin (2 U ml−1) and 10 mM CaCl2, the experimental material (20 μl of l-only MAP, d-only MAP or a 1:1 v/v mixture of l-MAP and d-MAP in HEPES-buffered saline that contained factor XIII (10 U ml−1) and 10 mM CaCl2, or no hydrogel) was then added to one of the wound beds randomly, to ensure each hydrogel treatment was applied to different regions of wounded back skin and thus limit the potential for site-specific effects. After treatment, a Tegaderm-coated splint was applied and wound sites were covered using a self-adhering elastic bandage (VetWrap, 3M, Inc.). Animals were housed individually to prevent wound manipulation.
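The degradation time series described above was scored with a custom MATLAB script (circle detection within a 30–50 pixel radius window) plus ImageJ area-fraction thresholding (50–255). Since that script is not reproduced here, the following is a simplified Python analogue of the same two readouts; the equivalent-radius criterion stands in for the original circle-detection step, and the test image is a placeholder.

```python
# A simplified Python analogue of the microgel degradation readout:
# count intact, microgel-sized objects and measure the fluorescing
# area fraction of a single channel at one time point.
import numpy as np
from skimage.measure import label, regionprops

def analyse_frame(img, thresh=50, r_min=30, r_max=50):
    mask = img > thresh                         # 50/255 threshold, as in ImageJ
    area_fraction = mask.mean()                 # fluorescing area fraction
    n_intact = 0
    for region in regionprops(label(mask)):
        r_equiv = np.sqrt(region.area / np.pi)  # equivalent circular radius (px)
        if r_min <= r_equiv <= r_max:           # keep only microgel-sized objects
            n_intact += 1
    return n_intact, area_fraction

# Hypothetical 8-bit single-channel frame from the confocal time series:
frame = (np.random.default_rng(1).random((512, 512)) * 255).astype(np.uint8)
print(analyse_frame(frame))
```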
At the culmination of the wound-healing experiment (day 21 or day 25), the mice were killed by an isoflurane overdose and cervical dislocation and imaged with a digital camera. The skin was excised and processed via either paraffin embedding for H&E or optimal cutting temperature blocks for immunofluorescence. Evaluation of wound closure Wounds were imaged daily to follow their closure. Each wound site was imaged using a high-resolution camera (Nikon Coolpix). The closure fraction was determined as described previously 1 . Briefly, closure was determined by comparing the pixel area of the wound to the pixel area within the 10 mm centre hole of the red rubber splint. Closure fractions were normalized to day 0 for each mouse and/or scaffold sample. Investigators were blinded to the treatment group identity during analysis. Wound imaging On the specified day after the wounds were created, close-up images of the wounds were taken using a Canon Powershot A2600 or a Nikon D3400 DSLR camera with an 18–55 mm lens, and were cropped to the wound area but not manipulated further. For wound closure, the area was obtained using ImageJ by a subject blinded to the treatment. Tissue collection After the wounds healed, mice were killed on the indicated day after wounding, and tissue was collected with a ~5 mm margin around the healed wound. The samples were immediately submerged in Tissue-Tek optimal cutting temperature fluid and frozen into a solid block with liquid nitrogen. The blocks were then cryosectioned with a cryostat microtome (Leica) and kept frozen until use. The sections were then fixed with 4% paraformaldehyde in 1× PBS for 30 min at room temperature, washed with 1× PBS and kept at 4 °C until stained. For the antibody production analysis, blood was harvested via cardiac puncture to obtain the serum for ELISA. Macrophage cell culture Mouse BMDMs were generated as described previously 27 . Briefly, after euthanasia, the hindlimbs were removed aseptically and the bone marrow was flushed. Bone marrow cells were cultured in CMG-conditioned complete DMEM media for 6 days. Cells were then treated with intact l- or d-peptide in ultrapure H2O at the indicated concentration in the presence or absence of LPS (10 ng ml−1). Cleaved d-peptide (with an N-terminal d-amino acid) (WdGQDRCG-NH2) was also used when indicated. Cells were harvested 6 h after treatment and the expression of cytokines and chemokines was examined by quantitative PCR using specific primers, as described previously 42 . Incisional wound model As above, 10-week-old female B6 mice (Jackson Laboratories) were anaesthetized with isoflurane. The dorsal and side skin was dehaired using electric clippers followed by Nair (Church and Dwight, Inc.), then disinfected with serial washes of povidone–iodine and 70% ethanol. The nails were trimmed to lower the incidence of splint removal, and buprenorphine was injected intramuscularly as above. An incisional 2 cm × 1 cm wound was made with a scalpel. Mice (five per group) were randomly assigned to receive 50 μl of l-MAP, d-MAP, a 1:1 v/v mixture of l-MAP and d-MAP or no hydrogel (Aquaphor, Beiersdorf Inc.). The mice were wrapped with Tegaderm followed by VetWrap, as above. Histology and analysis Samples were sectioned (6–10 μm thick), then stained with H&E or Masson's trichrome by the UCLA Tissue Procurement Core Laboratory using standard procedures. Sections were examined by a board-certified dermatopathologist (P.O.S.)
and/or an expert in hair follicle neogenesis/regeneration (M.V.P.), who were blinded to the identity of the samples, for the presence of adnexal structures in tissue sections and for dermal thickness. For enumeration, two to three tissue sections from the tissue block of each wound were examined and averaged per wound to obtain the count per unit area for each sample. Wounds were splinted to prevent contraction, and any sample with more than 50% wound closure by contraction was not included.

Tensiometry

To evaluate the tensile properties of the healed incisional wounds, tensile testing was performed on an Instron model 3342 fitted with a 50 N load cell, and data were recorded using the Instron Bluehill 3 software package. Tissue was collected from the wound site 28 days after wounding and treatment as a 2 cm × 4 cm 'dumbbell' shape (with a 1 cm centre width in the handle portion). The sample was oriented such that the healed wound spanned the entire middle section of the dog bone (the thinner 1 cm region) and the long axis of the healed wound was orthogonal to the direction of applied tension. The tissue sample was loaded into the Instron and secured with pneumatic grippers pressurized to 276 kPa. The tissue was subjected to tensile testing at an elongation rate of 5 mm min−1 and run through material failure. For each tissue sample, stress/strain curves were calculated from force/elongation curves (provided by the Instron Bluehill software) using the known cross-sectional dimensions of the 'dog bone' samples (each measured with callipers prior to placement on the Instron) and by measuring the starting distance between the pneumatic grips with callipers. The starting distance was standardized by preloading the sample to 0.5 N, followed by measurement and then running of the tensile test to failure. This analysis enabled the calculation of the yield stress, which is reported in Fig. 1j.

Subcutaneous implants of hydrogel

For subcutaneous implants, after anaesthesia, 10-week-old female Balb/c and Balb/c.Rag2−/−γc−/− mice were injected with 50 µl of l-MAP, d-MAP or a 1:1 v/v mixture of l-MAP and d-MAP (n = 5). After 21 days, the skin and subcutaneous tissue that contained the hydrogels were removed and processed for histology and immunofluorescence, and blood was collected by cardiac puncture to obtain serum for the ELISA. B6 mice were used in another batch of experiments for immunofluorescence analysis and the histology of subcutaneous implants.

Tissue section immunofluorescence, quantification of hydrogel degradation and immune infiltration

Slides that contained tissue sections (10–25 µm thickness) were blocked with 3% normal goat serum (NGS) in 1× PBS + 0.05% Tween-20 (PBST). For intracellular antigens, 0.2% Triton was added to the blocking buffer. Primary antibody dilutions were prepared in 5% NGS in 1× PBST as follows: rat anti-mouse CD11b clone M1-70 (BD Pharmingen, no. 553308) 1:100, F4/80 clone A3-1 (BioRAD, MCA497G) 1:400 and IL-33 (Abcam, ab187060) 1:200. Sections were stained with primary antibodies overnight at 4 °C and subsequently washed with 3% NGS in 1× PBST. Secondary antibodies (goat anti-rat Alexa-647; Invitrogen) were all prepared in 5% NGS in 1× PBST at a dilution of 1:500. Three 5 min washes with PBST were performed after each antibody incubation. Sections were incubated in secondary antibodies for 1 h at room temperature and subsequently washed with 1× PBST. For multicolour immunofluorescence, staining with primary and secondary antibodies for each antigen was performed in sequence.
Sections were either mounted with antifade mounting medium with DAPI (Fisher Scientific, H1200) or counterstained with 2 μg ml−1 DAPI in 1× PBST for 30 min at room temperature and then mounted in Antifade Gold mounting medium.

Computational analysis of multicolour immunofluorescence images

A MATLAB code was used for the analysis of the multicolour immunofluorescence images. The code divided the hydrogel into an edge region (300 μm from the hydrogel–tissue interface) and a core region (from the centre of the hydrogel to 200 μm from the inner boundary of the edge region). For each hydrogel subregion, the code read the CD11b and F4/80 signals and binarized each to form a mask, using the same threshold for all the samples. The code then used the nuclear stain and the IL-33 stain to identify all the nuclei and IL-33+ cells. The density of each cell type was then quantified by counting the number of nuclei and IL-33+ cells that overlapped with, or lay outside, the masks, divided by the area of the region of interest (a schematic reimplementation of this quantification is sketched after the Methods). Areas with defects caused by tissue sectioning were excluded from the analysis. Although it did not affect the code performance, the imaging conditions were kept the same across all samples.

ELISA

To assess the anti-l or anti-d antibodies, sera were collected by cardiac puncture 21 days after hydrogel application (subcutaneous implant or application to a wound). To detect the anti-l and anti-d antibodies, plates were coated with either the l-MMP or d-MMP peptide, respectively (sequences above; GenScript). Serum samples were tested at a 1:500 dilution followed by incubation with alkaline phosphatase-labelled goat anti-mouse IgG1, IgG2a or IgG3 antibodies (Southern Biotechnology Associates or BD Pharmingen) and developed with p-nitrophenyl phosphate substrate (Sigma-Aldrich). The optical density at 405 nm was read using a Spectramax i3X microplate reader (Softmax Pro 3.1 software; Molecular Devices).

Statistics and reproducibility

All the statistical analysis was performed using Prism 6 (GraphPad, Inc.) software. Specifically, a two-tailed t-test or one-way ANOVA was used to determine statistical significance, assuming an equal sample variance for each experimental group when individual groups were compared. For ANOVA, post hoc analysis with Tukey multiple comparison was used. For the histological counting and the B6 and B6.Rag1−/− sham versus 1:1 l/d-MAP analyses, a Wilcoxon signed-rank test was performed, and for B6 versus B6.Rag1−/−, the subcutaneous immunofluorescence analysis was performed with a Mann–Whitney U test. The hydrogel degradation test was performed on three separate occasions for each batch of l-MAP, d-MAP and the 50:50 mixture of l-MAP and d-MAP, for a total of nine degradation tests. In each technical replicate at least ten microgels were imaged and analysed for fluorescence intensity. The evaluation of hair neogenesis in the B6 mice, control versus d-MAP, for Fig. 2 was performed on samples from n = 4 for each group. The wound healing studies comparing wild-type to B6.Rag1−/− mice were repeated three times (n = 4 each group). In the first experiment, all the Rag1−/− mice were euthanized owing to the development of severe and worsening wound infections, and thus were not included in the final analysis.
In addition, wounds and/or scars that showed more than 50% contraction of the wound area from the underlying fascia in any group, or histological processing results that failed to identify the wound and/or scar bed (that is, the sample was cut through), were removed from the final dataset. For the histological analysis of sham versus 1:1 l/d-MAP in B6 mice, samples from three separate experiments were used (n = 9 histological samples available out of n = 12 wounds performed), whereas samples for the B6.Rag1−/− mice were obtained from the latter two experiments, in which B6 and B6.Rag1−/− mice were studied at the same time (n = 6 histological samples available out of n = 8 wounds). The findings within this article were observed in two different mouse strains (CRL-SKH and C57BL/6) that have different adnexal structures (vellus hairs only and mature and/or terminal follicles, respectively).

Reporting summary

Further information on research design is available in the Nature Research Reporting Summary linked to this article.

Data availability

The data that support the findings of this study are available from the corresponding authors upon reasonable request. Source data are provided with this paper.
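The edge/core image quantification described in the Methods (a custom MATLAB code) can be re-created in outline as follows. This is a hedged Python sketch that assumes the hydrogel outline has already been segmented as a binary mask; the pixel size, intensity thresholds and minimum blob size are placeholders, not the original script's values.

```python
# Sketch of the edge/core cell-density quantification. The 300 um edge
# band and 200 um gap follow the Methods; all numeric thresholds below
# are hypothetical stand-ins for the unpublished MATLAB parameters.
import numpy as np
from scipy import ndimage as ndi

PX_UM = 1.24  # hypothetical pixel size, um per pixel

def edge_core_regions(gel_mask, edge_um=300.0, gap_um=200.0):
    """Edge band: within 300 um of the gel-tissue interface.
    Core: from the gel centre up to 200 um inside the edge band."""
    depth = ndi.distance_transform_edt(gel_mask) * PX_UM  # distance from interface
    edge = gel_mask & (depth <= edge_um)
    core = gel_mask & (depth >= edge_um + gap_um)
    return edge, core

def cell_density(stain_img, region, thresh=40, min_px=20):
    """Count nucleus-sized blobs inside `region`, per mm^2."""
    labels, n = ndi.label((stain_img > thresh) & region)
    sizes = ndi.sum(np.ones_like(labels), labels, index=np.arange(1, n + 1))
    n_cells = int((np.asarray(sizes) >= min_px).sum())
    area_mm2 = region.sum() * (PX_UM / 1000.0) ** 2
    return n_cells / area_mm2

# myeloid_mask = binarised CD11b or F4/80 channel (same threshold for all
# samples); densities inside versus outside the mask, per subregion:
# cell_density(dapi, edge & myeloid_mask), cell_density(dapi, edge & ~myeloid_mask)
```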
Researchers at Duke University and the University of California, Los Angeles, have developed a biomaterial that significantly reduces scar formation after wounding, leading to more effective skin healing. This new material, which quickly degrades once the wound has closed, demonstrates that activating an adaptive immune response can trigger regenerative wound healing, leaving behind stronger and healthier healed skin. This work builds on the team's previous research with hydrogel scaffolds, which create a structure to support tissue growth, accelerating wound healing. In their new study, the team showed that a modified version of this hydrogel activates a regenerative immune response, which can potentially help heal skin injuries like burns, cuts, diabetic ulcers and other wounds that normally heal with significant scars that are more susceptible to reinjury. This research appears online on November 9, 2020 in the journal Nature Materials. "The body forms scar tissue as fast as possible to reduce the chance of infection, to reduce pain, and, in larger wounds, to avoid water loss through evaporation," said Maani Archang, a first author on the paper and an MD/Ph.D. student in the Scumpia and Di Carlo labs at UCLA. "It's a natural process of wound healing." Current wound-healing hydrogels available for clinical use sit on the surface of the wound, where they act as a dressing and help prevent the wound from drying out. That in turn helps the wound heal faster, generally via scar formation. In their 2015 Nature Materials paper, the research team, helmed by Duke's Tatiana Segura and UCLA's Dino Di Carlo, developed microporous annealed particle (MAP) hydrogels, which are a microparticle-based biomaterial that can integrate into the wound rather than sit on the skin's surface. The beads within the MAP gel link together but leave open spaces, creating a porous structure that provides a support for cells as they grow across the wound site. As the wound closes, the gel slowly dissolves, leaving behind healed skin. Although the MAP hydrogels allowed for rapid cellular growth and faster repair, the team noticed that the healed skin had limited complex structures like hair follicles and sebaceous glands. The team was curious whether they could alter their biomaterial to improve the quality of the healed skin. "Previously we'd seen that as the wound started to heal, the MAP gel started to lose porosity, which limited how the tissue could grow through the structure," says Don Griffin, an assistant professor at the University of Virginia who is a first author on the paper and a former postdoctoral fellow in the Segura Lab. "We hypothesized that slowing down the degradation rate of the MAP scaffold would prevent the pores from closing and provide additional support to the tissue as it grows, which would improve the tissue's quality." Rather than create an entirely new gel with new materials, the team instead focused on the chemical linker that allowed the scaffold to be naturally broken down by the body. In their original MAP gels, this chemical linker is composed of an amino acid sequence taken from the body's own structural proteins and arranged in a chemical orientation called L chirality. Because this peptide sequence and orientation is common throughout the body, this helps the gel avoid triggering a strong immune response, but it also enables ready degradation through naturally present enzymes. 
"Our body has evolved to recognize and degrade this amino acid structure, so we theorized that if we flipped the structure to its mirror image, which is D chirality, the body would have a harder time degrading the scaffold," said Segura, a professor of biomedical engineering at Duke. "But when we put the hydrogel into a mouse wound, the updated gel ended up doing the exact opposite." The updated material integrated into the wound and supported the tissue as the wound closed. But instead of lasting longer, the team discovered that the new gel had almost entirely disappeared from the wound site, leaving behind just a few particles. However, the healed skin turned out to be stronger and included complex skin structures that are typically absent in scars. After further investigation, the researchers discovered that the reason for the stronger healing—despite the lack of longevity—was a different immune response to the gel. After a skin injury, the body's innate immune response is immediately activated to ensure that any foreign substances that enter the body are quickly destroyed. If substances can escape this first immune response, the body's adaptive immune response kicks in, which identifies and targets the invading material with more specificity. Because the original MAP gel was made with the common L peptide structure, it generated a mild innate immune response. But when the team placed the reformulated gel into a wound, the foreign D chirality activated the adaptive immune system, which created antibodies and activated cells including macrophages that targeted and cleared out the gel more quickly after the wound closed. "There are two types of immune responses that can occur after injury—a destructive response and a more mild regenerative response," said Scumpia, an assistant professor in the division of dermatology at UCLA Health and the West Los Angeles VA Medical Center. "When most biomaterials are placed in the body, they are walled off by the immune system and eventually degraded or destroyed. But in this study, the immune response to the gel induced a regenerative response in the healed tissue." "This study shows us that activating the immune system can be used to tilt the balance of wound healing from tissue destruction and scar formation to tissue repair and skin regeneration," said Segura. Working with Maksim Plikus, a regenerative tissue expert at the University of California, Irvine, the team also confirmed that key structures, like hair follicles and sebaceous glands, were correctly forming over the scaffold. When the team dug into the mechanism, they found that the cells of the adaptive immune system are required for this regenerative response. As the team continues to study the regenerative immune response to their gel, they are also exploring the possibility of using the new MAP hydrogel as an immunomodulatory platform. "The team is now exploring the best way to release immune signals from the gel to either induce skin regeneration or develop the hydrogel as a vaccine platform," said Scumpia. "I am excited about the possibility of designing materials that can directly interact with the immune system to support tissue regeneration" said Segura. "This is a new approach for us."
10.1038/s41563-020-00844-w
Physics
Quantum state of single electrons controlled by 'surfing' on sound waves
Shintaro Takada et al. Sound-driven single-electron transfer in a circuit of coupled quantum rails, Nature Communications (2019). DOI: 10.1038/s41467-019-12514-w Journal information: Nature Communications
http://dx.doi.org/10.1038/s41467-019-12514-w
https://phys.org/news/2019-10-quantum-state-electrons-surfing.html
Abstract

Surface acoustic waves (SAWs) strongly modulate the shallow electric potential in piezoelectric materials. In semiconductor heterostructures such as GaAs/AlGaAs, SAWs can thus be employed to transfer individual electrons between distant quantum dots. This transfer mechanism makes SAW technologies a promising candidate to convey quantum information through a circuit of quantum logic gates. Here we present two essential building blocks of such a SAW-driven quantum circuit. First, we implement a directional coupler that allows a flying electron to be partitioned arbitrarily into two paths of transportation. Second, we demonstrate a triggered single-electron source enabling synchronisation of the SAW-driven sending process. Exceeding a single-shot transfer efficiency of 99%, we show that a SAW-driven integrated circuit is feasible with single electrons on a large scale. Our results pave the way to perform quantum logic operations with flying electron qubits.

Introduction

DiVincenzo's criteria for realising a quantum computer address the transmission of quantum information between stationary nodes 1 . Several approaches have demonstrated successful transmission of quantum states in solid-state devices, such as in quantum dot (QD) arrays 2 , 3 , 4 , 5 , coupled QDs in quantum Hall edge channels 6 or microwave-coupled superconducting qubits 7 , 8 . In semiconductor heterostructures, surface acoustic waves (SAWs) offer a particularly interesting platform to transmit quantum information. Thanks to the shallow electric potential modulation on a piezoelectric substrate, a SAW forms a train of moving QDs along a depleted transport channel. This SAW train makes it possible to drag single charge carriers from one side of such a quantum rail to the other. Employing stationary QDs as electron source and receiver, a single electron has been sent back and forth along several-micrometre-long tracks with a transfer efficiency of about 92% 9 , 10 . Recently, SAW-driven transfer of individual spin-polarised electrons has been reported 11 . These advances support the idea of a SAW-driven quantum circuit enabling the implementation of electron-quantum-optics experiments 12 , 13 , 14 and quantum computation schemes at the single-particle level 15 , 16 , 17 , 18 , 19 . The core of such a quantum circuit is a tunable beam-splitter permitting the coherent partitioning and coupling of single flying electrons. In the past, coherent quantum phenomena such as the Hanbury Brown–Twiss or the Hong–Ou–Mandel effect have been observed by analysing fluctuations in the current through a beam-splitter structure 20 , 21 . Inspired by these experiments, a refined beam-splitter geometry has been developed to demonstrate the basic principles of flying-charge-qubit manipulation in a Mach–Zehnder interferometry set-up with a continuous stream of ballistic electrons 22 , 23 . This progress moreover opened the way for precise transmission-phase measurements of QD states 24 , 25 , 26 and detailed studies of quantum phenomena such as the Kondo effect 27 , 28 . Considering the coherence times in stationary charge 29 , 30 , 31 , 32 or spin qubits 33 , 34 , 35 , it should be possible to use a surface-gate-defined beam-splitter component to implement quantum logic gates in GaAs-based heterostructures for solitary flying electron qubits. First steps in this direction have already been achieved via the demonstration of electron-quantum-optics experiments such as Hong–Ou–Mandel interference 12 , 36 or quantum state tomography 37 , 38 , 39 .
To perform quantum logic operations 40 with a solitary flying electron qubit that is defined via charge or spin, besides coherent propagation of the electron wave function and single-shot detection, it will further be necessary to establish an experimental framework allowing adiabatic transport of the respective two-level system. Owing to the electrostatic isolation from the Fermi sea, SAW-driven single-electron transport is promising for demonstrating quantum logic operations with a flying electron qubit in a beam-splitter set-up. In this work we investigate the feasibility of such a beam-splitter set-up for SAW-driven single-shot transfer of a solitary electron. For this purpose, we couple a pair of quantum rails by a tunnel-barrier and partition an electron in flight into the two output channels of the circuit. Modelling the experimental results of this directional-coupler operation with quantum mechanical simulations, we deliver insight into the quantum state of the SAW-transported electron and provide a clear route to maintain adiabatic transport along a tunnel-coupled region of quantum rails. In order to realise quantum logic gates, where a pair of electrons is made to interact in flight, it is further necessary to synchronise the sending process. For this purpose, we demonstrate a SAW-driven single-electron source that is triggered by a voltage pulse on a timescale of picoseconds.

Results

A sound-driven single-electron circuit

The sample is realised via surface electrodes forming a depleted potential landscape in the two-dimensional electron gas (2DEG) of a GaAs/AlGaAs heterostructure. An interdigital transducer (IDT) is used to send a finite SAW train towards our single-electron circuit as shown schematically in Fig. 1a. A scanning-electron-microscopy (SEM) image of the investigated single-electron circuit is shown in Fig. 1b. The device consists of two 22-µm-long quantum rails that are coupled along a region of 2 µm by a tunnel-barrier, which is defined by a 20-nm-wide surface gate. The SAW train allows the transport of a single electron from one gate-defined QD (source) to another stationary QD (receiver) through the circuit of coupled quantum rails (QR). Figure 1c shows a zoomed view of the lower receiver QD with indications of the electrical connections. To detect the presence of an electron, a quantum point contact (QPC) is placed next to each QD. By biasing this QPC at a sensitive working point, an electron leaving or entering the QD can be detected by a jump in the current \({I}_{{\rm{QPC}}}\) 41 .

Fig. 1 Sound-driven circuit of coupled quantum rails. a Schematic of the experimental set-up. An interdigital transducer (IDT) launches a SAW train towards the single-electron circuit, which is realised via metallic surface gates in a GaAs/AlGaAs heterostructure. b SEM image of the quantum rails (QR) with indications of the transport paths, U and L, and the voltages to control the coupling region. c SEM image of the lower receiver quantum dot (QD) with indication of the coupled quantum rail (QR) and the close-by quantum point contact (QPC). d Jumps in QPC current, \({I}_{{\rm{QPC}}}\) , at the upper receiver QD from a thousand SAW-driven single-shot transfers with (red) and without (grey) initial loading of a solitary electron at the source QD.

Transfer efficiency

Let us first quantify the efficiency of SAW-driven single-electron transfer along a single quantum rail.
For this purpose, we decouple the two transport channels by setting a high tunnel-barrier potential using a gate voltage of \({V}_{{\rm{T}}}=-1.2\ {\rm{V}}\) . To quantify the errors of loading, sending and catching, we repeat each SAW-driven transfer sequence with a reference experiment where we initially do not load an electron at the source QD. Figure 1d shows the jump in QPC current, \(\Delta {I}_{{\rm{QPC}}}\) , after SAW transmission at the upper receiver QD for an example set of a thousand single-electron transfer experiments in an optimised configuration. The grey data points stem from the reference experiments without initial loading at the source QD. The distinct peaks in the histograms of the events with (red) and without (grey) initial loading show that the presence of an electron in the QD is clearly distinguishable. Analysing 70,000 successive experiments of this kind in a single optimised configuration of the quantum rail, we quantify the efficiency of SAW-driven single-electron transport. Thanks to the low error rates of loading (0.07%) and catching (0.18%), we deduce a transfer efficiency along our 20-µm-long quantum rail of 99.75%. A similar single-shot transfer efficiency has recently been obtained with single-electron pumps emitting high-energy ballistic electrons 42 .

Partitioning an electron in flight

Having established highly efficient single-electron transport, we now couple the two channels to partition an electron in flight between the two quantum rails. The aim of this directional coupling is to prepare a superposition state of a flying electron qubit. We find that we can finely control the partitioning of the electron by detuning the double-well potential as indicated in Fig. 2a, b. To achieve this effect, we sweep the voltages applied to the side electrodes of the coupling region, \({V}_{{\rm{U}}}\) and \({V}_{{\rm{L}}}\) , in opposite directions while keeping \({V}_{{\rm{T}}}\) constant. With a potential detuning, \(\Delta ={V}_{{\rm{U}}}-{V}_{{\rm{L}}}=0\) V, the quantum rails are aligned in electric potential. Setting a voltage configuration where \(\Delta \ < \ 0\) , the potential of the lower quantum rail (L) is decreased with respect to the upper path (U). For \(\Delta \,> \,0\) , the situation is reversed. Deducing the transfer probabilities to the receiver QDs from a thousand single-shot experiments per data point, we measure the partitioning of the electrons for different values of \(\Delta\) as shown in Fig. 2c. Here, we sweep \({V}_{{\rm{U}}}\) and \({V}_{{\rm{L}}}\) in opposite directions from \(-1.26\ {\rm{V}}\) to \(-0.96\ {\rm{V}}\) while keeping \({V}_{{\rm{T}}}=-0.75\ {\rm{V}}\) . The data show a gradual transition of the electron transfer probability from the upper (U) to the lower (L) detector QD while the total transfer efficiency stays at \(99.5\pm 0.3 \%\) .

Fig. 2 Directional-coupler operation. a Schematic slices along the double-well potential, \(U\) . The horizontal lines represent the eigenstates in the moving QD, whereas the grey shading of the energy levels indicates the exponentially decreasing occupation. b Schematic showing the QDs that are formed by the SAW in the coupling region with additional indications of the surface gates and the transport paths. The black vertical bar indicates the positions of the aforementioned potential slices. c Probability, \(P\) , to end up in the upper (U) or lower (L) quantum rail for different values of potential detuning, \(\Delta\) .
The lines show a fit by a Fermi function providing the scale parameter, \(\sigma\) . d Transition widths, \(\sigma\) , for different values of the tunnel-barrier voltage, \({V}_{{\rm{T}}}\) . The line shows the course of a stationary, one-dimensional model of the partitioning process.

An interesting feature of the observed probability transition is that it follows the course of a Fermi–Dirac distribution:

$$P_{\mathrm{U}}(\Delta )\approx \frac{1}{\exp (-\Delta /\sigma )+1}$$ (1)

Fitting the experimental data with such a function (see lines in Fig. 2c), we can quantify the width of the probability transition via the scale parameter, \(\sigma\) . To test the dependence of the directional-coupler transition on the different properties of the device, we experimentally investigated whether the width of the probability transition changes as we sweep the gate voltage configurations on different surface electrodes of the nanostructure. We find a significant narrowing of the probability transition (see Fig. 2d) as we increase the tunnel-barrier potential.

The role of excitation

To obtain a better understanding of our experimental observations, we first investigate the partitioning process by means of a stationary model. We consider a one-dimensional cut of the double-well potential in the tunnel-coupling region. In this region, we have a sufficiently flat potential landscape, \(U({\boldsymbol{r}},t)\approx U(y)+{U}_{{\rm{SAW}}}(x,t)\) , such that the eigenstate problem becomes separable in the \(x\) and \(y\) coordinates. The electronic wave function \({\phi }_{i}(y)\) along the transverse \(y\) direction satisfies the one-dimensional Schrödinger equation:

$$-\frac{\hslash ^{2}}{2m^{*}}\frac{\partial ^{2}\phi _{i}(y)}{\partial y^{2}}+U(y)\,\phi _{i}(y)=E_{i}\,\phi _{i}(y)$$ (2)

where \(U(y)\) is the electrostatic double-well potential for a given set of surface-gate voltages \({V}_{{\rm{U}}}\) , \({V}_{{\rm{L}}}\) and \({V}_{{\rm{T}}}\) , and \({m}^{* }\) indicates the effective electron mass in a GaAs crystal. Here, we obtain \(U(y)\) for the specific geometry of the presently investigated device by solving the corresponding Poisson problem 43 , 44 . To obtain the probability of finding the electron in the upper or lower potential well, we can now simply sum up the contributions of the wave function in the eigenstates for the respective region of interest. For the upper quantum rail, we integrate the modulus squared of the wave function over the spatial region of the upper quantum rail:

$$P_{\mathrm{U}}=\sum _{i}p_{i}\int _{y>0}\left|\phi _{i}(y)\right|^{2}\,{\rm{d}}y$$ (3)

where \({p}_{i}\) is the occupation of the eigenstate \({\phi }_{i}\) . For a fixed tunnel-barrier height, we can detune the double-well potential by varying \(\Delta\) , as in the experiments. It is now straightforward to calculate the directional-coupler transition for the experimental setting with any given occupation of the eigenstates. Let us first consider the hypothetical situation where only the ground state is occupied. We evaluate Eq. ( 3 ) with mere ground state occupation ( \({p}_{0}=1\) ) and fixed barrier potential ( \({V}_{{\rm{T}}}=-0.7\ {\rm{V}}\) ) for different values of the potential detuning, \(\Delta\) , which are changed as in the experiment. Doing so, we obtain a probability transition with the shape of the aforementioned Fermi–Dirac distribution.
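A minimal numerical sketch of this stationary model (Eqs. (2) and (3)) is given below: a finite-difference Hamiltonian on a one-dimensional grid is diagonalised and the eigenstate densities are integrated over y > 0. The quartic double-well parameterisation and the linear detuning term are illustrative stand-ins for the potential obtained from the Poisson solver.

```python
# Sketch of the stationary 1D model: diagonalise a finite-difference
# Hamiltonian for a detuned double well and evaluate Eq. (3).
# Well depth, width and detunings are illustrative assumptions.
import numpy as np

HBAR = 1.054571817e-34              # J s
MSTAR = 0.067 * 9.1093837015e-31    # GaAs effective mass, kg
MEV = 1.602176634e-22               # J per meV

y = np.linspace(-200e-9, 200e-9, 801)   # transverse coordinate, m
dy = y[1] - y[0]

def p_upper(delta_meV, barrier_meV=10.0, width=80e-9, p=None):
    """Probability of ending in the upper rail (y > 0), Eq. (3)."""
    # Quartic double well plus a linear tilt standing in for Delta.
    U = barrier_meV * MEV * ((y / width) ** 2 - 1.0) ** 2 \
        - 0.5 * delta_meV * MEV * (y / width)
    t = HBAR**2 / (2 * MSTAR * dy**2)        # kinetic hopping energy
    H = np.diag(U + 2 * t) - t * np.eye(y.size, k=1) - t * np.eye(y.size, k=-1)
    E, phi = np.linalg.eigh(H)               # Dirichlet boundaries
    phi = phi / np.sqrt(dy)                  # normalise: sum |phi|^2 dy = 1
    if p is None:                            # ground-state occupation only
        p = np.zeros(E.size)
        p[0] = 1.0
    dens = (np.abs(phi) ** 2 * p).sum(axis=1)
    return dens[y > 0].sum() * dy

for d in (-0.2, -0.05, 0.0, 0.05, 0.2):      # detuning in meV
    print(d, p_upper(d))
```

With only the ground state occupied, the computed transition is extremely sharp, in line with the microvolt-scale width discussed next; passing an exponential occupation vector `p` (in the spirit of Eq. (4) below) broadens it.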
Assuming ground state occupation in the double-well potential, we obtain, however, an extremely abrupt transition in transfer probability, with a width, \(\sigma\) , of the order of several microvolts, which is much smaller than in our experiment. Let us now investigate how the situation changes as we successively populate excited eigenstates of the double-well potential. For this purpose we define the occupation of the eigenstates, \({\phi }_{i}\) , with eigenenergies, \({E}_{i}\) , via an exponential distribution:

$$p_{i}\propto \exp \left(-\frac{E_{i}-E_{0}}{\varepsilon }\right)$$ (4)

where \(\varepsilon\) is a parameter determining the occupation of higher energy eigenstates. This approach allows us to maintain the course of a Fermi distribution as we successively occupy excited states. Increasing the occupation parameter \(\varepsilon\) , we find a broadening of the probability transition. For \(\varepsilon =3.5\ {\rm{meV}}\) we obtain simulation results showing very good agreement with the experimental data. Keeping \(\varepsilon\) constant, the one-dimensional model follows the experimentally observed transition width, \(\sigma\) , over a wide range of \({V}_{{\rm{T}}}\) as shown by the line in Fig. 2d. Note, however, that \(\varepsilon\) only provides a rough estimate for the excitation energy that is present in our experiment, owing to the uncertainties that enter the model via the potential calculation. The model shows that the width of the directional-coupler transition, \(\sigma\) , reflects the occupation of excited states and thus indirectly the confinement in the moving QDs that are formed by the SAW along the tunnel-coupled quantum rails. Our analysis of the experimental data shows that the flying electron is significantly excited as it propagates through the coupling region of the present circuit. To find possible sources of charge excitation, we employed a more elaborate model to simulate the time-dependent SAW-driven propagation of the electron along different sections of our beam-splitter device 45 . For this purpose, we superimpose the static, two-dimensional potential landscape, \(U({\boldsymbol{r}})\) , with the dynamic modulation of a SAW train, \({U}_{{\rm{SAW}}}(x,t)\) , that we estimate from Coulomb-blockade measurements. Simulating the entrance of a flying electron from the injection channel into the tunnel-coupled region, we find significant excitation of the flying electron into higher energy states. To quantify adiabatic transport of the flying charge qubit, we define the qubit fidelity, \(F\) , as the projection of the electron wave function onto the two lowest eigenstates of the moving QD potential that is formed by the SAW along the coupled quantum rails. Figure 3a shows the evolution of the qubit fidelity, \(F\) , of a flying electron state that propagates along the tunnel-coupled region for different values of the peak-to-peak SAW amplitude, \(A\) . For the present experiment, we estimate \(A\) as 17 meV. For this value (red solid line), the simulation data show an abrupt reduction of the qubit fidelity, \(F\) , due to the aforementioned excitation of the SAW-transported electron at injection from a single quantum rail into the tunnel-coupled region. In agreement with the stationary, one-dimensional model that we applied before, the coupling into higher energy states leads to a spreading over both sides of the double-well potential, as shown in Fig. 3b and Supplementary Movie 1. The simulation thus reveals a major source of excitation.
When the electron passes from the strongly confined injection channel into the wide double-well potential, it experiences an abrupt reconfiguration of the eigenstates in the moving QD, which causes Landau–Zener transitions into higher energy states.

Fig. 3 Time-dependent simulation of electron propagation. a Course of the qubit fidelity, \(F\) , for SAW-driven single-electron transport along the coupling region for different values of SAW amplitude, \(A\) . b Trace of the electron wave function, \(\Psi\) , along the coupled quantum rails for \(A=17\ {\rm{meV}}\) at selected times, \(t\) , indicated via the vertical dashed lines. The grey regions indicate the surface gates. c Trace of \(\Psi\) for \(A=45\ {\rm{meV}}\) .

Towards adiabatic transport

Let us now investigate whether we can reduce the probability of charge excitation by increasing the longitudinal confinement via the SAW amplitude. For \(A=30\ {\rm{meV}}\) —see the red dashed line in Fig. 3a—charge excitation is already strongly mitigated. The qubit fidelity nevertheless vanishes in this case as well, since the electron still occupies low-energy states above the two-level system we are striving for. Despite the non-adiabatic transport, we can already recognise coherent tunnel oscillations when looking at the trace of the wave function, as shown in Supplementary Movie 2. This shows that excited electron states can also undergo coherent tunnelling processes, as previously anticipated in magnetic-field-assisted experiments on continuous SAW-driven single-electron transport through a quantum rail that is tunnel-coupled to an electron reservoir 46 . Increasing the SAW amplitude further to \(A=45\ {\rm{meV}}\) (blue solid line), the transport of the electron becomes nearly adiabatic and clear coherent tunnel oscillations occur, as shown in Fig. 3c and Supplementary Movie 3. The simulations show that stronger SAW confinement can indeed prevent charge excitation and maintain adiabatic transport. In experiment, one can increase the SAW confinement in several ways, such as reduced attenuation of the IDT signal, longer IDT geometries, impedance matching or the implementation of more advanced SAW generation approaches 47 , 48 , 49 . We therefore anticipate the experimental observation of coherent tunnel oscillations in follow-up investigations.

Triggering single-electron transfer

Once adiabatic single-electron transport is achieved, a SAW train could also be employed to couple a pair of flying electrons in a beam-splitter set-up. In the long run, this coupling could enable entanglement of single flying electron qubits through their Coulomb interaction 14 or spin 15 . For this purpose, electrons must be sent simultaneously from different sources in a specific position of the SAW train. Let us now investigate whether we can achieve such synchronisation by using a fast voltage pulse as a trigger for the sending process with the SAW 9 . After loading an electron from the reservoir, we bring the particle into a protected configuration where it cannot be picked up by the SAW. To load the electron into a specific minimum of the SAW train, we then apply a voltage pulse at the right moment to the plunger gate of the QD, as schematically indicated in Fig. 4a, b.

Fig. 4 Pulse-triggered single-electron transfer. a SEM image of the source quantum dot (QD) showing the pulsing gate highlighted in yellow.
A fast voltage pulse on this gate allows one to trigger SAW-driven single-electron transport along the quantum rail (QR) as schematically indicated. b Measurement scheme showing the modulation, \(\delta U\) , of the electric potential at the stationary source QD: the delay of a fast voltage pulse, \(\tau\) , is swept along the arrival window of the SAW. c Measurement of the probability, \(P\) , to transfer a single electron with the SAW from the source to the receiver QD for different values of \(\tau\) . d Zoom in on a time frame of four SAW periods, \({T}_{{\rm{SAW}}}\) .

To demonstrate the functioning of this trigger, we use a very short voltage pulse of 90 ps corresponding to a quarter SAW period 50 . Sweeping the delay of this pulse, \(\tau\) , over the arrival window of the SAW at the source QD, we observe distinct fringes of transfer probability, as shown in Fig. 4c and in more detail in Fig. 4d. The data show that the fringes are exactly spaced by the SAW period. The periodicity of the transmission peaks indicates that there is a particular phase along the SAW train where a picosecond pulse can efficiently transfer an electron from the stationary source QD into a specific SAW minimum. As the voltage pulse overlaps in time with this phase, the sending process is activated and the transfer probability rapidly rises from 2.7 ± 0.5% to 99.0 ± 0.4%. The finite background transfer probability is due to the limited pulse amplitude in the present set-up. The envelope of the transfer fringes is consistent with the expected SAW profile. Comparing the directional-coupler measurement with and without triggering of the sending process, we find no change in the transition width, which indicates that excitation at the source QD is comparatively small or absent. By reducing pulse attenuation along the transmission lines and optimising the QD structure, we anticipate further enhancements in the efficiency of the voltage-pulse trigger. The present pulsing approach allows us to synchronise the SAW-driven sending process along parallel quantum rails and thus represents an important milestone towards the coupling of single flying electrons.

Discussion

A flying qubit architecture is an appealing idea to transfer and manipulate quantum information between stationary nodes of computation 1 , 14 , 16 . Thanks to the isolation during transport and the availability of highly efficient single-electron sources and receivers, SAWs represent a particularly promising candidate to deliver the first quantum logic gate for electronic flying qubits 14 , 22 , 23 . Here, we have presented important milestones towards this goal. First, we demonstrated the capability of the present device to partition a single electron arbitrarily from one quantum rail into the other while maintaining a transfer efficiency above 99%. Employing quantum mechanical simulations, we reproduced the experimentally observed directional-coupler transition and identified charge excitation as the remaining challenge for adiabatic transport through the coupling region of a SAW-driven single-electron circuit. Simulating SAW-driven electron propagation through the coupling region, we identified the central source of excitation and provided a clear route to remedy this problem in future investigations. We anticipate that an optimised surface-gate geometry as well as stronger SAW confinement 47 , 48 , 49 will allow coherent manipulation of a single electron in a true two-level state 29 , 30 , 31 , 32 .
Furthermore, we demonstrated a powerful tool to synchronise the SAW-driven sending process along parallel quantum rails using a voltage-pulse trigger. With this achievement, we fulfil an important requirement for coupling a pair of single electrons in a beam-splitter set-up. Our results pave the way for electron-quantum-optics experiments 14 and quantum logic gates with flying electron qubits 40 at the single-particle level.

Methods

Experimental set-up

The experiments are performed at a temperature of about \(10\ {\rm{mK}}\) using a \({}^{3}{\rm{He}}{/}^{4}{\rm{He}}\) dilution refrigerator. The present device is realised by a Schottky gate technique in a two-dimensional electron gas (2DEG) of a GaAs/AlGaAs heterostructure. The 2DEG is located at the GaAs/AlGaAs interface \(100\ {\rm{nm}}\) below the surface and has an electron density of \(n\approx 2.7\times 1{0}^{11}\ {{\rm{cm}}}^{-2}\) and a mobility of \(\mu \approx 1{0}^{6}\ {{\rm{cm}}}^{2}\ {{\rm{V}}}^{-1}{{\rm{s}}}^{-1}\) . It is formed by a Si- \(\delta\) -doped layer that is located \(55\ {\rm{nm}}\) below the surface. All nanostructures are realised by Ti/Au electrodes (Ti: 5 nm; Au: 20 nm) that are written by electron-beam lithography on the surface of the wafer. Applying a set of negative voltages on these surface electrodes, we deplete the underlying 2DEG and form the potential landscape defining our beam-splitter device. Along the quantum rails there are thus no electrons present, and the SAW-transported electron is completely decoupled from the Fermi sea. The interdigital transducer (IDT) that we employ as the source of a SAW train is placed outside of the mesa—about \(1.6\ {\rm{mm}}\) beside the single-electron circuit. It contains 120 interdigitated double fingers with a finger spacing and width of \(125\ {\rm{nm}}\) . The wavelength of the generated SAW is thus 1 µm. The aperture of the IDT fingers is 50 µm. We operate the device with a pulse-modulated, sinusoidal voltage signal oscillating at the IDT's resonance frequency of \(2.77\ {\rm{GHz}}\) . In all of the present experiments, the duration of each oscillation pulse on the IDT was set to \(30\ {\rm{ns}}\) . The power on the signal generator was set to 25 dBm. We attenuate the IDT signal along the transmission line at two temperature stages by 8 dB in total to mitigate the injection of thermal noise. The propagation of evanescent electromagnetic waves from the IDT is suppressed by grounded metal shields. The jitter of the voltage pulse that we send from an arbitrary waveform generator (AWG) to the plunger gate of the source QD was measured as about 6.6 ps (FWHM) with respect to a fixed phase of the SAW burst.

SAW-driven single-electron transfer

To execute the sound-driven transport of a single electron, we perform a sequence of voltage movements on the surface gates defining the source and receiver QDs. In each single-shot-transfer experiment, we perform three steps before launching the SAW train: initialisation, loading and preparation to send. These steps are executed by fast voltage changes on the QD gates R and C, as indicated in the SEM image shown in Fig. 5a. In between each step we go to a protected measurement configuration (M) and read out the current through the quantum point contact (QPC), as indicated in the charge-stability diagram shown in Fig. 5b. Comparing the QPC current before and after each step, we can deduce whether an electron entered or left the QD.

Fig. 5 Preparation of SAW-driven single-electron transfer.
a SEM image of a source QD with indication of the surface electrodes. b Charge-stability diagram showing example source-quantum-dot configurations for QPC measurement (M), initialisation (I), single-electron loading (L) and sending (S). Here, we plot \(\partial {I}_{{\rm{QPC}}}/\partial {V}_{{\rm{R}}}\) . The data show abrupt jumps in QPC current indicating charge-degeneracy lines of the QD. c Loading map showing configurations I and L. Each pixel represents the difference in QPC current, \(\Delta {I}_{{\rm{QPC}}}\) , before and after visiting the respective loading configuration. The colour scale reflects the electron number in the QD.

To initialise the system, we remove any electrons present from all QDs by visiting configuration I. We then load a single electron at the source QD by going to configuration L. Figure 5c shows jumps in QPC current at different loading configurations (L) that are visited after initialisation via voltage variations from the measurement position, M. The data show that, depending on the voltage variations of the reservoir ( \(\delta {V}_{{\rm{R}}}\) ) and coupling gate ( \(\delta {V}_{{\rm{C}}}\) ), different numbers of electrons can be efficiently loaded into the source QD. Having accomplished the loading process, we go to a sending configuration (S) where the electron can be picked up by the SAW. At the same time as we prepare the source QD for sending, we bring the receiver QD into a configuration allowing the electron to be caught. We then launch a SAW train to execute the transfer of the loaded electron. Comparing the QPC currents before and after the SAW burst, we can assess whether the electron was successfully transported.

Estimation of SAW amplitude

To estimate the amplitude of the potential modulation that is introduced by the SAW, we investigate the broadening of discrete energy levels in QDs by continuous SAW modulation 51 . Owing to the piezoelectric coupling, a SAW passing through a quantum dot leads to a periodic modification of the QD's chemical potential. This causes the discrete energy states of the quantum dot to oscillate with respect to the bias window. During this process—as for a classical oscillator—the quantum dot states remain most of the time close to the turning points of the oscillation. Repeating Coulomb-blockade-peak measurements with increased SAW amplitude, the conductance peaks split according to the amplitude of the periodic potential modulation. The two split lobes indicate the two energies at which a QD state stays on average most of the modulation time. Consequently, one can estimate the peak-to-peak amplitude of the SAW-introduced potential modulation by determining the energy difference between these two lobes of the split peak. In order to obtain the peak-to-peak amplitude in energy units, the voltage-to-energy conversion factor \(\eta\) has to be known. We determine \(\eta\) from Coulomb-diamond measurements, an example of which is shown in Fig. 6a. Knowing the voltage-to-energy conversion factor, \(\eta\) , we can use the SAW-introduced broadening of the Coulomb-blockade peaks to deduce the amplitude of the SAW modulation, \(A\) , in energy. Figure 6b shows an example data set showing the broadening of Coulomb-blockade peaks with increasing transducer power, \(P\) . Attenuation along the transmission line is not taken into account here. The splitting of resonances in Fig. 6b is indicated by the dashed lines.
At \(P\approx 1\) dBm the side peaks of two neighbouring Coulomb-blockade peaks start to overlap. At this intersection position, the peak-to-peak amplitude of the SAW is equal to the charging energy of the quantum dot, \({E}_{{\rm{C}}}\) . The peak-to-peak amplitude of the SAW-introduced potential modulation, \(A\) , is related to the transducer power, \(P\) , by the relation:

$$A\,[\mathrm{eV}]=2\cdot \eta \cdot 10^{\frac{P[\mathrm{dBm}]-P_{0}}{20}},$$ (5)

where \({P}_{0}\) is a fit parameter accounting for power losses. The voltage-to-energy conversion factor, \(\eta ={E}_{{\rm{C}}}/{V}_{{\rm{C}}}\) , is determined by the aforementioned Coulomb-diamond measurements.

Fig. 6 Estimation of the SAW amplitude. a Example Coulomb-diamond measurement allowing the extraction of the voltage-to-energy conversion factor \(\eta ={E}_{{\rm{C}}}/{V}_{{\rm{C}}}\) . b Broadening of the corresponding Coulomb-blockade peaks with increasing transducer power, \(P\) . c Amplitude of the SAW-introduced potential modulation. The dashed line shows a fit of Eq. ( 5 ) to the experimentally obtained data. The confidence region (grey area) is roughly estimated from variations of measurements on four QDs on a similar sample. The plot shows an extrapolation of this region to the typically employed transducer power of 25 dBm. The inset shows a zoom into the data points.

Since these measurements are performed in continuous-wave mode, we trace the broadening of the Coulomb-blockade peaks only up to a transducer power of −5 dBm in order to avoid unnecessary heating. Fitting Eq. ( 5 ) to the data via the parameter \({P}_{0}\) , we estimate the SAW amplitude for the typically applied transducer power of 25 dBm with 30 ns pulse modulation. Figure 6c shows the SAW amplitude data (zoom in inset) and the extrapolation to 25 dBm (grey area)—the value that was applied in the single-shot-transfer experiments with the present beam-splitter device. The extrapolation indicates a SAW-introduced peak-to-peak modulation of about \((17\pm 8)\) meV.

Potential simulations

Knowing the sample geometry, the electron density in the 2DEG and the set of applied voltages, we calculate the electrostatic potential of the gate-patterned device using the commercial Poisson solver NextNano 43 . We assume a frozen charge layer and deep-boundary conditions 44 . The central premise is that the electron density in the 2DEG is constant, with and without a grounded surface electrode on top of the GaAs/AlGaAs heterostructure. Employing this approach, we deduce a donor concentration of about \(1.6\cdot 1{0}^{10}\ {\text{cm}}^{-2}\) in the doping layer and a surface charge concentration of about \(1.3\cdot 1{0}^{10}\ {\text{cm}}^{-2}\) . With this information, we can approximately calculate the potential landscape below the surface gates in the experimentally studied voltage configuration. The accuracy of the calculated potential landscape is sufficient to draw qualitative conclusions and to perform an order-of-magnitude discussion.

Time-dependent simulations

To simulate the evolution of the SAW-transported electron state, we consider the full two-dimensional potential landscape, \(U({\boldsymbol{r}},t)\) , of our beam-splitter device with a 17 meV peak-to-peak potential modulation of the SAW having a wavelength of 1 µm.
We calculate the evolution of the particle, described via the electron wave function, \(\psi ({\boldsymbol{r}},t)\) , by solving the time-dependent Schrödinger equation:

$$i\hslash \frac{\partial \psi ({\boldsymbol{r}},t)}{\partial t}=\hat{H}\psi ({\boldsymbol{r}},t)=\left[-\frac{\hslash ^{2}}{2m^{*}}\nabla ^{2}+U({\boldsymbol{r}},t)\right]\psi ({\boldsymbol{r}},t)$$ (6)

where \(\hat{H}\) denotes the Hamiltonian, \(U({\boldsymbol{r}},t)\) is the two-dimensional dynamic potential encountered by the electron and \({m}^{* }\) is the effective electron mass in a GaAs crystal. We numerically solve the equation using the finite-difference method 45 and discretise the wave function both spatially and in time. In one dimension, the single-particle wave function becomes:

$$\psi (x,t)=\psi (m\cdot \Delta x,n\cdot \Delta t)\equiv \psi _{m}^{n}$$ (7)

where \(m\) and \(n\) are integers and \(\Delta x\) and \(\Delta t\) are the lattice spacings in space and in time, respectively. Following the numerical integration method presented by Askar and Cakmak 52 , we evaluate the leading term in the difference between staggered time-steps:

$$\psi _{m}^{n+1}={e}^{-i\Delta t\hat{H}/\hslash }\,\psi _{m}^{n}\simeq \left(1-\frac{i\Delta t\hat{H}}{\hslash }\right)\psi _{m}^{n}$$ (8)

Consequently, we can write the relation between the time-steps \(\psi _{m}^{n+1}\) , \(\psi _{m}^{n}\) and \(\psi _{m}^{n-1}\) as:

$$\psi _{m}^{n+1}-\psi _{m}^{n-1}=\left({e}^{-i\Delta t\hat{H}/\hslash }-{e}^{i\Delta t\hat{H}/\hslash }\right)\psi _{m}^{n}\simeq -2\left(\frac{i\Delta t\hat{H}}{\hslash }\right)\psi _{m}^{n}$$ (9)

By splitting the wave function into its real and imaginary parts, \(\psi _{m}^{n}={u}_{m}^{n}+i{v}_{m}^{n}\) , where \(u\) and \(v\) are real functions, we can evaluate the entire wave function in the same time step. Using the Taylor expansion to estimate the second-order spatial derivative, \(\frac{\partial ^{2}\psi }{\partial x^{2}}\simeq \frac{\psi (x-\Delta x)-2\psi (x)+\psi (x+\Delta x)}{\Delta x^{2}}\) , the system of equations to solve becomes:

$${u}_{m}^{n+1}={u}_{m}^{n-1}+2\left(\frac{\hslash \Delta t}{m^{*}\Delta x^{2}}+\frac{\Delta t}{\hslash }{U}_{m}^{n}\right){v}_{m}^{n}-\frac{\hslash \Delta t}{m^{*}\Delta x^{2}}\left({v}_{m-1}^{n}+{v}_{m+1}^{n}\right)$$ (10a)

$${v}_{m}^{n+1}={v}_{m}^{n-1}-2\left(\frac{\hslash \Delta t}{m^{*}\Delta x^{2}}+\frac{\Delta t}{\hslash }{U}_{m}^{n}\right){u}_{m}^{n}+\frac{\hslash \Delta t}{m^{*}\Delta x^{2}}\left({u}_{m-1}^{n}+{u}_{m+1}^{n}\right)$$ (10b)

Note that \({m}^{* }\) in the prefactors denotes the effective mass, whereas the subscript \(m\) remains the lattice index. With this approach we do not need to obtain the eigenstates of the dynamic QD potential for each time step. Instead, we calculate the eigenbasis only at the beginning of the simulation to form the initial wave function by pure ground state occupation. Solving the system of Eqs. ( 10a, b ) for each successive time step, we then calculate the evolution of the wave function in the dynamic potential landscape that is given by the electrostatic potential defined by the surface gates and the potential modulation of the moving SAW train. We solve the time-dependent Schrödinger equation over the entire tunnel-coupled region using Dirichlet boundary conditions. The boundaries are sufficiently far away from the position of the wave function such that no reflections are observed.
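A one-dimensional sketch of the staggered real/imaginary update of Eqs. (10a) and (10b) is given below, assuming a bare sinusoidal SAW potential instead of the device's computed landscape. The 17 meV peak-to-peak amplitude and 1 µm wavelength follow the text, and the SAW velocity is taken as f·λ = 2.77 GHz × 1 µm ≈ 2770 m s−1; the grid, time step and initial Gaussian width are illustrative choices.

```python
# 1D leapfrog sketch of Eqs. (10a)-(10b) for an electron riding a
# moving SAW minimum; a norm check verifies the scheme's stability.
import numpy as np

HBAR = 1.054571817e-34
MSTAR = 0.067 * 9.1093837015e-31
MEV = 1.602176634e-22
LAM = 1e-6                                  # SAW wavelength, m
V_SAW = 2.77e9 * LAM                        # f * lambda = 2770 m/s
A = 17.0 * MEV                              # peak-to-peak modulation

N = 2048
x = np.linspace(0.0, 4e-6, N)
dx = x[1] - x[0]

def U(t):
    """Moving SAW potential (static gate landscape omitted here)."""
    return 0.5 * A * (1.0 - np.cos(2.0 * np.pi * (x - V_SAW * t) / LAM))

rate = HBAR / (MSTAR * dx * dx)             # hbar/(m* dx^2), units 1/s
dt = 0.02 * HBAR / (HBAR * rate + A)        # small vs hbar/||H||
c1 = rate * dt                              # hbar*dt/(m* dx^2), Eq. (10)

def H(psi, Upot):
    """Finite-difference Hamiltonian action, Dirichlet boundaries."""
    out = Upot * psi
    out[1:-1] += -HBAR**2 / (2 * MSTAR * dx * dx) * (
        psi[:-2] - 2 * psi[1:-1] + psi[2:])
    return out

# Gaussian start in the SAW minimum at x = lambda/2; bootstrap the
# leapfrog with one explicit Euler step.
psi0 = np.exp(-((x - 0.5e-6) ** 2) / (2 * (40e-9) ** 2)).astype(complex)
psi0 /= np.sqrt((np.abs(psi0) ** 2).sum() * dx)
psi1 = psi0 - 1j * dt / HBAR * H(psi0, U(0.0))

u_prev, v_prev = psi0.real.copy(), psi0.imag.copy()
u, v = psi1.real.copy(), psi1.imag.copy()

for n in range(1, 20000):
    c2 = dt / HBAR * U(n * dt)
    u_next, v_next = u_prev.copy(), v_prev.copy()
    u_next[1:-1] += 2 * (c1 + c2[1:-1]) * v[1:-1] - c1 * (v[:-2] + v[2:])   # (10a)
    v_next[1:-1] += -2 * (c1 + c2[1:-1]) * u[1:-1] + c1 * (u[:-2] + u[2:])  # (10b)
    u_prev, v_prev, u, v = u, v, u_next, v_next

print("norm after propagation:", (u**2 + v**2).sum() * dx)  # ~1 if stable
```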
To obtain the occupation of the eigenstates after a certain propagation time of the wave-packet, we calculate the eigenstates for the potential of the present time step and decompose the wave function in that basis. The method we use is shown to be convergent and accurate 45 . Data availability The data that support the findings of this study are available from the corresponding authors on reasonable request.
Researchers have successfully used sound waves to control quantum information in a single electron, a significant step towards efficient, robust quantum computers made from semiconductors. The international team, including researchers from the University of Cambridge, sent high-frequency sound waves across a modified semiconductor device to direct the behaviour of a single electron, with efficiencies in excess of 99 percent. The results are reported in the journal Nature Communications. A quantum computer would be able to solve previously unsolvable computational problems by taking advantage of the strange behaviour of particles at the subatomic scale, and quantum phenomena such as entanglement and superposition. However, precisely controlling the behaviour of quantum particles is a mammoth task. "What would make a quantum computer so powerful is its ability to scale exponentially," said co-author Hugo Lepage, a Ph.D. candidate in Cambridge's Cavendish Laboratory, who performed the theoretical work for the current study. "In a classical computer, to double the amount of information you have to double the number of bits. But in a quantum computer, you'd only need to add one more quantum bit, or qubit, to double the information." Last month, researchers from Google claimed to have reached 'quantum supremacy', the point at which a quantum computer can perform calculations beyond the capacity of the most powerful supercomputers. However, the quantum computers which Google, IBM and others are developing are based on superconducting loops, which are complex circuits and, like all quantum systems, are highly fragile. "The smallest fluctuation or deviation will corrupt the quantum information contained in the phases and currents of the loops," said Lepage. "This is still very new technology and expansion beyond the intermediate scale may require us to go down to the single particle level." Instead of superconducting loops, the quantum computer Lepage and his colleagues are devising uses the 'spin' of an electron—its inherent angular momentum, which can be up or down—to store quantum information. "Harnessing spin to power a functioning quantum computer is a more scalable approach than using superconductivity, and we believe that using spin could lead to a quantum computer which is far more robust, since spin interactions are set by the laws of nature," said Lepage. Using spin allows the quantum information to be more easily integrated with existing systems. The device developed in the current work is based on widely used semiconductors with some minor modifications. The device, which was tested experimentally by Lepage's co-authors from the Institut Néel, is just a few millionths of a metre long. The researchers laid metallic gates over a semiconductor and applied a voltage, which generated a complex electric field. The researchers then directed high-frequency sound waves over the device, causing it to vibrate and distort, like a tiny earthquake. As the sound waves propagate, they trap the electrons, pushing them through the device in a very precise way, as if the electrons are 'surfing' on the sound waves. The researchers were able to control the behaviour of a single electron with 99.5 percent efficiency.
"To control a single electron in this way is already difficult, but to get to a point where we can have a working quantum computer, we need to be able to control multiple electrons, which get exponentially more difficult as the qubits start to interact with each other," said Lepage. In the coming months, the researchers will begin testing the device with multiple electrons, which would bring a working quantum computer another step closer.
10.1038/s41467-019-12514-w
Medicine
Higher vitamin K intake linked to lower bone fracture risk late in life
Marc Sim et al, Dietary Vitamin K1 intake is associated with lower long-term fracture-related hospitalization risk: the Perth longitudinal study of ageing women, Food & Function (2022). DOI: 10.1039/D2FO02494B
https://dx.doi.org/10.1039/D2FO02494B
https://medicalxpress.com/news/2022-11-higher-vitamin-intake-linked-bone.html
Abstract This study examined the association between dietary Vitamin K1 intake and fracture-related hospitalizations over 14.5 years in community-dwelling older Australian women (n = 1373, ≥70 years). Dietary Vitamin K1 intake at baseline (1998) was estimated using a validated food frequency questionnaire and a new Australian Vitamin K nutrient database, which was supplemented with published data. Over 14.5 years, any fracture- (n = 404, 28.3%) and hip fracture-related (n = 153, 10.7%) hospitalizations were captured using linked health data. Plasma Vitamin D status (25OHD) and the ratio of undercarboxylated osteocalcin (ucOC) to total osteocalcin (tOC) from serum were assessed at baseline. Estimates of dietary Vitamin K1 intake were supported by a significant inverse association with ucOC : tOC, a marker of Vitamin K status (r = −0.12, p < 0.001). Compared to women with the lowest Vitamin K1 intake (Quartile 1, <61 μg d−1), women with the highest Vitamin K1 intake (Quartile 4, ≥99 μg d−1) had lower hazards for any fracture- (HR 0.69, 95%CI 0.52–0.91, p < 0.001) and hip fracture-related hospitalization (HR 0.51, 95%CI 0.32–0.79, p < 0.001), independent of 25OHD levels, as part of multivariable-adjusted analysis. Spline analysis suggested a nadir in the relative hazard for any fracture-related hospitalization at a Vitamin K1 intake of approximately 100 μg d−1. For hip fractures, a similar relationship was apparent. Higher dietary Vitamin K1 is associated with lower long-term risk for any fracture- and hip fracture-related hospitalizations in community-dwelling older women. Introduction Osteoporotic fractures, especially hip fractures, often result in longstanding disability and compromised independence, together with increased mortality risk. 1 Although Vitamin K is best known for its role in blood coagulation, epidemiological investigations and clinical trials suggest that its intake is important for skeletal health. 2–6 Basic studies of Vitamin K have supported the clinical data by identifying a critical role in the γ-carboxylation of the Vitamin K-dependent bone proteins, including osteocalcin (OC). 7 Specifically, OC is produced by osteoblasts and is believed to improve bone toughness, 7 which is essential for preventing fractures. The two major forms of OC are carboxylated OC (cOC) and undercarboxylated OC (ucOC), with the former linked to bone integrity. 8 The ratio of ucOC to total OC (tOC) is known to be inversely associated with dietary Vitamin K intake. 9 Cell biology studies have also identified other actions of Vitamin K on bone separate from γ-carboxylation. 10 The two main forms of Vitamin K in the diet are Vitamin K1 (phylloquinone; PK) and Vitamin K2 (menaquinones; MKs), the latter including different isoforms such as MK4 to MK13. Green leafy vegetables and their oils are rich sources of Vitamin K1, while dietary Vitamin K2 is obtained from animal products including meats, eggs, and cheeses. 11 In the diet, about 90% of total Vitamin K intake is estimated to come from PK. 12 From a public health perspective, given the well-established health benefits of higher vegetable intake, this is an important consideration when promoting dietary guidelines specific to Vitamin K. Nevertheless, the 2021 update on Vitamin K nutrition by the National Institutes of Health highlights uncertainty regarding the importance of Vitamin K for fracture prevention. 13
Nutrient Reference Values for total Vitamin K intake have been set at 70 and 60 μg d−1 for Australian men and women, respectively, based on median intakes of the Australian population. 14 Such guidelines are comparable to those in Europe 15 but slightly lower than those in the USA. 16 However, the aforementioned intakes, especially in Australia, may be insufficient to support optimal bone metabolism. 9 Our previous clinical trial found that increasing daily intake of Vitamin K1-rich vegetables over four weeks significantly reduced tOC, suggesting improved bone metabolism in healthy adults. 9 This implies that Vitamin K1 could play an important role in bone health and fracture prevention. As such, the aim of this study was to investigate the relationship between dietary Vitamin K1 intake, estimated using a newly developed Australian Vitamin K food database, and long-term fracture risk in community-dwelling older Australian women. We also sought to determine whether there are dose-dependent thresholds at which dietary Vitamin K1 intake is associated with lower fracture risk. Materials and methods Participants A 5-year, double-blind, randomised controlled trial of daily calcium supplementation to prevent fracture in women (Calcium Intake Fracture Outcome Study, CAIFOS) commenced in 1998. Women (n = 1500, aged ≥70 years) with (i) an expected survival beyond 5 years and (ii) not receiving any medication (including hormone replacement therapy) known to affect bone metabolism 17 were recruited using the electoral roll. After CAIFOS, participants were invited to be part of two 5-year follow-up observational studies, leading to a total follow-up of 14.5 years: the Perth Longitudinal Study of Aging in Women (PLSAW). In total, 1485 women completed a food frequency questionnaire at baseline; those with implausible energy intakes (<2100 kJ [500 kcal] or >14,700 kJ [3500 kcal]; n = 17/1485, 1.1%) or taking Vitamin D supplements (due to its link with fracture, 18 n = 39/1485) were excluded, as sketched below. A further 45 women taking warfarin were excluded because warfarin interferes with Vitamin K metabolism. 19 Women missing covariates were also excluded (n = 11). The current study included 1373 women (ESI Fig. 1†) for the primary analysis. The 1271 women who had a measurement of total plasma 25OHD at baseline were included in analyses where 25OHD was a covariate. Written informed consent was obtained from all women. The Human Ethics Committee of the University of Western Australia provided ethical approval. Both CAIFOS and PLSAW complied with the Declaration of Helsinki and were retrospectively registered on the Australian New Zealand Clinical Trials Registry (#ACTRN12615000750583 and #ACTRN12617000640303). Linked data ethics approval was provided by the Human Research Ethics Committee of the Western Australian Department of Health (#2009/24). The STROBE guidelines for observational studies were adhered to for this work.
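The exclusion cascade above can be expressed as a short filtering step. A minimal sketch assuming a hypothetical pandas DataFrame with one row per FFQ completer; all file and column names are illustrative, not the study's:

import pandas as pd

df = pd.read_csv("caifos_baseline.csv")   # hypothetical file with the 1485 FFQ completers

# Implausible energy intakes: <2100 kJ (500 kcal) or >14,700 kJ (3500 kcal) per day.
plausible = df["energy_kj"].between(2100, 14700)

# Drop Vitamin D supplement users and warfarin users, then rows missing covariates.
keep = plausible & ~df["vitd_supplement"] & ~df["warfarin"]
covariates = ["age", "bmi", "calcium_mg", "alcohol_g", "activity_kj"]
cohort = df.loc[keep].dropna(subset=covariates)

print(len(df), "->", len(cohort))   # with the real data: 1485 -> 1373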
Dietary intake A self-administered, semiquantitative food frequency questionnaire (FFQ) developed and validated by the Cancer Council of Victoria was used to determine dietary intake at baseline in 1998. 20,21 The FFQ was designed to capture diet over a year, with such timeframes used to represent the 'usual' diet. Measuring spoons and cups, as well as food models and charts, were provided to participants by a research assistant, who also supported the women while they completed the FFQ; this was done to enhance the accuracy of reported food consumption. As per FFQ guidelines, energy (kJ d−1) and nutrient intakes, including calcium (mg d−1) and alcohol (g d−1), were calculated using the NUTTAB95 food composition database; where necessary, other sources were considered. 22 For each food item, we obtained the PK values of commonly consumed foods from an Australian food database specific to Vitamin K1. 23 Of the 101 foods and beverages (including alcohol) obtained from the FFQ, the Australian Vitamin K nutrient database covered 56 food items known to contain PK. For this database, the main food groups included vegetables (n = 20), fruits (n = 3), animal products (n = 16), dairy (n = 14) and fermented foods (n = 3). Where the PK content of an FFQ item was not quantified by this Australian Vitamin K nutrient database (n = 45), values were obtained from the United States Department of Agriculture (USDA) Food and Nutrient Database for Dietary Studies 2017–18. 24 Upon reasonable request to the corresponding author, data on the values used and assumptions made can be provided. Fracture-related hospitalizations Over 14.5 years, all fracture-related hospitalization outcomes were identified from the Western Australian Data Linkage System (Department of Health Western Australia, East Perth, Australia) and retrieved from the Western Australia Hospital Morbidity Data Collection (HMDC). All participants' records between 1998 and 2013 were extracted using the International Classification of External Causes of Injury codes and the International Classification of Diseases (ICD) coded diagnosis data pertaining to all inpatient admissions (public and private) in Western Australia. Ascertainment of hospitalizations avoids the problems of patient self-reporting and loss to follow-up. Fracture identification codes included S02–S92, M80, T02, T08, T10, T12, and T14.2. Codes pertaining to fractures of the face (S02.2–S02.6), fingers (S62.5–S62.7), and toes (S92.4–S92.5), as well as fractures caused by motor vehicle injuries (external cause of injury codes V00–V99), were excluded.
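A minimal sketch of the inclusion/exclusion logic for the fracture codes above, assuming hypothetical linked-data records with one ICD code per admission (the file and column names are illustrative; real HMDC extracts are more complex):

import pandas as pd

adm = pd.read_csv("hmdc_records.csv")   # hypothetical: id, icd_code, ext_cause_code

def is_fracture(code: str) -> bool:
    # Inclusions: S02-S92, M80, T02, T08, T10, T12, T14.2.
    if code.startswith(("M80", "T02", "T08", "T10", "T12", "T14.2")):
        return True
    return code.startswith("S") and "S02" <= code[:3] <= "S92"

def is_excluded(row) -> bool:
    # Exclusions: face (S02.2-S02.6), fingers (S62.5-S62.7), toes (S92.4-S92.5),
    # and motor vehicle injuries (external cause codes V00-V99).
    c = row["icd_code"]
    for prefix, lo, hi in [("S02.", 2, 6), ("S62.", 5, 7), ("S92.", 4, 5)]:
        if c.startswith(prefix) and lo <= int(c[4]) <= hi:
            return True
    return str(row["ext_cause_code"]).startswith("V")

fractures = adm[adm["icd_code"].map(is_fracture) & ~adm.apply(is_excluded, axis=1)]
print(fractures["id"].nunique(), "women with a fracture-related hospitalization")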
Baseline characteristics Questionnaires pertaining to smoking history and physical activity were completed at baseline. Specifically, participation in sport, recreation, and/or regular physical activities undertaken in the three months prior to the baseline visit was considered. 25 Physical activity (kJ d−1) was then calculated by considering activity type, time undertaken and body weight. 25 An individual was considered an ex-smoker/current smoker if they had consumed >1 cigarette per day for more than 3 months at any time in their life. To calculate body mass index (BMI, kg m−2), digital scales were used to assess body weight, while height was obtained using a stadiometer. Treatment with either placebo or calcium during CAIFOS was included as a covariate. Self-reported prevalent osteoporotic fractures at baseline were included if the fracture occurred after 50 years of age, was associated with minimal trauma (falling from standing height or less), and was not a fracture of the face, skull, or phalanges. Biochemistry In the morning after an overnight fast (0830 to 1030 h), baseline venous blood samples (plasma and serum) were obtained and subsequently stored at −80 °C. A validated LC-MS/MS (Liquid Chromatography Tandem Mass Spectrometry) method adopted at the RDDT Laboratories (Bundoora, VIC, Australia) was used to measure plasma 25-hydroxyVitamin D2 (25OHD2) and D3 (25OHD3). 26 Values were summed to obtain the total plasma 25OHD concentration for each individual (n = 1271). Coefficients of variation (CVs) were 10.1% at a 25OHD2 mean concentration of 12 nmol L−1 and 11.3% at a 25OHD3 mean concentration of 60 nmol L−1. One nmol L−1 of 25OHD is equivalent to 0.4 ng mL−1. For descriptive purposes, the season in which the blood sample was obtained (Summer [December to February], Autumn [March to May], Winter [June to August] and Spring [September to November]) was subsequently combined into two groups, Summer/Autumn vs. Winter/Spring. A sandwich electrochemiluminescence immunoassay using the Roche Cobas N-Mid Osteocalcin assay (Roche Diagnostics, Mannheim) was used to determine serum tOC (n = 1188). The inter-assay CVs were 3.5% and 6.3% at levels of 18 and 90 ng mL−1, respectively. Adopting the method of Gundberg et al. 27 and Chubb et al., 28 serum ucOC was measured by the same reagent assay after pre-treatment of the serum samples with 5 mg mL−1 of hydroxyapatite (Calbiochem). Average inter-assay imprecision for percentage binding of cOC to OC was 8.6% at 20 ng mL−1 and 5.6% at 100 ng mL−1, respectively. Statistical analysis For statistical analysis, a combination of Stata (version 14, StataCorp LLC, College Station, Texas, USA), IBM SPSS (version 25.0, IBM Corp., Armonk, NY, USA) and R (version 3.4.2, R Foundation for Statistical Computing, Vienna, Austria) 29 was used. To explore the relationship between dietary Vitamin K1 intake and a biomarker of Vitamin K status (ucOC : tOC), Spearman's correlation and generalized linear models were adopted. Vitamin K1 intake was modelled using restricted cubic splines to investigate the potential nonlinearity of this relationship. P-values for the overall effect of the exposure on the outcomes (false discovery rate corrected) and for a test of non-linearity were obtained using likelihood ratio tests comparing appropriate nested models. Associations are presented graphically using the 'effects' R package. 30 To determine where significant differences between quartiles (Q) of Vitamin K1 intake exist, ratios of means and 95% CIs were obtained from the model. For this analysis, the exposure was fitted as a continuous variable. Results are reported for the median Vitamin K1 intake in each Q relative to the median value in Q4. Cox proportional hazards models were used to investigate the relationship between Vitamin K1 and the outcomes of (i) any fracture-related hospitalization and (ii) hip fracture-related hospitalization. Schoenfeld residuals indicated that proportional hazards assumptions were not violated. Hazard ratios (HRs) and 95% CIs were obtained from the model with Vitamin K1 fitted as a continuous variable through a restricted cubic spline using the 'rms' R package. 31 HR estimates were graphed and calculated relative to a reference value, the median Vitamin K1 intake of Q1, and plotted against fracture outcomes, with 95% confidence bands provided. Wald tests were used to obtain p-values for HRs. For visual simplicity only, the x-axis was truncated at 3 SD above the mean. Three models of adjustment were used for all survival analyses: (i) Model 1: age, treatment code (placebo/calcium) and BMI; (ii) Model 2: Model 1 plus physical activity (kcal d−1), smoking history (yes/no), calcium intake (mg d−1), alcohol intake (g d−1) and prevalent osteoporotic fracture (yes/no); and (iii) Model 3: Model 2 plus 25OHD and season.
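A minimal Python sketch of this survival model (the authors fitted it in R with the 'rms' package; the lifelines package plus a patsy spline basis is a rough equivalent). The data file and column names are hypothetical:

import pandas as pd
from lifelines import CoxPHFitter
from patsy import dmatrix

df = pd.read_csv("cohort.csv")   # hypothetical: vitk1, time, fracture, plus covariates

# Restricted (natural) cubic spline basis for Vitamin K1 intake.
spline = dmatrix("cr(vitk1, df=3) - 1", df, return_type="dataframe")
spline.columns = [f"vitk1_s{i}" for i in range(spline.shape[1])]

covars = ["age", "treatment", "bmi", "smoker", "activity_kj", "calcium_mg",
          "alcohol_g", "prevalent_fx"]
model_df = pd.concat([df[["time", "fracture"] + covars], spline], axis=1)

cph = CoxPHFitter()
cph.fit(model_df, duration_col="time", event_col="fracture")
cph.print_summary()

# Rough analogue of the Schoenfeld-residual check of proportional hazards.
cph.check_assumptions(model_df, p_value_threshold=0.05)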
Additional analysis As the Vitamin K content of food can vary according to region (see ref. 19 for a review), we undertook an analysis in which we examined the intraclass correlation coefficient between dietary Vitamin K1 intake calculated in the current study (primarily using the Australian database) and that calculated using other international databases such as the USDA database for Vitamin K1 (a minimal sketch of this agreement check is given after Table 1). As prevalent chronic disease can influence fracture risk, survival analyses (using Model 3) were re-run with additional adjustments for women with prevalent diabetes (n = 80) as well as atherosclerotic vascular disease (ASVD, n = 149). Prevalent ASVD was determined using primary discharge diagnoses from hospital records over the previous 18 years (1980–1998). These included ischemic heart disease and failure, cerebrovascular disease (excluding haemorrhage) and peripheral arterial disease. Prevalent diabetes mellitus was determined based on medication use at baseline. We also assessed whether diet quality, assessed via the Nutrient Rich Foods Index standardized per 1000 kJ of energy intake (NRFI, described previously 32), influenced the association between Vitamin K1 intake and fracture outcomes by adjusting for it in addition to the Model 3 covariates. Finally, we also assessed the multivariable-adjusted relationship between ucOC : tOC and fracture outcomes. Results The median (IQR) intake for Vitamin K1 was 78.7 (61.2–99.2) μg d−1. Baseline characteristics for the 1373 women by quartiles of Vitamin K1 intake are presented in Table 1. Mean (±SD) age was 75.1 ± 2.7 years, while BMI and 25OHD were 27.2 ± 4.7 kg m−2 and 66.9 ± 28.6 nmol L−1, respectively. Compared to women with the lowest Vitamin K1 intake (Q1), those in Q4 had higher calcium intake, physical activity levels and 25OHD levels.

Table 1. Baseline characteristics in all participants and by quartiles (Q) of Vitamin K1 intake (a)

Characteristic | All participants | Q1 (<61.2 μg d−1) | Q2 (61.2 to <78.7 μg d−1) | Q3 (78.7 to <99.2 μg d−1) | Q4 (≥99.2 μg d−1)
Number | 1373 | 344 | 341 | 347 | 341
Age, years | 75.1 ± 2.7 | 75.3 ± 2.8 | 74.9 ± 2.7 | 75.2 ± 2.7 | 75.1 ± 2.6
Treatment group (calcium), n (%) | 680 (50.2) | 166 (48.3) | 163 (47.8) | 179 (51.6) | 181 (53.1)
Body mass index (BMI), kg m−2 | 27.2 ± 4.7 | 27.2 ± 4.9 | 27.0 ± 4.5 | 27.4 ± 4.9 | 27.1 ± 4.6
Smoked ever, yes n (%) | 515 (37.5) | 141 (41.0) | 130 (38.1) | 125 (36.0) | 119 (34.9)
Physical activity, kcal day−1 | 474 (151–860) | 405 (0–854) | 485 (181–844) | 424 (173–843) | 534 (220–907)
Calcium intake, mg d−1 | 953 ± 347 | 823 ± 305 | 910 ± 312 | 993 ± 354 | 1088 ± 358
Alcohol intake, g d−1 | 1.9 (0.3–9.9) | 1.8 (0.3–10.0) | 2.1 (0.3–10.3) | 1.8 (0.3–9.5) | 1.7 (0.3–9.6)
Prevalent fracture, yes n (%) | 376 (27.4) | 92 (26.7) | 109 (32.0) | 90 (25.9) | 85 (24.9)
25OHD (b), nmol L−1 | 66.9 ± 28.6 | 60.8 ± 27.3 | 68.5 ± 31.5 | 70.1 ± 27.7 | 68.0 ± 27.1
Blood sample in Summer/Autumn (b), n (%) | 311 (22.7) | 82 (26.2) | 71 (22.5) | 80 (24.3) | 78 (24.9)
Blood sample in Winter/Spring (b), n (%) | 960 (69.9) | 231 (73.8) | 245 (77.5) | 249 (75.7) | 235 (75.1)
Prevalent diabetes, n (%) | 84 (6.1) | 19 (5.5) | 24 (7.0) | 20 (5.8) | 21 (6.2)
Prevalent ASVD, n (%) | 156 (11.4) | 36 (10.5) | 41 (12.1) | 41 (11.8) | 38 (11.2)

(a) Data presented as mean ± SD, median (interquartile range; for non-normally distributed variables) or number n and (%). (b) n = 1271. Median Vitamin K1 intake for Q1, Q2, Q3 and Q4 was 49.3, 70.1, 87.6 and 119.5 μg d−1, respectively. ASVD, atherosclerotic vascular disease.
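The database-agreement check described under 'Additional analysis' is an intraclass correlation between two intake estimates per woman. A minimal sketch using the pingouin package on stand-in numbers (the values are illustrative, not study data):

import pandas as pd
import pingouin as pg

# Hypothetical long-format data: each participant's Vitamin K1 intake (μg/day)
# computed twice, once per nutrient database.
long = pd.DataFrame({
    "id":       list(range(5)) * 2,
    "database": ["AUS"] * 5 + ["USDA"] * 5,
    "intake":   [60, 85, 72, 110, 95, 71, 99, 80, 130, 108],
})

icc = pg.intraclass_corr(data=long, targets="id", raters="database",
                         ratings="intake")
print(icc[["Type", "ICC", "CI95%"]])   # the study reported ICC 0.83 (95%CI 0.41-0.93)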
Vitamin K and the fraction of undercarboxylated osteocalcin to total osteocalcin (ucOC : tOC) Vitamin K1 was inversely correlated with ucOC : tOC (rho = −0.12, p < 0.001). A graphic representation of the relationship between Vitamin K1 (p < 0.001) and ucOC : tOC is presented in Fig. 1, with estimated means and 95%CI presented in ESI Table 1.† An inverse linear relationship was observed between Vitamin K1 and ucOC : tOC (p for non-linearity = 0.337). Specifically, compared to Q1, women in Q4 had 6.1% lower ucOC : tOC. Further analysis indicated that Vitamin K1 was positively associated with cOC (p = 0.030), but not tOC (p = 0.454) or ucOC (p = 0.217) (ESI Fig. 2†). Fig. 1 Multivariable-adjusted relationship between Vitamin K1 and the fraction of undercarboxylated osteocalcin to total osteocalcin (ucOC : tOC) obtained by generalized regression models in 1188 women. The 95% confidence intervals are represented by the shading. Model adjusted for age, treatment (calcium/placebo), body mass index, smoking history, physical activity, calcium and alcohol intake. The rug plot along the x-axis depicts each observation. Any fracture-related hospitalization Over 14.5 years (15,514 person-years) of follow-up (mean ± SD; 11.3 ± 4.1 years), 28.0% (384/1373) of women experienced a fracture-related hospitalization. The proportion of women who experienced a fracture-related hospitalization was between 7.0% and 10.1% higher in women with the lowest Vitamin K1 intake (Q1) compared to all other quartiles. The non-linear multivariable-adjusted relationship (Model 3) between Vitamin K1 and any fracture-related hospitalization is presented in Fig. 2a (p for non-linearity = 0.010). Compared to Q1, women with higher Vitamin K1 intakes, i.e. those in Q2, Q3 and Q4, had a 26%, 31% and 31% lower relative hazard for a fracture-related hospitalization, respectively (Model 2, Table 2). The inverse association between Vitamin K1 intake and any fracture-related hospitalization plateaued at intakes around 100 μg d−1, after which point estimates remained stable. Results remained similar with and without adjustment for 25OHD (Model 2 vs. Model 3). Fig. 2 Hazard ratios from Cox proportional hazards models with restricted cubic spline curves describing the association between Vitamin K1 and (A) any fracture-related hospitalization and (B) hip fracture-related hospitalizations over 14.5 years. Model adjusted for age, treatment, BMI, smoking history, physical activity, calcium, alcohol intake, prevalent osteoporotic fracture, plasma 25OHD and season (Model 3). The hazard ratio compares the specific intake of Vitamin K1 (horizontal axis) to the median intake in the lowest quartile (49 μg d−1). The 95% confidence intervals are represented by the shading, with the rug plot along the x-axis depicting each observation.

Table 2. Hazard ratios (95%CI) for any fracture and hip fracture-related hospitalizations over 14.5 years by quartiles of Vitamin K1 intake (a)

Outcome | Q1 (<61.2 μg d−1) | Q2 (61.2 to <78.7 μg d−1) | Q3 (78.7 to <99.2 μg d−1) | Q4 (≥99.2 μg d−1)
Any fracture: events, n (%) | 119 (34.6) | 94 (27.6) | 85 (24.5) | 86 (25.2)
Any fracture: Model 1 | Ref. | 0.74 (0.64–0.87)* | 0.67 (0.55–0.82)* | 0.65 (0.50–0.85)*
Any fracture: Model 2 | Ref. | 0.74 (0.63–0.87)* | 0.69 (0.56–0.84)* | 0.69 (0.52–0.91)*
Any fracture: Model 3 | Ref. | 0.75 (0.64–0.89)* | 0.68 (0.55–0.85)* | 0.69 (0.52–0.92)*
Hip fracture: events, n (%) | 47 (13.7) | 44 (12.9) | 30 (8.6) | 27 (7.9)
Hip fracture: Model 1 | Ref. | 0.74 (0.58–0.95)* | 0.62 (0.44–0.87)* | 0.51 (0.33–0.79)*
Hip fracture: Model 2 | Ref. | 0.74 (0.58–0.95)* | 0.62 (0.44–0.88)* | 0.51 (0.33–0.79)*
Hip fracture: Model 3 | Ref. | 0.76 (0.58–0.99)* | 0.62 (0.43–0.90)* | 0.51 (0.32–0.83)*

(a) Estimated hazards and 95%CI from Cox proportional hazards analysis comparing the median Vitamin K1 intake of each quartile (Q) to Q1. Median intake for Q1, Q2, Q3 and Q4 was 49.3, 70.1, 87.6 and 119.5 μg d−1, respectively. Model 1: adjusted for age, treatment and body mass index. Model 2: Model 1 + smoking history, physical activity, calcium, alcohol intake and prevalent osteoporotic fracture. Model 3: Model 2 + 25OHD and season (Winter/Spring, Summer/Autumn). * p < 0.05 compared to Q1.

Hip fracture-related hospitalization Over 14.5 years (16,752 person-years) of follow-up (mean ± SD; 12.2 ± 3.5 years), 10.8% (148/1373) of women experienced a hip fracture-related hospitalization. The proportion of women who experienced a hip fracture-related hospitalization was between 0.8% and 5.8% higher in women with the lowest Vitamin K1 intake (Q1) compared to all other quartiles (Table 2). The multivariable-adjusted relationship (Model 3) between Vitamin K1 and hip fracture-related hospitalization is presented in Fig. 2b. This relationship appeared similar to that for any fracture, but with a more linear trend (p for non-linearity = 0.592). Compared to Q1, women with higher Vitamin K1 intake in Q2, Q3 and Q4 had a 26%, 38% and 49% lower relative hazard for a hip fracture-related hospitalization, respectively (Model 2, Table 2). Results remained similar with and without adjustment for 25OHD (Model 2 vs. Model 3). Additional analysis A high level of agreement was observed between the Vitamin K1 intake calculated primarily using the Australian database and that calculated using the USDA database (ICC 0.83, 95%CI 0.41–0.93; mean intake 83 ± 31 vs. 98 ± 43 μg d−1). Measured Vitamin K1 tended to be lower when primarily adopting the Australian database (ESI Fig. 3†). The addition of prevalent ASVD and diabetes status to Model 3 did not alter the relationship between Vitamin K1 and any fracture- (p = 0.002, p for non-linearity = 0.007) or hip fracture-related hospitalization (p = 0.002, p for non-linearity = 0.550) (ESI Fig. 4†). Specifically, compared to Q1, women in Q2, Q3 and Q4 had lower relative hazards for any fracture- (between 25% and 32%) or a hip fracture-related hospitalization (between 24% and 49%) (ESI Table 2†). The addition of the NRFI to Model 3 did not alter the relationship between Vitamin K1 intake and any fracture- (p = 0.002, p for non-linearity = 0.009) or hip fracture-related hospitalization (p = 0.013, p for non-linearity = 0.616) (ESI Fig. 5†). Women in Q2, Q3 and Q4 had lower relative hazards for any fracture- (between 25% and 35%) or a hip fracture-related hospitalization (between 25% and 53%) compared to women in Q1 (ESI Table 3†). When considering any fracture-related hospitalization, a Vitamin K1 intake of >100 μg d−1 did not appear to confer additional benefits (Fig. 2); this corresponded to a ucOC : tOC of ∼0.47 (Fig. 1). As such, we adopted this ucOC : tOC cut-point to assess the relationship between ucOC : tOC status and fractures. Compared to women with better Vitamin K status (ucOC : tOC < 0.47), those with poorer Vitamin K status (ucOC : tOC ≥0.47) had greater relative hazards for any fracture- (HR 1.31, 95%CI 1.01–1.66, p = 0.021) but not hip fracture-related hospitalizations (HR 1.16, 95%CI 0.81–1.67, p = 0.427) in the multivariable-adjusted analysis (Model 3).
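The mapping from the ∼100 μg d−1 intake threshold to the ucOC : tOC cut-point of ∼0.47 can be illustrated by predicting the biomarker over a grid of intakes from the adjusted spline model. A minimal sketch using ordinary least squares as a simple stand-in for the paper's generalized linear model; all names are hypothetical:

import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("cohort.csv")   # hypothetical columns, as in the earlier sketches

# Natural cubic spline for Vitamin K1 via patsy's cr() inside the formula,
# with the Fig. 1 covariates; 'treatment' and 'smoker' assumed coded 0/1.
fit = smf.ols("ucoc_toc ~ cr(vitk1, df=3) + age + treatment + bmi + smoker"
              " + activity_kj + calcium_mg + alcohol_g", data=df).fit()

# Predict ucOC:tOC across an intake grid, holding covariates at their means,
# to read off the value corresponding to an intake of ~100 μg/day.
grid = pd.DataFrame({"vitk1": range(40, 161, 10)})
for c in ["age", "treatment", "bmi", "smoker", "activity_kj", "calcium_mg",
          "alcohol_g"]:
    grid[c] = df[c].mean()
print(grid.assign(pred=fit.predict(grid)))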
Discussion Our results demonstrate that in community-dwelling older women, higher dietary Vitamin K1 intake was associated with lower long-term fracture risk, independent of many established risk factors for fracture, including BMI, calcium intake, Vitamin D status and prevalent disease. Specifically, a Vitamin K1 intake of ≥99 μg d−1 was associated with a 31% and 49% lower risk for any fracture- and hip fracture-related hospitalization, respectively. The association of Vitamin K1 with any fracture was non-linear, with a nadir in the relative hazard at an intake of approximately 100 μg d−1. For hip fractures, the association, although similar, appeared to follow a more linear trend. Previously, we demonstrated that increasing Vitamin K1 intake through green leafy vegetables (200 g d−1, providing 160 μg d−1 of Vitamin K1) over a 4-week period facilitates maximal carboxylation of OC. 9 The current data examining dietary Vitamin K1 intake and fracture risk are further supported by our analysis of the significant inverse relationship between Vitamin K1 intake, estimated using a recent analysis of Australian foods, 23 and the ratio of ucOC to tOC. Furthermore, when considering the ucOC : tOC cut-point of 0.47, which corresponded to a Vitamin K1 intake of ∼100 μg d−1, women with poorer Vitamin K status (ucOC : tOC ≥0.47) recorded 31% higher hazards for any fracture. Incidentally, others have also suggested that dietary Vitamin K intakes of <100 μg d−1 may be too low for the carboxylation of Vitamin K-dependent proteins. 33 Greater Vitamin K intake may also promote bone health by inhibiting various bone-resorbing agents, including prostaglandins 34 and interleukin-6. 35 However, it is essential to acknowledge uncertainty as to whether pharmaceutical Vitamin K, in any of its forms (e.g. K1 or K2), has effects similar to dietary Vitamin K, especially given the potential influence of other nutrients within the food matrix. This is especially relevant when interpreting data such as the meta-analysis by Mott and colleagues of 19 RCTs (n = 6759 participants) that examined bone outcomes (e.g. bone mineral density and fractures). Most included studies adopted large doses of Vitamin K2 in the form of pharmaceutical MK4 (1.5 mg up to 45 mg) or MK7 (up to 375 μg) 5,6 and reported no benefit of Vitamin K supplements for bone mineral density or vertebral fractures. Evidence for a reduction in clinical fractures in post-menopausal or osteoporotic patients was reported; however, such outcomes should be interpreted with caution due to concerns regarding the integrity of Vitamin K supplementation trials in this area from Japan. 36 Furthermore, when considering Vitamin K metabolism, recent evidence suggests that dietary Vitamin K1 and K2 (MK4, MK7, and MK9) all serve as precursors to tissue MK4 in mice. 37 At least in mouse models, the form of dietary Vitamin K (e.g. PK or MK) may therefore be less relevant, warranting investigation in humans. When considering dietary Vitamin K1 intake, our findings are comparable to previous work in 2807 older Norwegian men and women (aged 71–75 years) from the Hordaland Health Study. 38 Here, over ∼10 years of follow-up, individuals with the lowest Vitamin K1 intakes (Q1, <53 μg d−1) had greater risk (HR 1.57, 95%CI 1.09–2.26) of a hip fracture compared to individuals with the highest intake (Q4, >109 μg d−1). 38 Notably, dietary Vitamin K1 intake across quartiles was comparable to the current study.
Similar results have been reported in 335 men and 553 women (mean age ∼75 years) from the Framingham Heart Study. 39 In this study, individuals in the highest quartile of Vitamin K1 intake (∼254 μg d−1) had a lower relative risk (RR 0.35; 95% CI: 0.13, 0.94) of a hip fracture than those with the lowest intake (∼56 μg d−1). In the largest study to date, of 72,327 women from the Nurses' Health Study (aged 38–63 years), up to 30% lower relative risk of a hip fracture (RR: 0.70; 95% CI: 0.53, 0.93) was recorded when Vitamin K1 intakes were greater than 109 μg d−1. These associations remained unchanged when considering calcium and Vitamin D intake. 3 Despite these positive findings, in 2944 Chinese individuals (over 65 years, 45.5% female), no relationship was observed between Vitamin K1 intake and non-vertebral or hip fractures over 6.9 years. 40 Perhaps the low proportion of individuals who experienced a non-vertebral fracture and/or hip fracture (6.3% and 1.6%, respectively), in combination with the high median intake of Vitamin K1 in this cohort (∼240 μg d−1, compared to 70 μg d−1 in the present study), contributed to these results. Nevertheless, a meta-analysis of 80,982 individuals reports an inverse relationship between dietary Vitamin K1 intake and risk of fractures (highest vs. lowest intake, RR 0.78, 95%CI 0.56–0.99; I2 = 59.2%, p for heterogeneity = 0.04). 4 Alternatively, current evidence suggests that pharmaceutical Vitamin K1 has minimal effect on bone mineral density. For example, in a 3-year double-blind, placebo-controlled trial in which individuals (n = 452, 60–80 years) were randomized equally to receive a multivitamin containing either 500 μg d−1 of Vitamin K1 (a dose attainable in the diet) or none, plus a daily calcium (600 mg elemental calcium) and Vitamin D (400 IU) supplement, Vitamin K1 did not affect femoral neck, spine (L2–L4), or total-body BMD. 41 To this end, further investigation is required into other potential mechanisms by which dietary Vitamin K1 may confer the benefits on fracture risk observed here. Our study has limitations that must be acknowledged. Specifically, due to its observational nature, causality cannot be established. Furthermore, the findings may generalise only to older community-dwelling Caucasian women and not to other groups such as older men. Nevertheless, to minimise residual confounding we considered a range of dietary factors implicated in bone health, such as calcium and alcohol intake. A higher Vitamin K1 intake may also represent a healthier diet, especially since Vitamin K1 is found in higher concentrations in vegetables. However, we evaluated the potential for such confounding by adjusting for the NRFI and report similar results. Finally, dietary information was self-reported, which could have led to misclassification of these variables; nevertheless, a validated and reproducible method of dietary intake assessment was adopted. Strengths of this study include the prospective design and population-based setting with long-term (14.5 years) ascertainment of verified fracture-related hospitalizations in a population predisposed to fracture: older women. We also demonstrated a significant cross-sectional inverse association between estimated dietary Vitamin K1 intake, based on our recently published laboratory evaluation of the Vitamin K content of Australian foods, and a biomarker of Vitamin K status, ucOC : tOC. We also considered a range of potential confounders (e.g.
prevalent disease, physical activity levels and circulating 25OHD levels) for the relationship between Vitamin K and fractures in this cohort. When considering the societal implications of our results, current dietary recommendations in Australia suggest Adequate Intakes of 70 and 60 μg d−1 for males and females of all ages, respectively. 14 However, regarding bone health, such intakes may be inadequate, being substantially lower (by 23% to 50%) than the Adequate Intakes of 120 and 90 μg d−1 promoted for older males and females (≥51 years) in the United States. 16 In conclusion, we demonstrate that higher dietary Vitamin K1 is associated with lower long-term risk for any fracture- and hip fracture-related hospitalizations in a large cohort of community-dwelling older women. Specifically, we identify that a Vitamin K1 intake of approximately ≥100 μg d−1 is associated with lower fracture risk in older women. Most importantly, such intakes can easily be achieved by consuming one to two serves per day (75 to 150 g) of vegetables such as spinach, kale, broccoli and cabbage. 23 These recommendations are in line with public health guidelines advocating higher vegetable intake (e.g. ≥5 serves daily), 42 which include one to two serves of green leafy vegetables. Author contributions MS, AS, LCB, JMH, RLP, JRL designed the research. KZ, EB, WHL, RLP, JRL conducted the research. MS, AS, RM, LCB, NPB analysed the data. MS, AS, RLP, JRL wrote the paper. MS has primary responsibility for the final content. All authors read and approved the final manuscript. Conflicts of interest MS, AS, LCB, NPB, RM, EB, WHL, KZ, JMH, JRL, RLP declare no conflicts of interest. Acknowledgements The authors wish to thank the staff at the Western Australia Data Linkage Branch, Hospital Morbidity Data Collection and Registry of Births, Deaths and Marriages for their work on providing the data for this study. The Perth Longitudinal Study of Ageing in Women (PLSAW) was funded by Healthway, the Western Australian Health Promotion Foundation, and by project grants 254627, 303169 and 572604 from the National Health and Medical Research Council (NHMRC) of Australia. This work was supported by a Department of Health, Western Australia, Merit Award. M. S. is supported by a Royal Perth Hospital Career Advancement Fellowship (CAF 130/2020), and an Emerging Leader Fellowship and project grant from the Western Australian Future Health and Innovation Fund. L. C. B. is supported by a National Health and Medical Research Council (NHMRC) of Australia Emerging Leadership Investigator Grant (ID: 1172987) and a National Heart Foundation of Australia Post-Doctoral Research Fellowship (ID: 102498). J. M. H. is supported by an NHMRC of Australia Senior Research Fellowship (ID: 1116973). J. R. L. is supported by a National Heart Foundation of Australia Future Leader Fellowship (ID: 102817). None of these funding agencies had any role in the conduct of the study; collection, management, analysis or interpretation of the data; or preparation, review or approval of the manuscript.
Breaking bones can be life-changing events—especially as we age, when hip fractures can become particularly damaging and result in disability, compromised independence and a higher mortality risk. But research from Edith Cowan University's Nutrition and Health Innovation Research Institute has revealed there may be something you can do to help reduce your risk of fractures later in life. In collaboration with the University of Western Australia, the study looked at the relationship between fracture-related hospitalizations and vitamin K1 intake in almost 1,400 older Australian women over a 14.5-year period from the Perth Longitudinal Study of Ageing in Women. It found women who consumed more than 100 micrograms of vitamin K1 per day—equivalent to about 125g of dark leafy vegetables, or one to two serves of vegetables—were 31% less likely to have any fracture compared to participants who consumed less than 60 micrograms per day, which is the current vitamin K adequate intake guideline in Australia for women. There were even more positive results regarding hip fractures, with those who ate the most vitamin K1 cutting their risk of hospitalization almost in half (49%). Study lead Dr. Marc Sim said the results were further evidence of the benefits of vitamin K1, which has also been shown to enhance cardiovascular health. "Our results are independent of many established factors for fracture rates, including body mass index, calcium intake, Vitamin D status and prevalent disease," he said. "Basic studies of vitamin K1 have identified a critical role in the carboxylation of the vitamin K1-dependent bone proteins such as osteocalcin, which is believed to improve bone toughness. "A previous ECU trial indicates dietary vitamin K1 intakes of less than 100 micrograms per day may be too low for this carboxylation. "Vitamin K1 may also promote bone health by inhibiting various bone resorbing agents." So, what should we eat, and how much? Dr. Sim said eating more than 100 micrograms of vitamin K1 daily was ideal—and, happily, it isn't too difficult to do. "Consuming this much daily vitamin K1 can easily be achieved by consuming between 75–150g, equivalent to one to two serves, of vegetables such as spinach, kale, broccoli and cabbage," he said. "It's another reason to follow public health guidelines, which advocate higher vegetable intake including one to two serves of green leafy vegetables—which is in line with our study's recommendations." The study is published in Food & Function.
10.1039/D2FO02494B
Biology
Possible genetic link found between hypothyroidism and development of canine T-zone lymphoma
Julia D. Labadie et al, Genome-wide association analysis of canine T zone lymphoma identifies link to hypothyroidism and a shared association with mast-cell tumors, BMC Genomics (2020). DOI: 10.1186/s12864-020-06872-9 Journal information: BMC Genomics
http://dx.doi.org/10.1186/s12864-020-06872-9
https://phys.org/news/2020-09-genetic-link-hypothyroidism-canine-t-zone.html
Abstract Background T zone lymphoma (TZL), a histologic variant of peripheral T cell lymphoma, represents about 12% of all canine lymphomas. Golden Retrievers appear predisposed, representing over 40% of TZL cases. Prior research found that asymptomatic aged Golden Retrievers frequently have populations of T zone-like cells (phenotypically identical to TZL) of undetermined significance (TZUS), potentially representing a pre-clinical state. These findings suggest a genetic risk factor for this disease and led us to investigate potential genes of interest using a genome-wide association study of privately-owned U.S. Golden Retrievers. Results Dogs were categorized as TZL (n = 95), TZUS (n = 142), or control (n = 101) using flow cytometry and genotyped using the Illumina CanineHD BeadChip. Using a mixed linear model adjusting for population stratification, we found associations with genome-wide significance in regions on chromosomes 8 and 14. The chromosome 14 peak included four SNPs (odds ratio = 1.18–1.19, p = .3 × 10−5 to 5.1 × 10−5) near three hyaluronidase genes (SPAM1, HYAL4, and HYALP1). Targeted resequencing of this region using a custom sequence capture array identified missense mutations in all three genes; the variant in SPAM1 was predicted to be damaging. These mutations were also associated with risk for mast cell tumors among Golden Retrievers in an unrelated study. The chromosome 8 peak contained seven SNPs (odds ratio = 1.24–1.42, p = 2.7 × 10−7 to 7.5 × 10−5) near genes involved in thyroid hormone regulation (DIO2 and TSHR). A prior study from our laboratory found hypothyroidism is inversely associated with TZL risk. No coding mutations were found with targeted resequencing, but the identified variants may play a regulatory role for some or all of these genes. Conclusions The pathogenesis of canine TZL may be related to hyaluronan breakdown and the subsequent production of pro-inflammatory and pro-oncogenic byproducts. The association on chromosome 8 may indicate thyroid hormone is involved in TZL development, consistent with findings from a previous study evaluating epidemiologic risk factors for TZL. Future work is needed to elucidate these mechanisms. Background T zone lymphoma (TZL), a histologic variant of peripheral T cell lymphoma (PTCL), accounts for about 12% of all canine lymphomas [1, 2] but is almost never seen in human patients. In dogs, this disease follows an indolent course with average survival of >2 years independent of treatment, compared to <1 year with most other lymphoma subtypes [3, 4, 5]. TZL can be readily diagnosed by histopathology or by flow cytometric identification of a homogeneous expansion of T cells lacking expression of CD45, a pan-leukocyte surface marker [3, 6, 7]. Previously, we observed that >30% of Golden Retrievers without lymphocytosis or lymphadenopathy have T cells phenotypically similar to TZL (lacking CD45 expression) in their blood [8]; as we are unsure of the clinical relevance of this finding, we have adopted the term T zone-like cells of undetermined significance (TZUS) for these dogs [9]. We hypothesize that TZUS may represent a pre-clinical state that could undergo neoplastic transformation and progress to overt TZL. Few studies have investigated the pathogenesis of canine TZL. We recently reported that both hypothyroidism and omega-3 supplementation are associated with decreased odds of TZL [9]. It has also been noted that over 40% of TZL cases are Golden Retrievers [3].
This finding suggests a genetic predisposition for TZL and led us to pursue a study to identify potential pathways of interest. To date, no studies have agnostically evaluated germline risk for PTCL in dogs or humans. The objective of this study was to identify genetic risk factors for canine TZL using a genome-wide association study (GWAS) and subsequent targeted sequencing, with the aim of providing insight into the etiology of, and underlying risk for developing, this disease. Results The source population included 95 TZL cases (ages 7–14 years), 142 TZUS dogs >9 years old (dogs with no clinical signs of TZL, but with >1% of T cells being CD5+CD45−), and 101 control dogs >9 years old (dogs with no clinical signs of TZL and no CD5+CD45− T cells). Sixteen dogs were removed due to low genotyping rate (<97.5%; 7 TZL, 5 TZUS, 4 controls) and 6 were removed due to suspected European origin (2 TZUS, 4 controls). After quality filtering, a final dataset of 267 dogs (79 TZL, 108 TZUS, 80 controls) and 110,405 single nucleotide polymorphisms (SNPs) was used for association analyses. TZUS and controls indistinguishable by GWAS When the combined TZL and TZUS group was compared to controls, no p-values were outside the 95% confidence interval threshold on the quantile-quantile (QQ) plot (Additional file 1A). In contrast, when TZL were compared to the combined TZUS and control group, a group of SNPs significantly deviated from the expected distribution (Fig. 1). Supporting this, pairwise GWAS of TZL versus controls and TZL versus TZUS showed suggestive associations for this group of SNPs, despite none of the p-values falling outside the 95% confidence interval (CI) on the QQ-plot (Additional files 1B and C). This implies TZUS and controls are similar, and the enhanced power from combining them as a reference group allows those SNPs to reach genome-wide significance. In contrast, the TZUS versus control comparison did not share any suggestive SNPs with the TZL versus control comparison, as would be expected if TZL and TZUS were similar. We thus chose to combine TZUS and controls for our main analysis and reference it as the "TZL versus all" comparison for the remainder of the paper. Fig. 1 GWA for TZL cases vs. combined reference (TZUS + controls). Left, QQ-plot demonstrating that observed p-values deviate from the expected at a significance level of p < 10−4; the shaded area indicates the 95% confidence interval. Right, Manhattan plot showing peaks significantly associated with TZL at a genome-wide level of p < 10−4. Top peak is near thyroid stimulating hormone receptor locus The strongest GWAS peak contained seven SNPs on chromosome 8, from 52,650,576–53,818,371 bp (Fig. 1; Table 1). The associated allele for these SNPs was present in about 16% of TZL (range 15–25%) compared to 6% of the reference group (range 4–12%). The top SNP (BICF2P948919; odds ratio [OR] = 1.39, p = 2.66 × 10−7) was located at 53,818,371 bp and was in strong linkage disequilibrium (LD) (R2 > 0.7) with three significantly associated SNPs in the region and in moderate LD (R2 0.25–0.6) with the other three significantly associated SNPs (Fig. 2). Using the PLINK clumping analysis, we determined that the four SNPs in strong LD (including the top SNP) formed one haplotype block, and that the remaining three SNPs were not in strong enough LD with any other SNPs to form blocks (a minimal sketch of this kind of LD-based grouping is given below).
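A minimal sketch of the LD computation behind this kind of haplotype-block building, using the squared Pearson correlation between genotype dosage vectors as the r2 estimate (a common approximation) and a greedy clump around the most significant SNP; the genotypes, p-values, and 0.7 threshold below are illustrative, not the study's:

import numpy as np

rng = np.random.default_rng(0)
G = rng.integers(0, 3, size=(267, 7)).astype(float)   # stand-in dosages: 267 dogs x 7 SNPs

# Pairwise r^2 between SNPs = squared correlation of dosage vectors.
r2 = np.corrcoef(G, rowvar=False) ** 2

# Greedy clumping: seed with the most significant SNP, absorb SNPs with r^2 > 0.7.
pvals = np.array([2.7e-7, 3e-6, 8e-6, 2e-5, 4e-5, 6e-5, 7.5e-5])   # illustrative
seed = int(np.argmin(pvals))
block = [j for j in range(G.shape[1]) if r2[seed, j] > 0.7]
print("clump around SNP", seed, "->", block)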
The p-values for all seven associated SNPs on chromosome 8 were non-significant (range 0.17–0.99) in the conditional analysis, suggesting they represent one signal (Table 1). The haplotype block containing the top SNP lies within a non-coding region of the Suppressor of Lin-12-Like Protein 1 (SEL1L) gene (Fig. 2). Having at least one risk haplotype was substantially more common among TZL (29%) than among TZUS or controls (12% and 7.5%, respectively). Table 1 SNPs significantly associated with TZL at the genome-wide level. Fig. 2 Close-up of the chromosome 8 peak. a R2 from the top SNP (BICF2P948919) is depicted to show LD structure. b Close-up view of the genes located in the region with R2 > 0.2. All associated SNPs are depicted in red; the haplotype block containing the top 4 SNPs is highlighted in yellow. c Haplotype block containing the 4 associated SNPs (BICF2P1080535, BICF2P1048848, BICF2P184533, and BICF2P948919). The risk haplotype was TAGG and the non-risk haplotype was CGAA. Dogs were considered recombined if neither combination was present. Targeted resequencing of the chromosome 8 region identifies potential regulatory variants Targeted resequencing was performed on 16 dogs selected for variation in risk and non-risk haplotypes. Sequence capture of the 3 Mb region on chromosome 8 identified 814 single nucleotide variants (SNVs) and 229 insertions and deletions (indels) that passed our filters. Median coverage across the region was 131x. Three synonymous coding variants were found in the SEL1L gene (cfa8:53,771,782, cfa8:52,779,502, cfa8:53,797,623). All other identified variants were potential modifiers, including 3′ UTR variants (three SNVs and one indel near CEP128, two SNVs near GTF2A1), up- and downstream gene variants, intron variants, and non-coding transcript exon variants (Additional files 2 and 3). Evaluation of the corresponding positions in the human genome (685 variants could be converted [541 SNVs, 144 indels]) determined that multiple variants lie in potential regulatory elements, based on H3K27AC marks and GeneHancer scoring. Two sets of variants were in enhancers for DIO2 (Type II Iodothyronine Deiodinase) and seven sets of variants were in enhancers for combinations of CEP128 (Centrosomal Protein 128), GTF2A1 (General Transcription Factor IIA Subunit 1), STON2 (Stonin 2), and SEL1L (Additional file 4). Shared association with mast cell tumor cases on chromosome 14 The second top association peak was on chromosome 14 and contained four SNPs from 11,778,977–11,807,161 bp (Table 1). All SNPs were in strong LD (R2 > 0.9) with the top SNP (OR = 1.18, p = 8.39 × 10−5). Three of the four SNPs had previously been reported to be associated with mast cell tumors (MCTs) among American Golden Retrievers [10]. Thus, we assessed our data in combination with the American Golden Retriever data from the publicly available MCT dataset. After independently conducting the quality control protocol outlined in the methods section for each dataset, the files were merged so that the new "case" population included TZL and MCT cases, whereas the reference population contained TZUS and controls from the TZL dataset plus controls from the MCT dataset. Multidimensional scaling (MDS) was performed using PLINK to assess for population stratification (Additional file 5; a PCA-based sketch of this kind of check is given below). The chromosome 14 peak for the combined dataset was wider and more strongly associated, with the top SNP reaching p = 1.5 × 10−9 (Fig. 3; a similar association, without the addition of controls from the MCT dataset, is shown in Additional file 6A).
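PLINK's MDS works on identity-by-state distances; a commonly used near-equivalent check is PCA of the standardized genotype matrix, where clustering along the top axes flags stratification. A minimal sketch with stand-in genotypes:

import numpy as np

rng = np.random.default_rng(1)
G = rng.integers(0, 3, size=(300, 5000)).astype(float)   # stand-in dosages

# Standardize each SNP by its allele frequency, as in standard genotype PCA.
p = G.mean(axis=0) / 2.0
X = (G - 2 * p) / np.sqrt(2 * p * (1 - p))

# Top axes of variation; plot these colored by cohort/phenotype to inspect.
U, s, _ = np.linalg.svd(X, full_matrices=False)
pcs = U[:, :2] * s[:2]
print(pcs[:5])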
A GWAS including the TZL dataset and only the MCT controls showed no increased association at the chromosome 14 peak (Additional file 6B), confirming that this is a shared association for the two different cancers and not simply a result of the increased power from additional controls. We evaluated haplotype blocks in the combined dataset. The top SNP from the combined dataset was the same as the top SNP in the TZL-only dataset (BICF2G630521681; Table 2). These SNPs are part of a nine-SNP haplotype block that spans 11,695,969–11,807,161 bp (Fig. 4). When we ran a conditional GWAS controlling for the top SNP, none of the SNPs in the larger associated region remained significant (p > 0.3), suggesting they all represent one signal (Table 2, Additional file 7; a minimal sketch of this kind of conditional test is given below). The haplotype block containing the top SNP spans three hyaluronidase genes: Sperm Adhesion Molecule 1 (SPAM1; formerly called HYAL1), Hyaluronoglucosaminidase 4 (HYAL4), and a hyaluronidase 4-like gene (ENSCAFG00000024436/HYALP1). In our dataset, 85% of TZL cases (67/79) had at least one risk haplotype (versus 71% of TZUS [77/108] and 65% of controls [52/80]); 34% of TZL were homozygous for the risk haplotype (27/79) (versus 7% of TZUS [11/108] and 9% of controls [7/80]) (Fig. 4). Fig. 3 GWA for the combined TZL and MCT datasets. QQ-plot (left) and Manhattan plot (right). Table 2 SNPs in the chromosome 14 haplotype block from the combined TZL + MCT GWAS. Fig. 4 Close-up of the chromosome 14 peak depicting the change in signal with the MCT dataset added. a TZL dataset only. b Combined TZL and MCT dataset. c Close-up of the region from 8–12 Mbp containing SNPs with R2 > 0.2. d Close-up view of genes located in the region from 11 to 12 Mb. The four SNPs significantly associated with TZL are depicted in red and the nine-SNP haplotype block they represent is shaded in yellow. e Close-up of the region from 11.7–11.8 Mbp where coding mutations (shown in red) were found on resequencing. f Haplotype block containing the nine associated SNPs on cfa14 (BICF2G630521558, BICF2G630521572, BICF2G630521606, BICF2G630521619, BICF2P867665, TIGRP2P186605, BICF2G630521678, BICF2G630521681, BICF2G630521696). The risk haplotype was CTTCGGACG and the non-risk haplotype was TCCTTAGTA. Dogs were considered recombined if neither combination was present and unknown if the genotype for one or more SNPs was missing. Targeted resequencing of chromosome 14 region identifies coding mutations in hyaluronidase genes Median coverage across the 8 Mb region sequenced on chromosome 14 was 140x; 1404 SNVs and 742 indels were identified after quality control and filtering. Five mutations causing amino acid changes within coding regions of the three hyaluronidase genes (SPAM1, HYAL4, and ENSCAFG00000024436) were identified (Fig. 4); all mutations followed the associated haplotype identified by GWAS. The mutation within the SPAM1 gene (cfa14:11,704,952, Lys482Arg) was predicted to be "possibly damaging" (PolyPhen-2 score 0.91). The three mutations in the HYAL4 gene (cfa14:11,736,613, Gly454Ser; cfa14:11,736,674, Ser434Phe; cfa14:11,736,843, Leu378Ile) and the one within ENSCAFG00000024436 (cfa14:11,760,826, Met463Thr) were predicted to be benign (PolyPhen-2 scores < 0.15). Conversion of these coordinates to CanFam2 determined that the non-synonymous mutations in SPAM1 and HYAL4 were identical to those identified in the MCT study [10].
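The conditional analysis above can be approximated per SNP with a logistic model that adds the top SNP's dosage as a covariate: if a neighbour's signal disappears once the top SNP is included, the SNPs tag one signal. A toy sketch (the study used a mixed model; this omits the relatedness and stratification adjustments, and all data are simulated):

import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
n = 267
top = rng.integers(0, 3, n).astype(float)                          # top-SNP dosage
snp = np.clip(top + rng.integers(-1, 2, n), 0, 2).astype(float)    # an LD partner
y = rng.binomial(1, 1.0 / (1.0 + np.exp(-(0.8 * top - 1.0))))      # phenotype

# Test `snp` unconditionally, then conditioning on `top`.
for X in (np.column_stack([snp]), np.column_stack([snp, top])):
    fit = sm.Logit(y, sm.add_constant(X)).fit(disp=0)
    print(fit.pvalues[1])   # p-value for `snp`; rises once `top` is included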
Additional non-coding variants were identified near these genes, including 5′ UTR variants (two SNVs and one indel in HYAL4), 3′ UTR variants (two SNVs and two indels in HYAL4, and three SNVs in SPAM1), up- and downstream gene variants, and intron variants (Additional files 2 and 3). One synonymous coding SNP was identified in ENSCAFG00000024436 (cfa14:11,768,664) (Additional file 2). Potential cumulative risk for chromosomes 8 and 14 The distribution of the number of risk haplotypes by phenotype is shown in Fig. 5. Only 8 dogs (7 of which were cases) had >3 risk haplotypes, so counts were categorized as 0, 1, and ≥2 for analysis. The number of risk haplotypes was significantly associated with TZL (p < 0.001), indicating a potential cumulative risk (a minimal sketch of such a test is given below). Larger sample sizes are necessary to evaluate statistical interaction of the chromosome 8 and 14 haplotypes. Fig. 5 Distribution of haplotype scores. Dogs were scored from zero to four based on the number of risk haplotypes for chromosomes 8 and 14. Recombined haplotypes were considered non-risk. Dogs were considered unknown if the genotype for one or more SNPs was missing.
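A minimal sketch of such a test of association between haplotype-count category (0, 1, ≥2) and phenotype, here as a chi-squared test on a contingency table; the counts below are illustrative stand-ins, not the Fig. 5 data:

import numpy as np
from scipy.stats import chi2_contingency

table = np.array([[10, 30, 39],    # TZL by 0 / 1 / >=2 risk haplotypes (stand-in)
                  [80, 75, 33]])   # TZUS + controls (stand-in)
chi2, p, dof, _ = chi2_contingency(table)
print(f"chi2={chi2:.1f}, dof={dof}, p={p:.2g}")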
Additional significantly associated GWAS SNPs Associated SNPs were also seen on chromosomes 2, 17, and 29, but our study did not have the power to accurately determine the regions of association. We conducted a restricted maximum likelihood analysis [11], assuming TZL has a 2% prevalence in the Golden Retriever breed, and found that the combined set of 17 significant SNPs in our dataset (Table 1) explained approximately 15% (standard error 7%) of the phenotypic variance, whereas all genotyped SNPs explained approximately 49% (standard error 13%). Discussion In a GWAS to identify genetic risk factors for TZL in Golden Retrievers, we identified associated regions on chromosomes 8 and 14. Subsequent resequencing of a subset of dogs identified non-synonymous mutations in three hyaluronidase genes on chromosome 14 (SPAM1, HYAL4, and HYALP1). Coding mutations were not found in the chromosome 8 region, but the identified variants may be located in regulatory elements for numerous genes, including DIO2, CEP128, GTF2A1, STON2, and SEL1L. Mutations in hyaluronidase genes are associated with risk for TZL and MCT GWAS analysis and subsequent resequencing identified mutations in SPAM1 and HYAL4 identical to those seen in Arendt et al.'s MCT study [10], highlighting a potential shared mechanism for TZL and MCT pathogenesis. One potential mechanism is via hyaluronan turnover, which is driven by the interaction of hyaluronan and CD44, a cell surface glycoprotein expressed on both T cells and mast cells [12]. This turnover leads to increased low molecular weight hyaluronan, the byproducts of which are pro-inflammatory and pro-oncogenic, with implications for cell proliferation, migration, and angiogenesis [13, 14]. In contrast, high molecular weight hyaluronan and decreased hyaluronidase activity have been associated with the increased longevity and cancer resistance seen in naked mole rats [13]. It would be informative to measure hyaluronan in TZL and controls to determine whether the ratio of low to high molecular weight hyaluronan is altered in TZL. Most mammals have six hyaluronidase-like genes, clustered on two chromosomes. In dogs, HYAL1, HYAL2 and HYAL3 are clustered on cfa20, whereas SPAM1, HYAL4, and ENSCAFG00000024436 are clustered on cfa14. ENSCAFG00000024436 is homologous to HYALP1, which is an expressed pseudogene in people [15]. HYALP1 is believed to be functional in other mammals [15], although its functional status is unknown in dogs. SPAM1 is considered a testis hyaluronidase and is important during egg fertilization by sperm [16]. However, SPAM1 has been detected in the epididymis, seminal vesicles, prostate, female genital tract, breast, placenta, fetal tissue, and certain malignancies [17, 18, 19], suggesting it is multifunctional and not sperm-specific. Despite the potential shared pathogenesis of TZL and MCT, we did not see an association between MCT and TZL in our dataset. Of dogs whose medical history was known, 3/76 TZL (4%), 8/142 TZUS (6%), and 4/103 controls (4%) had a history of or concurrent MCT. This suggests the diseases develop independently despite their shared mechanisms. More research is necessary to understand the role of these hyaluronidases in dogs and to evaluate how the observed variants alter the expression of hyaluronidases and downstream signaling. Thyroid hormone metabolism may influence TZL risk In a parallel study, we determined that dogs with hypothyroidism were significantly less likely to develop TZL than dogs without hypothyroidism [9]. As thyroid hormone plays an important role in cell growth and metabolism, we hypothesize that lack of this hormone may decrease T cell proliferation and therefore help prevent the development of TZL. A recent study reported an association between polymorphisms in CEP128 and autoimmune thyroid disease in humans, although the mechanism underlying this association is unclear [20]. While we did not identify coding mutations within CEP128, it is possible that the mutations we identified in regulatory elements could have similar downstream effects. Additionally, while SnpEff did not predict our SNVs to be modifiers of DIO2 or TSHR, it is possible that the regulatory elements of these genes are far up- or downstream, as seen in people. Canine genome annotations for this region may not yet be able to predict these relationships. While canine hypothyroidism is generally thought to be caused by lymphocytic thyroiditis or idiopathic atrophy [21], it is plausible that changes in the expression of DIO2 or TSHR could influence its development. Thyroid hormone regulation depends on an axis of multiple hormones and organs. Thyroid stimulating hormone, released from the pituitary, binds TSHR on the thyroid gland, causing release of thyroxine and, to a lesser extent, triiodothyronine [22]. DIO2 encodes one of the two deiodinases responsible for converting thyroxine to triiodothyronine, the more active form, in the peripheral organs [23]. It is feasible that changes in the expression of either of these genes could alter thyroid hormone production and function. SEL1L, another gene in this region, encodes a protein that is part of a complex involved in endoplasmic reticulum-associated degradation of misfolded proteins [24]. Interestingly, levels of Deiodinase 2, the product of DIO2, are tightly regulated, and its synthesis can be inhibited by endoplasmic reticulum stress via endoplasmic reticulum-associated degradation [25]. Thus, alterations in SEL1L and endoplasmic reticulum stress could also impact thyroid hormone regulation.
While a role of thyroid hormone is plausible based on the variants identified on cfa8 and our parallel finding of an inverse association between hypothyroidism and TZL risk, we cannot rule out the possibility of a spurious finding due to chance overrepresentation of dogs with hypothyroidism among our control population. Further studies are needed to validate this finding in an independent population. Golden Retriever predisposition to TZL We believe TZUS represents a precursor state to TZL, so we hypothesized that TZUS dogs would share the same genetic variants as TZL cases. However, we were unable to differentiate TZUS from controls in our GWAS analysis. The high prevalence of TZL among Golden Retrievers and the corresponding high prevalence of TZUS suggest that the genetic basis for developing CD5 + CD45 − T cells may be fixed in this breed, and distinct from the genes that control progression to neoplasia in these cells. If this is the case, we would be unable to identify the genetic risk factor for developing CD5 + CD45 − T cells in our study. Future studies may evaluate fixed regions of the Golden Retriever genome to identify candidate genes. Additionally, a GWAS of TZL in a less predisposed breed may identify additional associated regions not found in our study. It is worth noting that Golden Retrievers of European descent appear less likely to develop TZL [ 26 ]. As such, delving into genomic differences between European and American Golden Retrievers may provide insight into regions that could underlie TZL risk. Conclusions Canine genomics is informative for human genomics and offers computational benefits due to the comparatively recent development of dog breeds. Within a dog breed, there is reduced genetic variation [ 27 , 28 ], allowing us to use smaller sample sizes and fewer genetic markers when evaluating genetic risk factors for canine diseases [ 28 , 29 , 30 ]. Little is known about the functional implications of the mutations identified on cfa8. Since variants in this region are in moderate to high LD, it is difficult to prioritize which variants are important in disease pathogenesis versus which are bystanders inherited with the causative mutation. Additional studies are necessary to elucidate these associations and better understand the effect of these variants. The likely importance of hyaluronidases and the shared association with MCT is noteworthy and warrants further investigation. Further research will increase our understanding of how these coding mutations alter hyaluronidase function. Ultimately, future research will help elucidate TZL pathogenesis and identify causative variants that may serve as biomarkers of disease risk or potential therapeutic targets. Methods Study participants All dogs were recruited from the privately-owned pet population in the United States from October 2013 through May 2015. The study was conducted with approval from the Colorado State University Institutional Animal Care and Use Committee (Protocol 13-4473A). Written informed consent was obtained from all dog owners; dogs remained under the custody and care of their owners for the duration of the study. Detailed recruitment information for the larger study population has been previously described [ 9 ]. Briefly, TZL cases were identified through submissions to Colorado State University's Clinical Immunology laboratory. 
Lymphoma-free Golden Retrievers aged > 9 years were recruited from 1) the submitting clinic of TZL cases and 2) email solicitation to Golden Retriever owners in the Canine Lifetime Health Project [ 9 ]. Peripheral blood samples were obtained from all participants, and a subset of dogs with adequate DNA quality and quantity (as described below) were used for the GWAS study. Flow cytometric analysis of peripheral blood samples was used to categorize dogs as TZL, TZUS, or controls. Flow cytometry was carried out as previously described [ 3 ] and samples were analyzed with the antibody combinations listed in Additional file 8 using a 3-laser Coulter Gallios. Footnote 2 We defined TZL cases ( n = 95) as a homogeneous expansion of CD5 + CD45 − T cells and lymphocytosis (> 5000 lymphocytes/μL), lymphadenopathy (noted on veterinarian-completed submission form), or both (Additional file 9 A). We defined dogs as TZUS ( n = 142) if they were > 9 years of age and had no history or clinical signs of a lymphoproliferative disease (no lymphadenopathy or lymphocytosis), but had a small population of CD5 + CD45 − T cells on flow cytometry (> 1% of total T cells; Additional file 9 B). Control dogs ( n = 101) were those > 9 years of age with no history or suspicion of a lymphoproliferative disease, no population of CD5 + CD45 − T cells identified by flow cytometry ( < 1% of total T cells; Additional file 9 C), and no evidence of a clonal T cell population in the peripheral blood as assessed using the PCR for Antigen Receptor Rearrangement (PARR) assay [ 31 ]. Genome-wide association mapping Genomic DNA was extracted from white blood cell pellets of peripheral blood samples using the QIAamp DNA blood Midi Kit. Footnote 3 DNA quality and quantity were assessed using NanoDrop Footnote 4 and only samples with 1) a concentration of at least 35 ng/μL, 2) over 1000 ng of DNA total, and 3) an A260/280 ratio of 1.7–2.0 were submitted for genotyping. Genotyping was performed at GeneSeek Inc. Footnote 5 using the Illumina 170 K CanineHD BeadChip SNP array [ 32 ]. PLINK software [ 33 , 34 ] was used to perform data quality control, removing individuals with call rates < 97.5% and SNPs with call rates < 97.5% or minor allele frequency < 5%. Only autosomal chromosomes were analyzed. MDS was performed using PLINK to ensure there were no distinct groupings based on phenotype (TZL/TZUS/control), which would indicate population stratification and/or residual confounding. While we saw no obvious deviations on these plots, we were concerned for bias based on European versus American descent due to apparent divergence in this breed [ 10 , 29 ]. To determine whether any of our dogs were likely of European descent, we downloaded a publicly available dataset 1 [ 10 ] including both European and American Golden Retrievers, conducted the same quality control protocol as described above, and merged the two datasets. We then created MDS plots to determine which dogs in our dataset clustered with the known European dogs. Dogs with a value for the first cluster < 2.5 standard deviations from the mean value for known European dogs were removed (Additional file 10 ). Genome-wide complex trait analysis (GCTA) software [ 11 ] was used to estimate a genetic relationship matrix (GRM) and remove highly related individuals (one dog was removed for each pair of dogs with the same phenotype and a GRM value of 0.25 [half-sibling level]). 
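The relatedness-pruning step just described can be made concrete with a small sketch. Assuming a genotype matrix coded as 0/1/2 minor-allele counts (individuals by SNPs), a GCTA-style GRM is the cross-product of standardized genotypes, and pairs at or above the 0.25 half-sibling threshold are then flagged so that one member of each pair can be dropped. This is our own minimal illustration with hypothetical names (geno, grm), not the study's code, and it assumes all SNPs are polymorphic:

```python
import numpy as np

def grm(geno):
    """GCTA-style genetic relationship matrix from an
    (n_individuals, n_snps) genotype matrix coded 0/1/2.
    Assumes every SNP is polymorphic (0 < p < 1)."""
    p = geno.mean(axis=0) / 2.0                          # per-SNP allele frequency
    z = (geno - 2.0 * p) / np.sqrt(2.0 * p * (1.0 - p))  # standardized genotypes
    return z @ z.T / geno.shape[1]

# Toy data: 20 dogs, 500 SNPs; flag pairs at half-sibling relatedness or above.
rng = np.random.default_rng(0)
geno = rng.integers(0, 3, size=(20, 500)).astype(float)
g = grm(geno)
pairs = np.argwhere(np.triu(g, k=1) >= 0.25)  # one member of each pair is dropped
print(pairs)
```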
The disease-genotype association was estimated using GCTA, adjusting for the first principal component of the GRM in a mixed linear model to correct for cryptic relatedness [ 35 ]. QQ-plots with 95% CIs, calculated based on the beta distribution of the observed p -values, were created to assess possible genomic inflation and to establish suggestive significance levels (a minimal sketch of this construction is given below). It is currently unclear whether TZUS dogs share some or all of the genetic predisposition for TZL, or whether they simply represent normal variation among controls. To determine in which category they are most genetically informative, we performed separate association studies comparing 1) combined TZL and TZUS versus controls, 2) TZL versus combined TZUS and controls, and 3) pairwise comparisons (TZL versus control, TZL versus TZUS, and TZUS versus control). Quantile-quantile plots were used to determine which analyses had enough power to be evaluated. After peaks of interest were identified, we used GCTA to conduct a conditional GWAS. By adjusting for the genotype of the top SNP of each peak of interest, we can evaluate whether the other significantly associated SNPs in that peak are statistically independent of the top SNP (i.e., whether there is one peak or multiple peaks). Haplotype block definition and association analysis Haplotype blocks for associated loci were defined based on boundaries identified both by clumping analysis in PLINK and R²-based LD analysis in Haploview [ 36 ]. For clumping analysis, the dataset was subset to a region including all SNPs with R² > 0.2 from the top SNP on that chromosome. Because of the genomic structure of dogs, a maximum window size was set to 5000 kb. For each block, haplotype frequencies, chi-square tests, and p -values were obtained using PLINK. To assess cumulative risk, we additionally categorized dogs by the number of risk haplotypes (zero to four) present for the associated regions. Logistic regression was used to evaluate the association between the number of risk haplotypes and TZL. Targeted sequencing Sixteen dogs (10 TZL, 3 TZUS, 3 controls), selected for optimal haplotype representation (i.e. to represent risk–risk, risk–non-risk, and non-risk–non-risk for each haplotype) and distribution in the MDS plot, were sequenced across the associated genomic regions (Additional file 11 ). A custom sequence capture array was designed (NimbleGen SeqCap EZ Developer Kit Footnote 6 ) to cover the top associated regions (16.1 Mb total; CanFam 3.1 cfa8:51,700,000-54,800,000, cfa14:8,000,000-16,100,000, cfa29:7,600,000-12,500,000). Regions were chosen to include all SNPs with R² > 0.2 from the top SNP. Standard indexed Illumina libraries were prepared with the KAPA HyperPlus library preparation kit. Footnote 7 Targeted pooled (4 samples) libraries were captured by hybridization in solution using the custom probe pool. Estimated coverage of the 16.1 Mb target region was 95%. Library construction, pooling, and capture were performed following the SeqCap EZ HyperCap Workflow User's Guide (V 1.0) 6 . Per suggestion from NimbleGen, developer's reagent (06684335001) was used in place of COT-1. Index-specific hybridization enhancing oligonucleotides were used to improve the efficiency of genomic region capture. Sequencing was carried out on an Illumina NextSeq 500. 
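As promised above, here is a minimal sketch of the QQ-plot confidence bands: under the null hypothesis, the i-th smallest of n uniform p-values follows a Beta(i, n − i + 1) distribution, whose quantiles give pointwise 95% limits on the expected −log10(p) curve. This is our own illustration assuming numpy/scipy; pvals is a hypothetical array of GWAS p-values, not the study's data:

```python
import numpy as np
from scipy.stats import beta

def qq_with_ci(pvals, level=0.95):
    """Expected -log10 quantiles and pointwise CI bounds for a QQ plot.
    Under the null, the i-th order statistic of n uniform p-values
    is Beta(i, n - i + 1) distributed."""
    n = len(pvals)
    i = np.arange(1, n + 1)
    observed = -np.log10(np.sort(pvals))
    expected = -np.log10(i / (n + 1))
    a = (1 - level) / 2
    lower = -np.log10(beta.ppf(1 - a, i, n - i + 1))  # lower -log10 band
    upper = -np.log10(beta.ppf(a, i, n - i + 1))      # upper -log10 band
    return expected, observed, lower, upper

# exp, obs, lo, hi = qq_with_ci(pvals)  # then plot obs vs exp with the band
```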
Sequencing data were pre-processed and aligned to the CanFam3.1 reference genome Footnote 8 using FastQC [ 37 ], Samtools [ 38 , 39 ], Picard Tools [ 40 ], Genome Analysis Toolkit [ 41 ], and BWA-MEM [ 42 ], as specified by Genome Analysis Toolkit best practices. Data were visualized using Integrative Genomics Viewer [ 43 ]. Genome Analysis Toolkit was used for base quality score recalibration (BaseRecalibrator), variant calling (HaplotypeCaller), and variant prioritization (VariantFiltration). Variants were additionally filtered based on adherence to the risk or non-risk haplotype (at least 80% adherence) and variants that passed this filter were annotated using SnpEff [ 44 ]. Coding SNPs were evaluated for predicted effect using PolyPhen-2 [ 45 ]. Ensembl was used to convert coordinates to the human genome to determine whether non-coding SNPs were in potential regulatory elements. Availability of data and materials The datasets analysed during the current study are available online: the primary data collected for this study, the data from the Arendt et al. MCT GWAS, and the CanFam 3.1 reference genome. Notes Genotyping data are available on the BROAD website: Beckman Coulter Inc., Brea, CA QIAGEN Inc., Germantown, MD Thermo Scientific, Wilmington, DE GeneSeek Inc., Lincoln, NE Roche Diagnostics Corporation, Indianapolis, IN KapaBiosystems, Wilmington, MA CanFam3.1 reference genome is available at: Abbreviations CI: Confidence interval GCTA: Genome-wide complex trait analysis GRM: Genetic relationship matrix GWAS: Genome-wide association study LD: Linkage disequilibrium MCT: Mast cell tumor MDS: Multidimensional scaling OR: Odds ratio PTCL: Peripheral T cell lymphoma SNP: Single nucleotide polymorphism SNV: Single nucleotide variant TZL: T zone lymphoma TZUS: T zone-like cells of undetermined significance QQ: Quantile-quantile
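For readers who want to reproduce the alignment-and-calling workflow described in the Methods above, the steps map onto a short sequence of commands. The sketch below is our own hedged outline driven from Python's subprocess module; file names, thread counts, the known-sites resource, and the filter expression are placeholders, and the options follow common BWA/GATK4 usage rather than the authors' exact invocations:

```python
import subprocess

def run(cmd):
    print(" ".join(cmd))
    subprocess.run(cmd, check=True)

ref = "CanFam3.1.fa"  # placeholder reference path

# Align reads and coordinate-sort (often piped in a shell; shown stepwise here).
run(["bwa", "mem", "-t", "8", ref, "R1.fastq.gz", "R2.fastq.gz", "-o", "aln.sam"])
run(["samtools", "sort", "-o", "aln.sorted.bam", "aln.sam"])

# Mark duplicates, recalibrate base qualities, then call and filter variants.
run(["gatk", "MarkDuplicates", "-I", "aln.sorted.bam",
     "-O", "dedup.bam", "-M", "dup_metrics.txt"])
run(["gatk", "BaseRecalibrator", "-I", "dedup.bam", "-R", ref,
     "--known-sites", "known_sites.vcf", "-O", "recal.table"])
run(["gatk", "ApplyBQSR", "-I", "dedup.bam", "-R", ref,
     "--bqsr-recal-file", "recal.table", "-O", "recal.bam"])
run(["gatk", "HaplotypeCaller", "-R", ref, "-I", "recal.bam",
     "-O", "raw.vcf.gz"])
run(["gatk", "VariantFiltration", "-R", ref, "-V", "raw.vcf.gz",
     "--filter-expression", "QD < 2.0", "--filter-name", "lowQD",
     "-O", "filtered.vcf.gz"])
```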
A genetic mutation might be the reason dogs with hypothyroidism are less likely to develop T-zone lymphoma (TZL). That's the finding from Morris Animal Foundation-funded researchers at Colorado State University who tried to identify genetic risk factors for TZL using a genome-wide association study (GWAS) and subsequent targeted sequencing. They recently published their results in the journal BMC Genomics. "Golden retrievers are predisposed to so many cancers," said Dr. Julia Labadie, Morris Animal Foundation epidemiologist, who conducted the study as part of her doctoral work. "Any piece of the puzzle we can solve to help us understand why can really help this breed. Of course, golden retrievers aren't the only dogs getting this cancer, so what we learn has implications for all dogs at risk for TZL." The published study is a follow-up to a 2019 publication from the same team that examined associations of environment and health history among golden retrievers with TZL. The team found that both hypothyroidism and omega-3 supplementation are associated with decreased risk of TZL and suggested a genetic predisposition for TZL. "This gives us one more piece of information which we can add to future studies to ultimately help us understand why dogs get this and other cancers," said Dr. Janet Patterson-Kane, Morris Animal Foundation Chief Scientific Officer. "With continued research, we are making progress toward the development of preventive measures, earlier diagnosis and successful treatments." The team used a subset of banked blood samples from the original study to look more closely for a genetic explanation for their initial findings. The samples came from 95 dogs that were positive for TZL, 142 dogs that possessed T-zone-like cells that may have been precancerous, and 101 dogs in a control group that were at least 9 years old and did not have the disease. Researchers extracted DNA from each sample and genotyped the samples to identify areas in the dogs' chromosomes that were associated with having or not having the disease. One region of interest was found on chromosome 8, which has an association with thyroid hormone regulation. While variation in the genes associated with thyroid function has not yet been confirmed, identifying the region containing thyroid hormone genes in the genetic risk factor study, as well as hypothyroidism in the environmental risk factor study, strongly suggests that this area of the dog's genome is an important clue to the underlying causes of T-zone lymphoma. "This study illustrates the value of combined genetic and environmental risk factor analysis, because identifying hypothyroidism in the environmental study, as well as a genetic region that governs thyroid function in the genetic study, highlights the importance of this part of the dog genome in this disease," said Dr. Anne Avery, Professor, Department of Microbiology, Immunology and Pathology at Colorado State University. "The relevant genes may be the thyroid function genes themselves, or other genes in the region, but the strong evidence from the combined studies about the importance of this genetic region means that we can be confident that further focus on this area will be fruitful." The CSU team also found four variants on chromosome 14 that were associated with an increased risk of TZL. These same variants were previously found to be associated with risk for mast cell tumors among golden retrievers, which could suggest a shared mechanism underlying development of the two cancers. 
That unrelated study, from the Broad Institute, used similar genotyping methods. The CSU team didn't find that dogs with the chromosomal variants were likely to have both tumors, though. T-zone lymphoma is a slowly progressive form of lymphoma that usually develops in older dogs, comprising about 12% of canine lymphoma cases. It is far more prevalent in golden retrievers than in any other breed, with golden retrievers representing over 40% of all reported cases. Most of the golden retrievers in this study's control group were drawn from Morris Animal Foundation's Canine Lifetime Health Project. This is a registry of dogs whose owners are interested in participating in clinical trials and other studies to improve canine health. Many of the dogs entered the registry during the recruitment phase of Morris Animal Foundation's Golden Retriever Lifetime Study, but were too old to participate in the Study at the time of their enrollment.
10.1186/s12864-020-06872-9
Earth
Changing ocean currents are driving extreme winter weather
Jianjun Yin et al. Influence of the Atlantic meridional overturning circulation on the U.S. extreme cold weather, Communications Earth & Environment (2021). DOI: 10.1038/s43247-021-00290-9 Journal information: Communications Earth & Environment
http://dx.doi.org/10.1038/s43247-021-00290-9
https://phys.org/news/2021-10-ocean-currents-extreme-winter-weather.html
Abstract Due to its large northward heat transport, the Atlantic meridional overturning circulation influences both weather and climate in the mid-latitude Northern Hemisphere. Here we use a state-of-the-art global weather/climate modeling system with high resolution (GFDL CM4C192) to quantify this influence, focusing on U.S. extreme cold weather during winter. We perform a control simulation and a water-hosing experiment to obtain two climate states with and without a vigorous Atlantic meridional overturning circulation. We find that in the control simulation with an overturning circulation, the U.S. east of the Rockies is a region characterized by intense north-south heat exchange in the atmosphere during winter. Without the northward heat transport by the overturning circulation in the hosing experiment, this channel of atmospheric heat exchange becomes even more active through the Bjerknes compensation mechanism. Over the U.S., extreme cold weather intensifies disproportionately compared with the mean climate response after the shutdown of the overturning circulation. Our results suggest that an active overturning circulation in the present-day climate likely makes the U.S. winter less harsh and extreme. Introduction The important role of the Atlantic Meridional Overturning Circulation (AMOC) in the climate system has been extensively studied 1 , 2 , 3 , 4 , 5 , 6 , 7 , 8 , 9 , 10 , 11 , 12 . Without an AMOC and its associated northward heat transport, northern and western Europe could be much colder 1 , 2 , 5 , 6 , 9 , the Arctic sea ice could expand 1 , the Inter-Tropical Convergence Zone (ITCZ) could shift southward 3 , 5 , 9 , and sea level along the East Coast of North America could be higher 12 . Compared with these changes in the mean climate, the impact of AMOC on extreme weather has not yet been investigated systematically and sufficiently. One reason is that previous generations of global climate models were designed primarily for studies of large-scale, long-term climate rather than of daily weather at the local scale, which requires high resolution, frequent data output, and a regional focus. Nonetheless, several recent studies have shown that a slowdown of AMOC could contribute to summer heatwaves over Europe 13 , 14 , flooding and droughts 15 , and stronger and more active Atlantic hurricanes 16 , 17 and extratropical storms 18 . During the past decade, the Geophysical Fluid Dynamics Laboratory (GFDL) of NOAA has been working towards a unified and seamless modeling system suitable for studying both weather and climate, as well as their complex interactions, under the same umbrella. The recent progress in model development and the rapid growth of supercomputer power have provided better tools to tackle important weather-climate issues. Here, we use the high resolution version (C192) of the global coupled modeling system, GFDL CM4 19 , 20 , 21 , 22 , 23 (see the “Methods” section), to investigate the influence of AMOC on U.S. extreme cold weather during winter. As low-frequency, high-impact events, extreme cold snaps can be disastrous, particularly for the U.S. southern states, where winter temperatures are typically mild 24 , 25 . Results Control simulation and water-hosing experiment with GFDL CM4C192 Under the 1950 radiative forcing, a long, centennial-timescale control simulation has been carried out with CM4C192 as part of GFDL's participation in the High Resolution Model Intercomparison Project 26 . 
Due to the refined resolution for both the atmosphere (0.5°) and ocean (0.25°), synoptic-scale phenomena are better simulated by CM4C192, including hurricanes and severe winter storms, atmospheric rivers and blocking, ocean eddies and jets, and storm surge and coastal flooding 12 , 19 , 20 , 21 , 23 . In addition, the simulated AMOC has a mean strength of about 18 Sv (1 Sv = 10^6 m^3 s^−1) at 26°N, which compares well with observations 19 , 23 (Supplementary Fig. 1a ). To investigate the impact of AMOC on mid-latitude weather, we consider an idealized case by obtaining a climate state without an active AMOC while keeping everything else the same. To do so, we perform a typical water-hosing experiment by imposing a 0.6 Sv freshwater addition over the northern North Atlantic 1 , 3 (see the “Methods” section for more details). This experimental design should lead to strong and rapid signals with a clear and definite attribution to AMOC, thereby avoiding complications from other factors. In addition, the high resolution coupled model is computationally expensive, which currently prevents long, transient, and ensemble simulations. In response to the freshwater perturbation, the AMOC almost shuts down in about 20 years (Supplementary Fig. 1b , c ). The atmosphere in the Northern Hemisphere approaches a new quasi-equilibrium state after year 20. In the following analysis, we compare years 21–100 of the hosing experiment with the 100-year control run to identify the response characteristics of daily weather to the AMOC shutdown. Energy transport across 40°N and Bjerknes compensation between the ocean and atmosphere In the control run of CM4C192, the atmosphere and ocean work together to transport up to 5.7 petawatts (PW, or 10^15 W) of heat poleward annually to compensate for the differential solar heating between the low and high latitudes 27 , 28 , 29 (Fig. 1a, b and Supplementary Fig. 2 ). In the Northern Hemisphere, the maximum total transport occurs at about 40°N. At mid-latitudes, the atmosphere is highly efficient at mixing different temperatures and transporting heat poleward through fast-moving turbulent weather systems, especially during winter. For the annual mean, the oceanic transport of about 0.8 PW at 40°N, largely due to AMOC 16 , 30 , 31 , is far smaller than its atmospheric counterpart of 4.8 PW, but nonetheless represents an enormous amount of heat in the global energy balance (Fig. 1 ). It should be noted that CM4C192 likely underestimates the northward heat transport in the Atlantic: the simulated maximum transport of about 1 PW at 26°N is lower than the recent observational estimate of about 1.3 PW 16 , 31 (Fig. 1c ). We consider the atmosphere north of 40°N as a whole (“northern atmosphere”) and perform a detailed heat budget analysis for December, January, and February (DJF). During boreal winter in the control, the northern atmosphere loses 13.3 PW of heat at the top of the atmosphere (TOA) but gains 6.1 PW from the surface (Fig. 1a ). The heat deficit of 7.2 PW is compensated by the atmospheric heat transport across 40°N, mainly associated with mid-latitude weather processes, especially baroclinic transient eddies. Without an AMOC and its northward heat transport in the hosing experiment (Fig. 1c ), the TOA and surface heat fluxes are reduced by 0.6 PW and 1.1 PW, respectively (Fig. 1a ). To compensate for the increased heat deficit due to these changes, the atmosphere must transport about 0.5 PW more heat northward across 40°N. 
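The compensation figure of about 0.5 PW follows directly from the budget numbers just quoted. A minimal arithmetic sketch (the sign convention and variable names are our own bookkeeping, not the authors' code):

```python
# Winter (DJF) heat budget of the atmosphere north of 40N, in petawatts (PW).
# Control run: TOA loss and surface gain quoted in the text.
toa_loss = -13.3      # net heat flux at the top of the atmosphere (a loss)
sfc_gain = +6.1       # net heat flux from the surface into the atmosphere
deficit = -(toa_loss + sfc_gain)   # must be supplied by transport across 40N
print(f"required atmospheric transport across 40N: {deficit:.1f} PW")  # 7.2 PW

# Hosing experiment: both boundary fluxes weaken.
d_toa = +0.6          # TOA heat loss is reduced by 0.6 PW (a gain for the budget)
d_sfc = -1.1          # surface heat gain is reduced by 1.1 PW
extra_transport = -(d_toa + d_sfc)  # extra northward transport needed
print(f"extra transport after AMOC shutdown: {extra_transport:.1f} PW")  # 0.5 PW
```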
This “Bjerknes compensation” mechanism 32 , 33 , 34 , 35 , 36 works to stabilize the mean temperature and maintain the energy balance of the northern atmosphere in a climate without AMOC. Fig. 1: Energy balance of the northern atmosphere in the climates with and without an active AMOC. a Schematic shows the energy balance for the entire atmosphere north of 40°N. The left half with black numbers (annual/DJF) shows heat fluxes (PW) at the top, bottom and southern boundaries in the long-term control run of CM4C192. The right half with red numbers shows the heat flux anomalies during years 21–100 of the hosing experiment relative to the control. The positive and negative values indicate enhanced and reduced heat fluxes, respectively. Only the annual mean value is shown for the oceanic transport. The blue and yellow shadings denote the atmosphere and AMOC, respectively. b Annual northward heat transport by the global atmosphere and global ocean as a function of latitude in the control run. c Annual northward heat transport of the global ocean and the Atlantic in the control and during years 21–100 of the hosing experiment. The green vertical dashed line marks 40°N. Full size image The enhanced atmospheric heat transport during winter is achieved through more active weather processes at mid-latitudes 33 . In the control, intense north–south atmospheric heat exchanges occur over a broad region at 40°N. At 850 hPa, large atmospheric eddy temperature fluxes 27 (v′T′; see “Methods” section) are found over eastern North America and the western North Atlantic, East Asia, and the North Pacific, as well as over Europe and the Middle East (Fig. 2a ). These regions coincide with the mid-latitude storm track where extratropical cyclones and anti-cyclones continuously develop and propagate, thereby efficiently mixing warm and cold air masses. In particular, the U.S. east of the Rocky Mountains 37 sees some of the highest values of v′T′ (Fig. 2a ). Fig. 2: Enhanced atmospheric heat transport by transient eddies in response to the shutdown of AMOC. a Atmospheric eddy temperature flux (v′T′) (°C m s^−1) at 850 hPa in the long-term control. v′T′ is band-passed using a Lanczos filter to identify synoptic variations on 3–15 days. Positive and negative values indicate northward and southward transport of sensible heat, respectively. The green asterisks mark Chicago, Houston, and New York. The thin grey lines are surface topography with 1000 m intervals. b Anomalies of the atmospheric eddy temperature flux (°C m s^−1) during years 21–100 of the hosing experiment relative to the control. c Anomalies of the surface heat flux (W m^−2) during years 21–100 of the hosing experiment relative to the control. Negative values indicate a reduction of the upward heat flux. The freshwater perturbation is input into the ocean region of the green box. All values in a , b and c are for DJF. See Supplementary Fig. 2 for the TOA and surface heat fluxes in the control run. Full size image The atmospheric eddy temperature flux is sensitive to the change in heat transport by AMOC and the surface heat flux anomalies in the northern Atlantic and Arctic (Fig. 2c ). After the AMOC shutdown in the hosing experiment of CM4C192, v′T′ shows large increases at the northern latitudes (Fig. 2b ). North of 40°N, the increase in the eddy sensible heat flux concentrates over the northern North Atlantic, where the mean cooling is largest and amplified due to the sea ice feedback (Supplementary Fig. 3b ). 
South of 40°N, higher v′T′ values are pronounced over the eastern U.S. and the North Pacific (Fig. 2b ). Note that the southward intrusion of frigid Arctic air masses is equivalent to a large northward temperature flux because both v′ and T′ are negative and have large absolute values. In addition, the atmospheric eddy latent heat flux (v′q′) shows a consistent increase in the 20°–40°N latitudinal band (Supplementary Fig. 4 ). Response of the U.S. extreme cold weather to the AMOC shutdown During years 21–100 of the hosing experiment, the global annual mean surface air temperature cools by about 1 °C relative to the control (Supplementary Fig. 5a ). This global cooling, centered on the northern North Atlantic, is a result of the cloud, water vapor, and sea ice feedbacks associated with the reduced northward heat transport in the ocean 38 , 39 . Other changes of the large-scale mean climate in the hosing experiment are generally similar to previous results 1 . Next we focus on the U.S. daily surface air temperature (T_s) in DJF. Compared with the reanalysis data of ERA5 40 during 1979–2021 (see “Methods” section), CM4C192 simulates the mean and daily variations of T_s in DJF well in the control run (Supplementary Fig. 6 ). As for extremely cold temperatures, we evaluate the model performance at Chicago, Houston, and New York, three large cities representing the Midwest, South, and Northeast of the U.S., respectively. At Chicago, the daily temperature anomaly relative to the daily climatology (ΔT_s; see “Methods” section) reached its lowest point of −23.5 °C on January 31, 2019 in the detrended and deseasonalized ERA5 data (Fig. 3a, b ). The extremeness of the recent Texas cold snap of February 2021 is even more striking: ΔT_s at Houston plummeted to −23.4 °C on February 16, 2021, far colder than previous extreme events (Fig. 3c, d ). At New York, the coldest ΔT_s occurred on January 18, 1982 and on February 20 and 24, 2015, with a magnitude of about −16.3 °C (Fig. 3e, f). Fig. 3: Data-model comparison of DJF daily temperature anomalies (ΔT_s) at three cities of the U.S. a , b Chicago; c , d Houston; e , f New York. a , c , e The time series for 1979–2021 of ERA5 and the 50-year control simulation of CM4C192. Both curves are detrended and deseasonalized so that the mean is zero. The coldest value of ΔT_s at each city in ERA5 is marked with its occurrence date. b , d , f The histograms of the 42-year ERA5 data and the 100-year control simulation of CM4C192. Note that the x -axis uses a logarithmic scale and denotes probability (c_i/N; c_i, bin count; N, total count). The solid horizontal lines show the mean. The dashed horizontal lines denote the return levels for the 1-in-10-year and 1-in-100-year cold events. Their values, along with the mean and three moments of the time series, are listed at the upper left corner. From left to right: mean, standard deviation, skewness, kurtosis, \(\widehat{\Delta T}_{\mathrm{s}}^{10}\) , and \(\widehat{\Delta T}_{\mathrm{s}}^{100}\) . Full size image At the three cities, CM4C192 simulates the general statistics of ΔT_s well in the control run, including its standard deviation, skewness, and kurtosis (Fig. 3 ). However, the model underestimates extreme cold events, as evidenced by the higher 10-year and 100-year return levels ( \(\widehat{\Delta T}_{\mathrm{s}}^{10}\) and \(\widehat{\Delta T}_{\mathrm{s}}^{100}\) ; see “Methods” section for the return level calculation), especially at Houston (Fig. 3 ). 
Different resolutions and external forcings, as well as existing model biases, are among the possible reasons for the differences between the ERA5 data and the CM4C192 simulations. After the shutdown of AMOC in the hosing experiment, the intensity and frequency of extremely cold daily temperatures over the U.S. increase disproportionately compared with the mean temperature response (Figs. 4 and 5 and Supplementary Fig. 7 ). At Chicago, the 10-year and 100-year return levels \(\widehat{\Delta T}_{\mathrm{s}}^{10}\) and \(\widehat{\Delta T}_{\mathrm{s}}^{100}\) further drop by 3.4 °C and 3.6 °C, respectively, in the hosing experiment, compared with a mean cooling of 1.6 °C relative to the control (Fig. 4a, b ). \(\widehat{\Delta T}_{\mathrm{s}}^{100}\) (−20.9 °C) in the control is almost identical to \(\widehat{\Delta T}_{\mathrm{s}}^{10}\) (−20.8 °C) in the hosing run, suggesting that the 100-year extreme cold event could occur every 10 years at Chicago after the AMOC shutdown. At Houston, \(\widehat{\Delta T}_{\mathrm{s}}^{100}\) drops even more, by 4.6 °C, from −14.8 °C in the control to −19.4 °C in the hosing run (Fig. 4c, d ). This represents a change more than five times larger than the mean cooling of 0.9 °C (Fig. 5f ). Interestingly, this drop brings \(\widehat{\Delta T}_{\mathrm{s}}^{100}\) in CM4C192 closer to that of ERA5 (Fig. 3c, d ). At New York, \(\widehat{\Delta T}_{\mathrm{s}}^{10}\) and \(\widehat{\Delta T}_{\mathrm{s}}^{100}\) further drop by 5.6 °C and 5.4 °C, respectively, compared with a mean cooling of 2 °C (Fig. 4e, f ). Extremely cold temperatures reaching or exceeding the control's \(\widehat{\Delta T}_{\mathrm{s}}^{100}\) = −15.7 °C occur much more frequently, on about 60 days, in the hosing experiment. Fig. 4: Response of DJF daily temperature anomalies (ΔT_s) at three cities of the U.S. in the hosing experiment. a , b Chicago; c , d Houston; e , f New York. a , c , e Time series for 100 years or 9000 DJF days. In both curves, the daily climatology from the control has been removed and the mean cooling remains in the curve of the hosing run. b , d , f The histograms. The y -axis and x -axis are the temperature anomaly and the number of days, respectively. Note the x -axis uses a logarithmic scale. The solid horizontal lines show the long-term mean. The dashed horizontal lines denote the return levels for the 1-in-10-year and 1-in-100-year cold events. The statistics of the time series are listed at the upper left corner (from left to right: long-term mean, standard deviation, skewness, kurtosis, \(\widehat{\Delta T}_{\mathrm{s}}^{10}\) and \(\widehat{\Delta T}_{\mathrm{s}}^{100}\) ). These statistics are calculated based on years 1–100 of the control run and years 21–100 of the hosing run. The 90% confidence bounds of \(\widehat{\Delta T}_{\mathrm{s}}^{10}\) and \(\widehat{\Delta T}_{\mathrm{s}}^{100}\) quantified by bootstrapping can be found in Supplementary Fig. 8 . Full size image Fig. 5: Changes in statistics of DJF daily temperature anomalies (ΔT_s) over mid-latitude land areas in the hosing experiment. a Long-term mean (°C), b Standard deviation (°C), c Skewness, d Kurtosis, e 100-year return level ( \(\widehat{\Delta T}_{\mathrm{s}}^{100}\) ; °C); f Ratio of the extreme ( e ) and mean ( a ) responses. The values show the changes in statistics during years 21–100 of the hosing experiment relative to the long-term control. 
f Large positive values over North America indicate amplified responses of extremely cold daily temperature relative to the mean cooling. Negative values indicate that the extreme and mean temperature responses have opposite signs. See Supplementary Fig. 7 for these statistics in the long-term control simulation. Full size image To assess the uncertainty associated with the extreme value analysis, we perform the Kolmogorov–Smirnov test on the annual coldest ΔT_s at Chicago, Houston, and New York between the control and hosing runs. The test rejects, at the 5% significance level, the null hypothesis that the control and hosing samples are drawn from the same distribution. In terms of the return level estimate, we apply the bootstrap method to quantify its 90% confidence bounds 41 (Supplementary Fig. 8 ). The results confirm that, compared with the control, the drops of \(\widehat{\Delta T}_{\mathrm{s}}^{10}\) and \(\widehat{\Delta T}_{\mathrm{s}}^{100}\) in the hosing experiment are statistically significant at the three cities. Impact factors for the change in return levels In the hosing experiment, the drops in the return level of extreme cold temperatures could be caused by multiple factors 42 (Fig. 5 ): the mean cooling, increased overall variance, reduced skewness, changes in the seasonal cycle (Supplementary Fig. 9 ), and individual extratropical cyclones/anti-cyclones that become stronger and propagate farther south. At New York, the mean cooling (−2.0 °C), the increased standard deviation (from 4.7 °C to 5.4 °C) and the reduced skewness (from 0.4 to 0), as well as more extreme individual weather events, all contribute to the drop of \(\widehat{\Delta T}_{\mathrm{s}}^{10}\) and \(\widehat{\Delta T}_{\mathrm{s}}^{100}\) in the hosing run (Fig. 4e ). Similarly, these factors are important in explaining the intensification of extreme cold weather over western Europe (Fig. 5 and Supplementary Fig. 7 ), along with the increase in snow cover (Supplementary Fig. 10 ). However, snow cover in the hosing experiment changes little over the U.S. due to minimal cooling there (Fig. 5a ). By comparison, the drop of \(\widehat{\Delta T}_{\mathrm{s}}^{10}\) and \(\widehat{\Delta T}_{\mathrm{s}}^{100}\) at Houston is mainly caused by individual extreme weather events rather than by the overall variability and skewness (Fig. 4c ). This is consistent with the increase in kurtosis, which measures the tailedness of the temperature distribution (i.e., outliers). In fact, the large drops of \(\widehat{\Delta T}_{\mathrm{s}}^{100}\) in the Great Plains just east of the Rocky Mountains are related to the increased kurtosis, which also dominates the ratio of the extreme and mean responses (Fig. 5d–f ). The shutdown of AMOC sharpens the meridional temperature gradient at the northern mid-latitudes and increases the baroclinicity of the atmosphere. These lead to stronger weather systems that propagate farther south. It should be noted that the analysis above is based on daily temperature anomalies (ΔT_s) relative to the daily climatology in the control ( \(\tilde{T}_{\mathrm{s}}\) ). Due to the relatively small curvature of the seasonal cycle in DJF (Supplementary Fig. 9 ), the largest negative anomalies also correspond to the locally coldest weather during winter. Among the three cities, Chicago is located in the land interior and is generally colder than coastal Houston and New York. 
The absolute daily temperature (T_s) at Chicago could drop to as low as −27.4 °C in the hosing run of CM4C192, compared with the coldest temperature of −14.3 °C at Houston and −21.6 °C at New York. Conclusions In this study, we use a state-of-the-art global weather/climate modeling system with high resolution to investigate the influence of AMOC on extreme winter weather. Because the U.S. lies upwind of the North Atlantic, its mean winter temperatures are thought to be less influenced by the AMOC than those on the downwind European side (Fig. 5a and Supplementary Fig. 3b ). From a concise energy balance point of view, without invoking much advanced atmospheric dynamics, we show here that AMOC can modulate daily temperature extremes more efficiently over the U.S. (Fig. 5e ). The AMOC shutdown and reduced northward heat transport in the Atlantic are capable of exciting more extremely cold weather over the U.S. during winter. This amplified response at the tail of the temperature distribution could be several times larger than that of the mean (Fig. 5f ). This sensitivity of extreme weather over the land interior to deep ocean circulation seems surprising but is nevertheless a robust response required by Bjerknes compensation. Due to the north–south orientation of the mountain ranges over North America (Fig. 2 ), Arctic outbreaks during winter can push frigid polar air masses from Canada all the way south to the Gulf of Mexico. We find that this channel of intense atmospheric heat exchange becomes even more active after the shutdown of AMOC, thereby intensifying extreme cold events over the U.S. In other words, an active AMOC in the present-day climate likely makes the U.S. winter less harsh and extreme. According to some recent observational studies, the AMOC has weakened during the past century 43 . In particular, the northward heat transport at 26°N in the North Atlantic decreased by 0.17 PW, from 1.32 PW during 2004–2008 down to 1.15 PW during 2009–2016, as a result of a recent AMOC slowdown event 31 . This reduction in ocean heat transport influenced the northern atmosphere through heat flux anomalies at the ocean surface. The magnitude of this reduction represents a sizeable fraction of that induced by the AMOC shutdown in the CM4C192 simulations (Fig. 1 ). In any case, the model simulations carried out here represent a sensitivity study. Given the highly idealized nature of the hosing experiment in this study, one should be cautious about its implications for extreme cold weather in future climates. This is evidenced by the opposite trends of the mean temperature and Arctic sea ice between the ERA5 data and the CM4C192 simulation (Supplementary Fig. 3 ). In addition, compared with the shutdown case, a slowdown of AMOC could cause a similar but more gradual response of the extreme weather. Despite these caveats, one thing is certain: Bjerknes compensation, which derives from the very basic law of energy conservation, should continue to work in the future climate. Anything that alters one pathway of the energy flow will trigger a response from the others. Methods The GFDL CM4C192 model CM4C192 is the high resolution version of the latest generation of the climate models developed and used at GFDL 19 . For various metrics, it performs among the best CMIP6 models 44 . The atmospheric model (AM4) 20 , 21 , 22 adopts a finite-volume cubed-sphere dynamical core with 192 grid boxes per cube face (~0.5° grid spacing). It has 33 vertical levels and the model top is located at 1 hPa. 
The model incorporates updated physics such as a double-plume scheme for shallow and deep convection and a new mountain gravity wave drag parameterization 21 . Due to improvements in model resolution, physics, and dynamics, CM4C192 simulates strong synoptic systems such as hurricanes 45 and atmospheric rivers 22 well. The oceanic model of CM4C192 is based on the Modular Ocean Model version 6 (MOM6) 23 . It uses the Arbitrary-Lagrangian-Eulerian algorithm in the vertical to allow for the combination of different vertical coordinates, including geopotential and isopycnal. The model adopts the C-grid stencil in the horizontal and is configured on a tripolar grid. It has a 0.25° eddy-permitting horizontal resolution and 75 hybrid vertical layers down to the 6500 m maximum bottom depth. The vertical grid spacing can be as fine as 2 m near the ocean surface. Daily or even hourly data of important atmospheric variables are saved to facilitate analyses of weather and extreme events. These variables include surface air temperature (T_s), precipitation, sea level pressure, atmospheric temperature (T) at 250 and 850 hPa, zonal and meridional winds (u, v) at 250 and 850 hPa, and specific humidity (q) at 850 hPa. The model uses a no-leap calendar that has 365 days in every year. Control run and water-hosing experiment with CM4C192 The initial condition is obtained from a long-term control simulation under the 1850 radiative forcing. During the 100-year control run under the 1950 radiative forcing, the global mean surface air temperature shows a slight increase (Supplementary Fig. 5a ). This drift is mainly caused by some high-latitude regions. At low and mid-latitudes, T_s is quite stable in the control run without any clear trend (Supplementary Fig. 5b – d ). In the water-hosing experiment, a 0.6 Sv freshwater addition is input uniformly into the northern North Atlantic, over the ocean region from 65°W–5°E and 50°N–75°N (see the green box in Fig. 2c ), for 100 years. This freshwater addition is not compensated elsewhere, so it leads to about 5 m of global sea level rise over the 100-year period. The perturbation freshwater is input at the same temperature as the local sea surface temperature. Thus, while it is a mass source and reduces regional and global ocean salinity, it is not a specific heat source or sink and therefore does not influence the heat budget analysis here. Atmospheric and oceanic heat transport In this study, we use both the direct and indirect methods to calculate the heat transport by the atmosphere and ocean. In the long-term control run, the total northward heat transport by the global atmosphere and global ocean at a latitude ϕ can be estimated by integrating the net radiative flux at TOA from the South (or North) Pole to latitude ϕ: $$Q_{\mathrm{t}}(\phi )=\int_{-\pi /2}^{\phi }\int_{0}^{2\pi }F_{\mathrm{TOA}}\,R^{2}\cos \phi'\,\mathrm{d}\lambda \,\mathrm{d}\phi'$$ (1) Here Q_t is the total northward heat transport, F_TOA the net radiative flux at TOA, and R the Earth's radius; λ and ϕ are longitude and latitude, respectively. Similarly, the atmospheric heat transport (Q_a) is estimated as $$Q_{\mathrm{a}}(\phi )=\int_{-\pi /2}^{\phi }\int_{0}^{2\pi }(F_{\mathrm{TOA}}-F_{\mathrm{sfc}})\,R^{2}\cos \phi'\,\mathrm{d}\lambda \,\mathrm{d}\phi',$$ (2) where F_sfc is the heat flux at the surface. 
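Equations (1)–(2) translate directly into a cumulative meridional integration on a latitude–longitude grid. The following is our own minimal numpy sketch, not the authors' code; f_toa and f_sfc are hypothetical (lat, lon) flux arrays in W m^−2, with latitudes sorted from south to north:

```python
import numpy as np

R = 6.371e6  # Earth's radius (m)

def northward_transport(flux, lat_deg, lon_deg):
    """Cumulative poleward integral of a flux field, as in Eqs. (1)-(2).
    flux: (nlat, nlon) in W m^-2; lat_deg ascending from the South Pole."""
    phi = np.deg2rad(lat_deg)
    dlam = np.deg2rad(np.gradient(lon_deg))   # longitude spacing (rad)
    dphi = np.gradient(phi)                   # latitude spacing (rad)
    zonal = (flux * dlam).sum(axis=1)         # integral over longitude
    band = zonal * R**2 * np.cos(phi) * dphi  # contribution per latitude band
    return np.cumsum(band)                    # watts; divide by 1e15 for PW

# Q_t uses the net TOA flux; Q_a uses the TOA-minus-surface flux:
# q_t_pw = northward_transport(f_toa, lat, lon) / 1e15
# q_a_pw = northward_transport(f_toa - f_sfc, lat, lon) / 1e15
```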
We adopt the direct method to calculate the heat transport in an ocean basin: the transport is integrated from the western to the eastern boundary and vertically, and then summed across the ocean basins: $$Q_{\mathrm{o}}(\phi )=\sum_{\mathrm{basin}}\int_{-H}^{\eta }\int_{\mathrm{w}}^{\mathrm{e}}\rho_{\mathrm{w}}c_{\mathrm{p}}\,T\,v\,R\cos \phi \,\mathrm{d}\lambda \,\mathrm{d}z$$ (3) Here Q_o is the global ocean heat transport, T the ocean potential temperature, v the ocean meridional velocity, ρ_w the seawater density, and c_p the seawater heat capacity; η and H denote the ocean surface and bottom, respectively. Sensible and latent heat fluxes from the atmospheric transient eddies To calculate the atmospheric eddy heat fluxes, we apply a Lanczos bandpass filter 46 to daily atmospheric temperature (T), specific humidity (q), and meridional wind (v) to identify their variations on the synoptic timescale of 3–15 days. We first remove the seasonal cycle before applying the filter to the time series. $$x'(t)=\sum_{k=-L}^{L}w(k)\,x(t-k)$$ (4) $$w(k)=\left(\frac{\sin 2\pi f_{2}k}{\pi k}-\frac{\sin 2\pi f_{1}k}{\pi k}\right)\frac{\sin (\pi k/L)}{\pi k/L},\quad k=-L,\ldots ,0,\ldots ,L$$ (5) x and x′ represent the original and filtered time series of T, q, or v, respectively. f_1 and f_2 are the cutoff frequencies of the bandpass filter, and w(k) represents the set of weights within the filter window (L = 25). Analysis of extreme daily surface air temperature The anomaly of the daily surface air temperature is its departure from the daily climatology: $$\Delta T_{\mathrm{s}}(x,y,t)=T_{\mathrm{s}}(x,y,t)-\tilde{T}_{\mathrm{s}}(x,y,t_{1}),\quad t_{1}=1,2,\ldots ,365$$ (6) T_s, \(\tilde{T}_{\mathrm{s}}\), and ΔT_s are the daily temperature, its climatology, and its anomaly, respectively. As the coldest three months in the mid-latitude Northern Hemisphere, DJF shows relatively small variation in \(\tilde{T}_{\mathrm{s}}\) compared with the annual cycle (Supplementary Fig. 9 ). Note that ΔT_s in the hosing experiment is calculated relative to \(\tilde{T}_{\mathrm{s}}\) in the control, so the change in the seasonal cycle (mean, amplitude, and timing) in the hosing run also contributes to ΔT_s (Supplementary Fig. 9 ). To calculate return levels of extremely cold daily temperatures, we use the block maxima approach of extreme value analysis 41 , 47 . We consider the time series of −ΔT_s and pick out the maximum daily values (i.e., the coldest daily temperatures) in DJF for each year. Then we fit the generalized extreme value (GEV) distribution to the annual maxima of −ΔT_s : $$G(x)=\exp \left\{-\left[1+k\left(\frac{x-\mu }{\sigma }\right)\right]^{-1/k}\right\},\quad 1+k\,\frac{x-\mu }{\sigma }>0$$ (7) k, σ, and μ are the shape, scale, and location parameters of the GEV, respectively. For k = 0, the GEV distribution reduces to the Gumbel distribution. For k > 0 and k < 0, the GEV distribution becomes the Fréchet and Weibull distribution, respectively. After the three parameters are determined, the return levels ( \(\widehat{\Delta T}_{\mathrm{s}}^{10}\) and \(\widehat{\Delta T}_{\mathrm{s}}^{100}\) ) can be estimated with the inverse cumulative distribution function of the GEV distribution. 
For example, $$-\widehat{\Delta T}_{\mathrm{s}}^{100}=\mu -\frac{\sigma }{k}\left\{1-\left[-\ln \left(1-\frac{1}{100}\right)\right]^{-k}\right\}$$ (8) To assess the uncertainty associated with the return level estimates and determine whether the changes in return level in the hosing experiment are statistically significant, we use the bootstrap method 41 , 48 to generate 10,000 samples of the annual maximum values of −ΔT_s and quantify the 90% confidence bounds. ERA5 reanalysis ERA5 combines large amounts of historical observations and uses advanced modeling and data assimilation to obtain global estimates of the atmosphere 40 . For the data-model comparison in this study, we use the 3-h global surface air temperature data from January 1, 1979 to February 28, 2021. The data, with a 0.25° horizontal resolution, are downloaded from the Copernicus Climate Change Service. February 29 in leap years is removed before the data-model comparison. Data availability The control simulation of GFDL CM4C192 can be found in the CMIP6 archive. ERA5 reanalysis data are available from the Copernicus Climate Change Service. Supplementary Data 1 – 3 contain data that were used to generate Figs. 1 , 3 , and 4 . Code availability The model codes are publicly available. All other codes used in the analysis of this study are available from the corresponding author upon request.
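As a worked illustration of the block-maxima workflow in Eqs. (6)–(8), here is a minimal sketch using scipy; the array names and placeholder data are hypothetical, and this is our illustration rather than the authors' code. Note that scipy parameterizes the GEV shape as c = −k relative to Eq. (7):

```python
import numpy as np
from scipy.stats import genextreme

# annual_minima: hypothetical coldest DJF daily anomaly (deg C) per year;
# the GEV is fitted to the block maxima of -dT_s, as in the Methods.
rng = np.random.default_rng(0)
annual_minima = -18 + 3 * rng.standard_normal(100)  # placeholder data
block_maxima = -annual_minima                       # maxima of -dT_s

# Fit GEV; scipy's shape c corresponds to -k in the paper's Eq. (7).
c, loc, scale = genextreme.fit(block_maxima)

# 10- and 100-year return levels: values exceeded with probability 1/T per year.
rl10 = -genextreme.isf(1 / 10, c, loc=loc, scale=scale)
rl100 = -genextreme.isf(1 / 100, c, loc=loc, scale=scale)
print(f"10-yr return level: {rl10:.1f} C, 100-yr: {rl100:.1f} C")

# Bootstrap 90% confidence bounds on the 100-yr return level.
boot = []
for _ in range(1000):  # the paper uses 10,000 resamples
    sample = rng.choice(block_maxima, size=block_maxima.size, replace=True)
    cb, lb, sb = genextreme.fit(sample)
    boot.append(-genextreme.isf(0.01, cb, loc=lb, scale=sb))
lo, hi = np.percentile(boot, [5, 95])
print(f"90% bounds on 100-yr return level: [{lo:.1f}, {hi:.1f}] C")
```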
Throughout Earth's oceans runs a conveyor belt of water. Its churning is powered by differences in the water's temperature and saltiness, and weather patterns around the world are regulated by its activity. A pair of researchers studied the Atlantic portion of this worldwide conveyor belt, called the Atlantic Meridional Overturning Circulation, or AMOC, and found that winter weather in the United States critically depends on this conveyor belt-like system. As the AMOC slows because of climate change, the U.S. will experience more extreme cold winter weather. The study, published in the journal Communications Earth & Environment, was led by Jianjun Yin, an associate professor in the University of Arizona Department of Geosciences, and co-authored by Ming Zhao, a physical scientist at the National Oceanic and Atmospheric Administration's Geophysical Fluid Dynamics Laboratory. AMOC works like this: Warm water travels north in the upper Atlantic Ocean and releases heat into the atmosphere at high latitudes. As the water cools, it becomes denser, which causes it to sink into the deep ocean, where it flows back south. "This circulation transports an enormous amount of heat northward in the ocean," Yin said. "The magnitude is on the order of 1 petawatt, or 10 to the 15th power watts. Right now, the energy consumption by the entire world is about 20 terawatts, or 10 to the 12th power watts. So, 1 petawatt is enough to run about 50 civilizations." But as the climate warms, so does the ocean surface. At the same time, the Greenland ice sheet experiences melting, which dumps more freshwater into the ocean. Both warming and freshening of the water can reduce surface water density and inhibit the sinking of the water, slowing the AMOC. If the AMOC slows, so does the northward heat transport. This is important because the equator receives more energy from the sun than the poles. Both the atmosphere and ocean work to transport energy from low latitudes to high latitudes. If the ocean can't transport as much heat northward, then the atmosphere must instead transport more heat through more extreme weather processes at mid-latitudes. When the atmosphere moves heat northward, cold air is displaced from the poles and pushed to lower latitudes, reaching places as far south as the U.S. southern border. "Think of it as two highways connecting two big cities," Yin said. "If one is shut down, the other one gets more traffic. In the atmosphere, the traffic is the daily weather. So, if the ocean heat transport slows or shuts down, the weather becomes more extreme." Yin said the study was motivated by the extreme cold weather Texas experienced in February. "In Houston, the daily temperature dropped to 40 degrees Fahrenheit below normal," Yin said. "That's the typical range of a summer/winter temperature difference. It made Texas feel like the Arctic. This kind of extreme winter weather happened several times in the U.S. during recent years, so the scientific community has been working to understand the mechanism behind these extreme events." The crisis in Texas caused widespread and catastrophic power outages, and the National Oceanic and Atmospheric Administration estimated that socioeconomic damages totaled $20 billion. Yin was curious about the role the ocean played in the extreme weather event. Yin and Zhao used a state-of-the-art, high-resolution global climate model to measure the influence of the AMOC on U.S. extreme cold weather. 
They ran the model twice, first looking at today's climate with a functioning AMOC. They then adjusted the model by inputting enough freshwater into the high-latitude North Atlantic to shut down the AMOC. The difference revealed the role of the AMOC in extreme cold weather. They found that without the AMOC and its northward heat transport, extremely cold winter weather intensifies in the U.S. According to recent observational studies, the AMOC has weakened in past decades. Climate models project it will get even weaker in response to increased greenhouse gases in the atmosphere. "But there is uncertainty about the magnitude of the weakening, because at this point, we don't know exactly how much the Greenland ice sheet will melt," Yin said. "How much it melts depends on the greenhouse gas emissions." The researchers also didn't account in their model for the effects of human-caused global warming, but that's an area of interest for the future, Yin said. "We basically just turn off the AMOC (in the model) to look at the response by extreme weather," he said. "Next, we want to factor in the greenhouse gases and look at the combined effects of the AMOC slowdown and global warming on extreme cold weather."
10.1038/s43247-021-00290-9
Physics
Brief reflections from a plasma mirror
Dmitrii Kormin et al. Spectral interferometry with waveform-dependent relativistic high-order harmonics from plasma surfaces, Nature Communications (2018). DOI: 10.1038/s41467-018-07421-5 Journal information: Nature Communications
http://dx.doi.org/10.1038/s41467-018-07421-5
https://phys.org/news/2018-12-plasma-mirror.html
Abstract The interaction of ultra-intense laser pulses with matter has opened the way to generating the shortest light pulses available today, in the attosecond regime. Ionized solid surfaces, also called plasma mirrors, are promising tools to enhance the potential of attosecond sources in terms of photon energy, photon number, and duration, especially at relativistic laser intensities. Although the production of isolated attosecond pulses and an understanding of the underlying interactions represent a fundamental step towards the realization of such sources, both are challenging and have not yet been demonstrated. Here, we present laser-waveform-dependent high-order harmonic radiation in the extreme ultraviolet spectral range supporting well-isolated attosecond pulses, and utilize spectral interferometry to understand its relativistic generation mechanism. This interpretation of the measured spectra provides access to previously unrevealed temporal and spatial properties, such as the spectral phase difference between attosecond pulses and the field-driven plasma surface motion during the process. Introduction The investigation and control of ultrafast physical processes in nature call for ever shorter flashes of light. The present state of the art is high-order harmonic generation (HHG) from a gas medium 1 , providing extreme ultraviolet (XUV) and X-ray pulses with durations in the attosecond regime, the temporal scale of electron motion in atoms and molecules. An alternative is represented by HHG from plasma mirrors (PMs) 2 , 3 . Its main advantage over gas HHG is the potential to utilize lasers with ultrahigh peak power and thus produce bright attosecond light pulses with orders of magnitude higher energy and shorter wavelength. Such high-energy attosecond light sources will satisfy the challenging needs of attosecond XUV nonlinear optics for XUV-pump–XUV-probe experiments 4 . There have been many advances in the past two decades in understanding and developing attosecond light sources using PMs. So far, three distinct mechanisms of HHG from PMs have been identified. Coherent wake emission (CWE) 5 , which occurs below relativistic laser intensities, and the relativistically oscillating mirror (ROM) 6 , 7 , 8 , 9 have been compared in experiments and clearly differentiated from each other 10 , 11 . The ROM model predicts a power-law decay of the harmonic spectrum with the harmonic order, with an exponent of −8/3. However, there exist alternative models describing this regime that provide different exponents 12 , 13 , 14 , 15 . Recently, a third mechanism, termed coherent synchrotron emission 16 or the relativistic electronic spring model 17 , 18 , 19 , was proposed and experimentally identified through its characteristic spectral signatures 20 . CWE provides temporally coherent and synchronized XUV harmonics resulting in attosecond temporal bunching 21 , 22 , but with a chirp related to sub-laser-cycle dynamics of plasma electrons 23 , which makes it less attractive for applications. The good spatial coherence of the laser is preserved during the CWE process 24 . To date, ROM harmonics have been observed up to multi-keV photon energies 25 , 26 , in good agreement with theoretical predictions 9 . ROM has been shown to provide well-beamed radiation 27 , 28 and diffraction-limited focusing 29 , 30 . The motion of the plasma surface during the interaction (denting) influences the harmonic beam divergence 31 and reduces the coherence length of ROM harmonics 32 . 
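Since the ROM prediction (exponent −8/3) and the competing models differ mainly in the predicted spectral decay exponent, a simple way to discriminate between them is to fit a power law I(ω) ∝ ω^p to a measured harmonic spectrum on log–log axes. Below is a minimal sketch of such a fit with synthetic placeholder data, our own illustration of the kind of fit reported later in this paper:

```python
import numpy as np

# Synthetic stand-in for a measured XUV harmonic spectrum: I(w) ~ w^p + noise.
rng = np.random.default_rng(1)
omega = np.linspace(10, 50, 200)  # harmonic frequency, arbitrary units
intensity = omega**(-8 / 3) * np.exp(0.1 * rng.standard_normal(omega.size))

# Fit log I = p*log(w) + const by least squares; p estimates the decay exponent.
p, const = np.polyfit(np.log(omega), np.log(intensity), 1)
print(f"fitted exponent: {p:.2f} (ROM model predicts -8/3 = {-8/3:.2f})")
```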
The generation condition is controlled via the plasma scale length to optimize the harmonic yield 33 , 34 , 35 , 36 and to change the spacing between pulses in an attosecond pulse train (attotrain) 37 . Furthermore, utilizing similar control, harmonics provide spatial information about structured plasma surfaces via ptychographic imaging 38 . Engineering the instantaneous waveform through two-color driving field control has also been proposed and used to enhance the high-harmonic yield 39 , 40 . For future applications of these light sources, isolated attosecond pulses (APs) are superior to an attotrain. For this reason, different polarization gating techniques, which suffer from large intensity losses, were proposed for many-cycle laser pulses 41 , 42 , 43 . Owing to the advent of intense few-cycle laser drivers with high contrast 44 , 45 , 46 , 47 , however, direct isolation is within reach through intensity gating 48 , 49 , 50 . In these cases 49 , an attotrain is produced containing a well-isolated AP with a very high isolation degree (defined as the main-to-side-pulse temporal intensity ratio), whose corresponding spectrum nevertheless still shows apparent modulation. A natural property determining the waveform of these few-cycle laser pulses is the carrier-envelope phase (CEP) ϕ CEP (see Methods). CEP dependence and control of CWE harmonics were already demonstrated 51 , and isolation was achieved by tilting the pulse front of the intense few-cycle driver laser, thereby producing a manifold of isolated APs propagating in different directions 52 . Although high harmonics from few-cycle drivers are expected to be waveform dependent 49 , neither this CEP dependence nor isolated APs had previously been achieved in the relativistic generation regime. In this article, we present laser-field-dependent emission of high-order harmonics from a ROM driven by a two-cycle laser. For certain CEP values, this harmonic radiation supports a strongly isolated AP. An analysis based on spectral interferometry (SI) reveals the spectral phase difference between the pulses and the spectrum of the individual pulses in the attotrain. This information allows the temporal spacing and relative contrast between individual APs in a few-pulse attotrain to be determined, as well as the denting of the reflecting solid-density PM 31 . The simultaneous measurement of the laser waveform provides a clear demonstration of the CEP dependence of the underlying relativistic laser–plasma interaction. Results Experiment description The experimental set-up is shown in Fig. 1 . The multi-terawatt (multi-TW) peak power Light Wave Synthesizer 20 (LWS-20) 44 was used as the laser source, with a low-intensity PM improving the contrast 45 by a factor of 300. It provided a pulse energy of 40 mJ on-target after losses, which was focused with an off-axis parabolic (OAP) mirror to d FWHM = 1.3 μm, at an angle of incidence α inc = 55° and with p-polarization. Focus size and quality were controlled using a microscope objective, which can replace the target and image the attenuated beam focus onto a charge-coupled device (MO+CCD). A chirp-scan technique with a home-made dazscope 53 was used to measure the on-target compression and optimize it with the help of the acousto-optic shaper (Dazzler) in the laser system.
In combination with an almost octave-spanning spectrum, which is slightly red-shifted by the PM optics, this led to two-cycle laser pulses with slightly modified parameters (central laser wavelength λ L = 765 nm, intensity full-width-at-half-maximum (FWHM) duration τ FWHM = 5.1 fs) and an ultra-relativistic peak intensity of I L = 1.3 × 10 20 W cm −2 in focus, corresponding to a normalized vector potential of a L = 7, defined as \(a_{\mathrm{L}} = \sqrt {I_{\mathrm{L}}({\mathrm{W cm}}^{ - 2})\lambda _{\mathrm{L}}^2({\mathrm{\mu m}})/1.38 \times 10^{18}}\) . Detailed information about the laser and the pulse characterization can be found in Methods and in ref. 44 . Fig. 1 Experimental set-up. LWS-20 delivers relativistic two-cycle laser pulses. A small portion of the beam is used in a stereo-ATI phasemeter to measure the CEP for every shot. The contrast of the main pulse is improved by 2.5 orders of magnitude by a plasma mirror (PM) and characterized by a third-order autocorrelator (THG-AC). A controllable prepulse is produced in the prepulse unit. Afterwards, the on-target pulse duration is measured and optimized with a home-made dazscope (DS) in combination with a Dazzler in the laser system. An off-axis parabolic mirror (OAP) tightly focuses the beam on the fused silica target, which is translated after every shot. The generated XUV emission is reflected to a flat-field spectrometer, where it is spatially separated from the fundamental beam. Focus quality is checked and optimized with the attenuated beam by replacing the target with a microscope objective (MO) and imaging the focus onto a CCD. The laser was focused onto a fused silica target, and the created relativistic PM reflected the driver pulse together with the generated XUV harmonic emission toward a flat-field imaging spectrometer equipped with a micro-channel plate and a CCD (XUV SPEC). Each recorded XUV spectrum was tagged with the relative CEP of the corresponding laser pulse, measured with a single-shot stereo above-threshold-ionization phasemeter 54 (CEP-meter). The XUV generation process depends strongly on the preplasma extension at the arrival of the main pulse 33 , 34 , 35 , that is, on the plasma scale length L (see Methods). It therefore significantly increases the requirements on the temporal laser contrast, which was characterized by a home-made third-order autocorrelator 55 (THG-AC) with the PM implemented. The contrast was estimated to be 10 −19 beyond 30 ps and 3 × 10 −8 at 1.5 ps before the main pulse 44 . A tailored prepulse with adjustable delay was introduced (see Methods), which together with the PM provided complete control over the plasma scale length for optimizing the XUV generation efficiency and for further investigation of the light–plasma interaction. The scale length for the different prepulses was estimated with the hydrodynamic code MEDUSA 56 (see Methods). Waveform-dependent XUV spectra High harmonics were generated up to 80 eV photon energy on plasma surfaces with optimal scale length. The harmonic spectra were fitted for many shots and found to follow a power law in harmonic frequency, I ω ~ ω −2.55±0.21 , for L / λ L ≈ 0.13, close to the theoretically expected ROM scaling 9 (the dashed line in Fig. 2a shows a typical fit). For more details on the fit results, see Supplementary Table 1 . We thus conclude that, in the observed spectral range of 16–100 eV, the XUV emission was dominantly generated via the ROM mechanism.
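Two of the quoted numbers can be verified with a few lines of code. The sketch below is illustrative only (the noisy spectrum is synthetic, not the measured data): it evaluates the intensity-to-a L conversion defined above and shows the kind of log-log straight-line fit used to extract a spectral decay exponent.

```python
import math
import numpy as np

# a_L = sqrt(I[W/cm^2] * lambda_L^2[um^2] / 1.38e18), as defined in the text.
a_L = math.sqrt(1.3e20 * 0.765 ** 2 / 1.38e18)
print(f"a_L = {a_L:.1f}")                # ~7.4, consistent with the quoted a_L = 7

# Power-law exponent of a synthetic harmonic spectrum I(w) ~ w^p, recovered
# by a straight-line fit in log-log space (the approach used for the shots).
w = np.linspace(16, 100, 300)            # photon-energy stand-in for frequency
I = w ** (-8.0 / 3.0) * np.exp(0.05 * np.random.randn(w.size))  # ROM-like decay + noise
p, _ = np.polyfit(np.log(w), np.log(I), 1)
print(f"fitted exponent: {p:.2f}")       # close to -8/3 = -2.67
```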
Measured spectra for two CEPs differing by π (Fig. 2a) had harmonics with photon energies that did not correspond to integer multiples of the laser central frequency ( ℏ ω 0 = 1.62 eV) and were shifted relative to each other by ℏ ω 0 /2. In general, the photon energy of the harmonics varied with CEP from laser shot to laser shot. Sorting the measured spectra according to the ϕ CEP of the corresponding laser pulse reveals a continuous shift of the harmonic photon energy. Fig. 2 CEP-dependent XUV emission. a Example of measured XUV spectra generated by laser pulses with π-shifted ϕ CEP . Two insets demonstrate the differences in harmonic photon energy and shape in the spectral regions 30–35 and 50–55 eV, respectively. The dashed line is a power-law fit with an exponent of −2.62. b CEP-sorted experimental XUV spectra for 71 shots at a prepulse delay τ pp = 1.67 ps ( L / λ L ≈ 0.13). c Corresponding CEP dependence obtained from PIC simulations with a scale length of L / λ L = 0.13. An additional contrast-enhancement procedure (see Methods) was applied to the measured and simulated spectra for better visualization of the harmonic photon energies. A vertical running-average smoothing of the data in b , c was applied within the CEP measurement error range of ±210 mrad. Figure 2b displays normalized and contrast-enhanced (see Methods) CEP-sorted spectra measured with a prepulse delay of τ pp = 1.67 ps, corresponding to a plasma scale length of L / λ L ≈ 0.13 (see Methods). A set of one-dimensional (1D) particle-in-cell (PIC) simulations was performed (see Methods) for the same laser intensity, a series of ϕ CEP values, and several different plasma scale lengths. Figure 2c shows the particular case L / λ L = 0.13, which is in reasonable agreement both with the experimental data in Fig. 2b and with the later evaluation in Fig. 5. Supplementary Fig. 1 shows the simulated CEP-sorted spectra for different scale lengths and indicates that, if only the results in Fig. 2 are compared, L / λ L = 0.1 agrees better. The absolute CEP in the experiments was determined by comparing simulations and experiment (see Methods). Simulations predicted an almost linear shift of the photon energy of the n th harmonic within ( ℏ ω n − ℏ ω n + 1 ) with CEP over the whole energy range. However, this shift increases with photon energy and its dependence becomes more complex. The same trend is observed in the measured spectra: the lowest harmonics shift linearly with ϕ CEP while the highest ones become more disordered. An increase of the plasma scale length also leads to a stronger and more complex CEP dependence of the harmonics in the simulations, which was confirmed experimentally by increasing the prepulse delay. These results clearly demonstrate a relativistic waveform-dependent interaction. Such an effect can be explained by treating the obtained spectra as the interference of several APs. XUV pulses appear as a result of the motion of relativistically accelerated electrons driven by the laser electric field. Different field shapes, that is, different CEPs, therefore lead to different temporal structures of the generated attotrain. Understanding this dependence represents an important step toward control of the temporal structure of attotrains and even the routine generation of isolated APs with CEP-stabilized few-cycle lasers. Spectral interference The individual intensities of the generated APs are not only connected to the field of the driver laser at the corresponding cycle but also depend on the scale length 49 .
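The shift of the harmonic photon energy with CEP follows naturally from this interference picture. As an illustration, the following toy model (a deliberate simplification, not the PIC simulation used in the paper) sums two identical APs separated by one laser period, with a relative phase assumed to track the CEP: the spectral fringes are spaced by ℏ ω 0 and shift by ℏ ω 0 /2 when the CEP changes by π, as in Fig. 2a.

```python
import numpy as np

hbar = 6.582e-16                 # eV*s
T_L = 2.55e-15                   # laser period quoted in the text (s)
E = np.linspace(20, 60, 8000)    # photon-energy grid (eV)
w = E / hbar                     # angular frequency (rad/s)

def two_pulse_spectrum(cep_phase, tau_ap=200e-18, center_ev=40.0):
    """Spectrum of two identical Gaussian APs one laser period apart whose
    relative phase is assumed to track the driver CEP (toy model)."""
    envelope = np.exp(-((w - center_ev / hbar) * tau_ap) ** 2)
    return envelope * (1.0 + np.cos(w * T_L + cep_phase))

s0, s_pi = two_pulse_spectrum(0.0), two_pulse_spectrum(np.pi)
# Fringe spacing is 2*pi*hbar/T_L ~ 1.62 eV (the laser photon energy); a pi
# CEP change moves the fringe maxima by roughly half of that, ~0.81 eV.
print(E[np.argmax(s0)], E[np.argmax(s_pi)])
```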
According to our simulations, for ϕ CEP ≈ 0 (and L / λ L ≈ 0.25) two almost equivalent APs are produced, that is, they have much higher intensity and maximum photon energy than the other pulses. Correspondingly, above a certain photon energy the generated harmonic spectrum can be considered as the interference of only these two strongest APs (see, for example, Fig. 3a). Therefore, SI 57 can be applied, and the Fourier transform (FT) of the spectrum above this energy contains, in peak P1 in Fig. 3b, information about the spectral phase difference of the two APs and the product of their spectra (a detailed explanation of the SI treatment is given in Supplementary Fig. 2, and its test with PIC simulation results in Fig. 3). Figure 3a–c represent such a range of a measured spectrum and the corresponding spectral interferometric evaluation. The product of the spectra of the two individual APs ( S AP1 and S AP2 ), \(S_{{\mathrm{P1}}} = \sqrt {S_{{\mathrm{AP1}}}S_{{\mathrm{AP2}}}}\), and their spectral phase difference without the linear term are plotted in Fig. 3c. As the two dominant APs are very similar, we assume that they have the same spectrum S P1 , directly obtained from SI. Additionally assuming that the spectral phase of one of the APs (for example, the first) is flat, which is in agreement with our simulations where all APs have flat spectral phases over a broad parameter range (see Supplementary Fig. 4) 49 , the phase of the other (second) AP becomes equal to their difference, ϕ P1 = Δ ϕ 12 = ϕ AP1 − ϕ AP2 , which is also obtained by the inverse FT of peak P1. This makes it possible to estimate the effect of the spectral phase on the AP temporal structure. Fig. 3 Spectral interferometry with two APs. a Measured XUV spectrum with ϕ CEP ≈ 0 and L / λ L ≈ 0.25 in a selected spectral range. b Corresponding FT using the 39–57 eV range. The peak P1 represents the interference of the two involved APs. The dashed line shows the temporal gate function chosen for the subsequent inverse FT. c Reconstructed combination of the two AP spectra \(S_{{\mathrm{P1}}} = \sqrt {S_{{\mathrm{AP1}}}S_{{\mathrm{AP2}}}}\) (blue) and their phase difference ϕ P1 = ϕ AP1 − ϕ AP2 without the linear term (yellow). The y-axis scale and ticks for spectral intensity match the y axis of the next plot. d Fourier-limited (blue) and phase-affected (red) AP temporal structure based on S P1 and ϕ P1 . The delay between the pulses is removed for better comparison. Figure 3d demonstrates that adding the phase ϕ P1 to a Fourier-limited AP with spectrum S P1 leaves the temporal structure practically unchanged; that is, our second assumption is not necessary, as the two APs have the same second- and higher-order spectral phase. This was observed in all investigated experimental shots having two APs, as well as in our simulations. It is important to note that the delay between the two APs, contained in the group delay (GD) of ϕ P1 and given by the position of P1, is 2.7 fs, while T L = 2.55 fs. Therefore, the second AP travelled some additional path and took an additional 150 as to reach the detector. Simulations predict 31 , 37 that the plasma is pushed by the intense laser field and that different APs are generated at different spatial positions. Using this 150-as propagation delay in the conventional optical-path-difference equations for the reflected light rays (see Methods), a spatial difference of 40 nm in the normal direction is estimated.
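The SI evaluation itself follows standard Fourier-transform spectral interferometry. The snippet below demonstrates the procedure on synthetic data (the grid, gate window, and planted phase are illustrative assumptions, not the authors' exact implementation): transform the spectrum, gate peak P1, and invert it to recover the phase difference and the combined spectrum \(\sqrt{S_{\mathrm{AP1}}S_{\mathrm{AP2}}}\).

```python
import numpy as np

hbar = 6.582e-16                       # eV*s
E = np.linspace(39, 57, 2048)          # photon-energy grid (eV), cf. Fig. 3b
w = E / hbar                           # angular frequency (rad/s)
tau = 2.7e-15                          # AP separation planted in the test data (s)

S_ap = np.exp(-(((E - 48) / 6) ** 2))  # common single-AP spectrum (synthetic)
dphi = 0.3 * ((E - 48) / 6) ** 2       # small quadratic phase difference (synthetic)
spectrum = 2 * S_ap * (1 + np.cos(w * tau + dphi))

ft = np.fft.fft(spectrum)
# Conjugate 'time' axis: the samples are spaced (w[1]-w[0])/(2*pi) apart in
# ordinary frequency, so fftfreq returns seconds.
t = np.fft.fftfreq(E.size, d=(w[1] - w[0]) / (2 * np.pi))

gate = (t > 1.5e-15) & (t < 4.0e-15)   # temporal gate around peak P1
p1_delay = t[gate][np.argmax(np.abs(ft[gate]))]

inv = np.fft.ifft(np.where(gate, ft, 0))   # inverse FT of the gated peak
phase_diff = np.unwrap(np.angle(inv))      # = w*tau + dphi (linear term included)
combined_spectrum = np.abs(inv)            # proportional to sqrt(S_AP1 * S_AP2)
print(f"recovered AP separation: {p1_delay * 1e15:.2f} fs")  # ~2.7 fs
```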
The spectral interferometry method was thus successfully applied to the analysis of XUV spectra of two APs generated by a few-cycle laser, revealing spatial and temporal information about the process. In particular, the second- and higher-order spectral phase difference between the pulses was found to be zero. Extending this method to the interference of several APs allows their individual intensities and GDs to be compared, which results in a partial reconstruction of the temporal structure of the attotrain. Plasma surface denting In the ϕ CEP ≈ π case (and L / λ L ≈ 0.4), three APs start contributing to the XUV spectrum in the same photon energy range, which is reflected in the corresponding FT. Figure 4a illustrates such an experimental spectrum. Instead of a single peak, the corresponding FT contains three peaks originating from the non-equidistant attotrain. Each of them represents interference between a different pair of APs (Fig. 4b). As the number of peaks equals the number of APs, the spectra of all three individual APs can be retrieved without any assumption, as shown in Fig. 4c (see Supplementary Figs. 2 and 3). The positions of the peaks correspond to the arrival-time differences at the detector of the interfering APs: t P3 = t P1 + t P2 . Comparison of the corresponding phase differences without the linear term (Fig. 4d) confirms this interpretation: Δ ϕ 13 ≡ ϕ P3 = ϕ P1 + ϕ P2 ≡ Δ ϕ 12 + Δ ϕ 23 . Assuming a flat spectral phase for any one of the APs allows the other two phases to be represented through the phase differences Δ ϕ 12 and Δ ϕ 23 . Applying such phases to the measured Fourier-limited individual APs, their durations again show no elongation. These observations are consistent with the numerical predictions that all APs generated in the measurement were (atto)chirp-free. Thus, with the previous assumption, the FT provides the temporal structure of the attotrain (see Fig. 6a). The ratios between the AP intensities are, however, not influenced by this assumption. Fig. 4 Spectral interferometry of three APs. a Measured XUV spectrum with ϕ CEP ≈ π and L / λ L ≈ 0.5 within a selected spectral range. b Corresponding FT using the 33–57 eV range. Peaks P1, P2, and P3 represent the interference of different pairs of the three involved APs: AP2 & AP3, AP1 & AP2, and AP1 & AP3, respectively. Dashed lines show the applied temporal gate functions for the inverse FT. c Reconstructed individual spectra of all three APs within the considered spectral range. d Corresponding spectral phase differences without the linear term. The black dashed line illustrates the sum ϕ P1 + ϕ P2 , which matches ϕ P3 well. A −π/2 constant is added to ϕ P3 and to the phase-sum line for better visibility. e Reconstructed plasma surface dynamics, based on the calculated relative coordinates of the three APs (see Methods). f Result of a 1D LPIC simulation of the electron density in the plasma driven by a laser field with pulse parameters similar to the experiment ( ϕ CEP = π and L / λ L = 0.4). The green diamonds mark the points where the XUV pulses are generated and are obtained from the interferometric evaluation of the simulated spectrum. The color bar indicates the normalized electron density n / n c . The blue line is a parabolic interpolation based on these three generation points, which coincides well with the averaged plasma surface denting.
Green lines show the propagation directions of the generated XUV pulses. From the arrival-time differences of the APs at the detector, the relative spatial and temporal coordinates of the AP generation are derived, as shown in Fig. 4e. The time-direction ambiguity is eliminated by comparison with simulations, in which the GD difference of the first and second pulses is always much larger. This curve is explained by so-called plasma denting: the movement of the plasma surface caused by the push of the intense laser field. Figure 4f shows the simulated electron density in the laboratory system, normalized to the critical density ( n / n c ), during the interaction with the laser pulse. The averaged shape of the plasma surface within one optical cycle is very close to the parabola formed by the birthplace coordinates of the APs obtained from the interferometric evaluation of the simulated spectrum, which holds over a broad parameter range, as shown in Supplementary Fig. 5. Therefore, the relative coordinates of the generation points of the APs reconstruct the motion of the plasma surface. Figure 4f demonstrates such a parabolic interpolation with parameters similar to those in Fig. 4e. A softer plasma (longer L ) is expected to experience stronger denting, which leads to a larger interval between peaks P1 and P2. However, in some particular cases these peaks partially overlap or merge completely into a single peak. This is possible when the plasma is very dense and the corresponding denting is negligible. In such cases, the GD differences are derived using only this merged peak together with P3, but without the possibility of checking higher-order spectral phase differences. A comparison of such reconstructions for different CEP values and plasma scale lengths is shown in Fig. 5a. The represented CEP interval corresponds to a P3 peak amplitude above the noise level in the FT of the measured spectra; that is, three significant APs are present in the selected spectral range. Figure 5a shows that an increase of the prepulse delay, that is, a longer plasma scale length, leads to stronger denting. It also illustrates that the reconstructed parabolas depend significantly on the CEP of the driving pulse. Variations in the depth of denting caused by a CEP change are smaller than those caused by a significant increase of the plasma scale length. Fig. 5 Waveform-dependent relativistic plasma surface denting. a Parabolic interpolation of the plasma surface motion evaluated from the measured XUV spectra. Red, blue, and yellow solid lines correspond to the plasma scale length L / λ L ≈ 0.13 and respective ϕ CEP values of 7π/6, π/2, and 5π/3, that is, the center and the edges of the CEP range leading to a P3 peak in the FT above the noise level. The red dashed line shows the plasma surface denting for a significantly softer plasma, L / λ L ≈ 0.5, and ϕ CEP ≈ 7π/6. Lines are fits based on the mean values of the generation points (within ϕ CEP ± Δ ϕ RMS ); shaded areas correspond to the standard deviation. b Similar reconstruction based on the simulated spectra. The CEP values are the same as above, and the scale lengths are L / λ L = 0.13 for the solid lines and L / λ L = 0.4 for the dashed line. For all cases, the spatial and temporal coordinates of the first AP are used as reference. Nevertheless, the effect unambiguously demonstrates that, for a few-cycle driving laser pulse, even the curvature of the reconstructed parabolas, without relying on their origin (the overlapping coordinate), is field dependent.
Therefore, the plasma surface dynamics are also expected to be CEP dependent, in contrast to the case of long driving pulses, which lead to only intensity-dependent plasma surface motion 31 . Figure 5b shows a similar comparison of the plasma surface denting for the simulated spectra. Both effects, the significant dependence on CEP and that on plasma scale length, are in qualitative agreement with the experiment. This agreement, together with the visual comparison between Fig. 2b and c, was the criterion for choosing the scale length in the simulations. The differences between experiment and 1D simulations might originate from multi-dimensional effects, as in the λ 3 regime 58 . We conclude that these experimental observations and simulations support a field-dependent plasma denting phenomenon. Isolated AP The spectrum in Fig. 4a corresponds to a dominant central AP, which is at least five times stronger than the side APs, as visible in the calculated temporal structure in Fig. 6a. This relation is also predictable from the FT; according to the equations (Supplementary Fig. 2), stronger P1 and P2 peaks relative to P3 encode a more dominant central AP. At certain CEP values, which depend on the plasma scale length and the laser intensity, P3 vanishes below the noise level while peaks P1 and P2 remain well observable. Fig. 6 Highly isolated AP. a Reconstructed temporal structure of the attotrain based on the individual spectra and corresponding phase differences from Fig. 4c, d, assuming a flat phase for one of the pulses, as observed in our simulations. b Measured spectrum (blue) with ϕ CEP ≈ 11π/6 and L / λ L ≈ 0.5 in comparison with a highly modulated spectrum (red) resulting from two equally strong APs ( ϕ CEP ≈ π ). c Corresponding FT of the measured XUV spectrum attributed to a highly isolated AP. Peaks P1 and P2 represent the interference of this dominant AP with the weak side pulses. P3 is below the noise level owing to the very low spectral intensity of the side APs. d Reconstructed temporal structure demonstrating a minimum isolation degree of 30 for the central AP. Inset: AP intensities on a logarithmic scale with different normalization for better comparison. This indicates that the two side APs are much weaker than the central one. Figure 6b shows such an experimental spectrum (blue line), and its FT is plotted in Fig. 6c. Here, only the two peaks representing the interference of the side APs with the central one are visible. The positions of P1 and P2 nevertheless predict the position of P3. Therefore, the analysis was repeated without a clear P3 peak by using the noise signal at the corresponding FT position instead of a real P3 amplitude. In this case, the reconstruction of the individual AP spectra is not accurate, but this only leads to an underestimation of the real isolation degree. The corresponding temporal structure, with the actual delays given by the spectral phase differences, is presented in Fig. 6d. It demonstrates a minimum (intensity) isolation degree of 30 for the central AP, which can definitely be considered an isolated AP. These relativistic HHG results clearly support the generation of isolated APs from solid-density plasma surfaces. Discussion We have presented experimental results showing CEP-dependent high-order harmonics from relativistic PMs supporting a well-isolated AP. A unique interpretation of the harmonic spectra as an SI trace of a few APs, and the corresponding evaluation, delivers formerly unattainable information about the spectrum and the spectral phase difference between the pulses, including their temporal separation.
This SI analysis reveals that the APs in the attotrain have the same second- and higher-order spectral phase; that is, all of them are compressed or chirped in the same way. Based on simulations, the pulses are expected to be unchirped (no attochirp). Furthermore, the analysis provides the dynamics of the denting of the solid-density reflecting plasma surface without any spatial measurement. Based on this observation, the plasma motion is expected to depend on the field of the driver laser. Our results open the way toward next-generation XUV and X-ray sources based on relativistic PMs. These sources promise unprecedented photon numbers by utilizing multi-TW to petawatt laser powers, photon energies in the multi-keV range, and a corresponding coherent extreme X-ray continuum supporting few-attosecond or even zeptosecond pulses. These extraordinary properties support pioneering applications such as nonlinear attosecond physics, attosecond-pump–attosecond-probe spectroscopy of atoms and molecules in the XUV and X-ray spectral regions, attosecond X-ray diffraction, and time-resolved nuclear photonics. Methods Experimental set-up The laser energy of LWS-20 was reduced by losses on the PM and its optics ( T = 70%) and in the vacuum beamline ( T = 76%). In the experimental vacuum chamber, the laser beam was focused with an f/1.2 OAP mirror to a spot size of d FWHM = 1.3 μm, which was carefully optimized with feedback loops utilizing the adaptive mirror and the wavefront sensor in the laser. The large fused silica target was translated after every shot to provide a fresh surface. The long-term reliable synchronization of the translation stages limited the applied repetition rate to 1 Hz, although the driving laser can operate at 10 Hz. To tag the CEP, a small portion of the beam was reflected by a pellicle beam-splitter and directed into the phasemeter. Before all experimental runs with tagging to determine the relative on-target CEP of each laser pulse, preliminary shots were accumulated to obtain statistics on the CEP measurement error, which was typically Δ ϕ RMS = 210 mrad. The absolute CEP on the target was obtained from the relative values by comparing the Fourier transform of the experimental CEP-sorted spectra with that from the PIC simulations, such that the third AP, indicated by the 5–6 fs peak, appears in the same CEP range. Scale length The characteristic plasma scale length L is used to describe the plasma expansion and is defined by \(n(x) = n_0\exp ( - x/L)\) . Prepulse generation A small quarter-inch mirror was installed on a motorized stage in front of the last mirror before focusing. It reflected a small portion of the main beam in the same direction as the main beam after the last mirror, but along a shorter optical path, so that it arrived before the main pulse by an adjustable time called the prepulse delay. The prepulse mirror was inserted with its reflecting surface backward 34 : the beam propagated through the thin glass layer before and after being reflected. This provided the extra delay needed to reach zero delay while the prepulse mirror is in front of the main mirror plane. The mirror position corresponding to zero delay was calibrated using the interference between the main pulse and the prepulse observed at focus. The prepulse delay was adjustable in the range from −1.5 to 12 ps, where negative values indicate a postpulse instead of a prepulse and were used as the "no prepulse" case during harmonic generation.
The intensity of the generated prepulse at focus was estimated to be 10 −4 –10 −5 of the main pulse, according to the pulse elongation in the glass and the CCD signal at focus, which indicated a 6 times larger spot size and 100 times lower energy than the main pulse. Supplementary Fig. 6 shows the dependence of the measured XUV signal intensity on the prepulse delay. The zero-level signal without a prepulse (at −0.33 ps) indicates a plasma too steep for ROM generation, that is, a satisfactorily good high-dynamic-range contrast of the main pulse. The clear maximum around 3 ps and the subsequent degradation (by up to a factor of five) prove the usability of this method for controlling the plasma scale length. The scale length for a given prepulse delay was obtained from hydrodynamic simulations with the MEDUSA code 56 . XUV spectra contrast enhancement The spectral amplitude of the measured XUV signal falls very rapidly with increasing photon energy. In order to represent harmonics within a broad spectral range in one figure and to observe their photon energies more easily, the following steps were applied successively to each individual spectrum: (1) bandpass filtering (between 1 and 12 fs); (2) magnification of the higher-photon-energy part by dividing the spectrum by a fourth-order polynomial fit; and (3) normalization to the maximum oscillation amplitude. It was carefully checked that none of these modifications changes the photon energy of the harmonics. PIC simulation The 1D PIC simulations were performed using the code LPIC++ 7 . The incident laser has an electric field waveform \(E_y^{{\mathrm{inc}}} = a_{\mathrm{L}}\exp \{ - 2\ln 2[(t - t_{\mathrm{p}})/\tau _{\mathrm{L}}]^2\} \cos [2\pi (t - t_{\mathrm{p}})/T_L + \phi _{{\mathrm{CEP}}}]\) , with a L the normalized vector potential, τ L the intensity FWHM pulse duration normalized to the laser period T L = λ L / c , ϕ CEP the laser CEP, and the field direction defined such that, for a cosine pulse, the most intense half-cycle points outwards from the target. Throughout this paper, a L = 6 is assumed, which corresponds to a laser intensity of 8.4 × 10 19 W cm −2 for a laser central wavelength λ L of 765 nm. The density profile of the interacting plasma has an exponential interface layer in front of a constant-density slab layer. The density of the interface layer rises from 0.1 n c up to a maximum of 400 n c , that is, the density of glass targets when fully ionized, with a scale length of L , where n c is the critical electron density at the laser wavelength. The slab layer has a thickness of 2 λ L . The p-polarized laser pulse is incident on the plasma at an angle α inc = 55°. In LPIC++, this oblique-incidence geometry is transformed into a 1D case using the Bourdier technique 59 . Plasma denting estimation For two light rays reflected in vacuum from two parallel planes separated by Δx in the normal direction, at the given incidence angle (see Supplementary Fig. 7), the optical path difference (Δ d opt ) and the corresponding delay (Δ t arr ) are \(\Delta d_{{\mathrm{opt}}} = 2\Delta x\cos (\alpha _{{\mathrm{inc}}}) = c\Delta t_{{\mathrm{arr}}}\) , which together with the T L delay leads to the arrival-time difference at the detector between two consecutive APs, Δ τ arr = T L + Δ t arr = T L + Δ d opt / c .
Assuming that AP generation happens simultaneously with the reflection of the laser pulse, that is, that the APs are generated at the same phase in different optical cycles of the driving laser, and that their further propagation is coincident, the position of the corresponding peak in the FT can be attributed to this time difference. Thus, the spatial and temporal coordinates of AP generation (relative to each other) can be expressed as $$\begin{array}{l}\Delta x = (\Delta \tau _{{\mathrm{arr}}} - T_{\mathrm{L}})c/[2\cos (\alpha _{{\mathrm{inc}}})]\\ \Delta t_{{\mathrm{gen}}} = T_{\mathrm{L}} + \Delta x\cos (\alpha _{{\mathrm{inc}}})/c.\end{array}$$ For an attotrain of three pulses, using the first point as the origin ( t = 0, x = 0), the other two points with their respective relative coordinates Δ x and Δ t gen define a parabola that describes the cycle-averaged plasma surface denting in the y = 0 plane of the laboratory system. These assumptions are supported by the good agreement in the simulations between the plasma surface motion and the evaluated parabolas (see Fig. 4f and Supplementary Fig. 5). Data availability The data that support the findings of this study are available from the corresponding author upon reasonable request.
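As a numerical cross-check of these relations, the following minimal sketch (inputs are the values quoted in the Results; it is not part of the published analysis) reproduces the roughly 40 nm denting step from the 2.7 fs P1 position:

```python
import math

c = 2.998e8                      # speed of light (m/s)
T_L = 2.55e-15                   # laser period (s)
alpha_inc = math.radians(55.0)   # angle of incidence

def generation_coordinates(delta_tau_arr):
    """Relative AP generation coordinates from the arrival-time difference
    between consecutive APs (the two Methods equations above)."""
    dx = (delta_tau_arr - T_L) * c / (2.0 * math.cos(alpha_inc))
    dt_gen = T_L + dx * math.cos(alpha_inc) / c
    return dx, dt_gen

# P1 position quoted in the Results: 2.7 fs, i.e. 150 as more than T_L.
dx, dt_gen = generation_coordinates(2.7e-15)
print(f"denting step: {dx * 1e9:.0f} nm")   # ~39 nm, matching the ~40 nm estimate
```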
When a dense sheet of electrons is accelerated to almost the speed of light, it acts as a reflective surface. Such a 'plasma mirror' can be used to manipulate light. Now an international team of physicists from the Max Planck Institute of Quantum Optics, LMU Munich, and Umeå University in Sweden has characterized this plasma-mirror effect in detail, and exploited it to generate isolated, high-intensity attosecond light flashes. An attosecond lasts for a billionth of a billionth (10^-18) of a second. The interaction between extremely powerful laser pulses and matter has opened up entirely new approaches to the generation of ultrashort light flashes lasting for only a few hundred attoseconds. These extraordinarily brief pulses can in turn be used to probe the dynamics of ultrafast physical phenomena at sub-atomic scales. The standard method used to create attosecond pulses is based on the interaction of near-infrared laser light with the electrons in atoms of noble gases such as neon or argon. Now researchers at the Laboratory for Attosecond Physics at the Max Planck Institute of Quantum Optics in Garching and Munich's Ludwig-Maximilians-Universität (LMU), in collaboration with colleagues at Umeå University, have successfully implemented a new strategy for the generation of isolated attosecond light pulses. In the first step, extremely powerful femtosecond (10^-15 s) laser pulses are allowed to interact with glass. The laser light vaporizes the glass surface, ionizing its constituent atoms and accelerating the liberated electrons to velocities equivalent to an appreciable fraction of the speed of light. The resulting high-density plasma of rapidly moving electrons, which propagates in the same direction as the pulsed laser light, acts like a mirror. Once the electrons have attained velocities that approach the speed of light, they become relativistic and begin to oscillate in response to the laser field. The ensuing periodic deformation of the plasma mirror interacts with the reflected light wave to give rise to isolated attosecond pulses. These pulses have an estimated duration of approximately 200 as and wavelengths in the extreme ultraviolet region of the spectrum (20-30 nanometers, 40-60 eV). In contrast to attosecond pulses generated with longer laser pulses, those produced by the plasma-mirror effect with laser pulses lasting only a few optical cycles can be precisely controlled via the laser waveform. This also allowed the researchers to observe the time course of the generation process, i.e. the oscillation of the plasma mirror. Importantly, these pulses are much more intense, i.e. contain far more photons, than those obtainable with the standard procedure. The increased intensity makes it possible to carry out still more precise measurements of the behaviour of subatomic particles in real time. Attosecond light pulses are primarily used to map electron motions, and thus provide insights into the dynamics of fundamental processes within atoms. The higher the intensity of the attosecond light flashes, the more information can be gleaned about the motions of particles within matter. With the practical demonstration of the plasma-mirror effect as a means to generate bright attosecond light pulses, the authors of the new study have developed a technology that will enable physicists to probe even deeper into the mysteries of the quantum world.
10.1038/s41467-018-07421-5
Space
Evidence of a local hot bubble carved by a supernova
"The origin of the local 1/4-keV X-ray flux in both charge exchange and a hot bubble." M. Galeazzi, et al. Nature (2014) DOI: 10.1038/nature13525. Received 20 March 2014 Accepted 14 May 2014 Published online 27 July 2014 Journal information: Nature
http://dx.doi.org/10.1038/nature13525
https://phys.org/news/2014-07-evidence-local-hot-supernova.html
Abstract The solar neighbourhood is the closest and most easily studied sample of the Galactic interstellar medium, an understanding of which is essential for models of star formation and galaxy evolution. Observations of an unexpectedly intense diffuse flux of easily absorbed 1/4-kiloelectronvolt X-rays 1 , 2 , coupled with the discovery that interstellar space within about a hundred parsecs of the Sun is almost completely devoid of cool absorbing gas 3 , led to a picture of a 'local cavity' filled with X-ray-emitting hot gas, dubbed the local hot bubble 4 , 5 , 6 . This model was recently challenged by suggestions that the emission could instead be readily produced within the Solar System by heavy solar-wind ions exchanging electrons with neutral H and He in interplanetary space 7 , 8 , 9 , 10 , 11 , potentially removing the major piece of evidence for the local existence of million-degree gas within the Galactic disk 12 , 13 , 14 , 15 . Here we report observations showing that the total solar-wind charge-exchange contribution is approximately 40 per cent of the 1/4-keV flux in the Galactic plane. The fact that the measured flux is not dominated by charge exchange supports the notion of a million-degree hot bubble extending about a hundred parsecs from the Sun. Main When the highly ionized solar wind interacts with neutral gas, an electron may hop from a neutral to an outer orbital of an ion, in what is known as charge exchange. The electron then cascades to the ground state of the ion, often emitting soft X-rays in the process 16 . The calculations of X-ray intensity from solar-wind charge exchange depend on limited information about heavy ion fluxes and even more uncertain atomic cross-sections. The 'Diffuse X-rays from the Local galaxy' (DXL) sounding rocket mission 17 was launched from the White Sands Missile Range in New Mexico, USA, on 12 December 2012 to make an empirical measurement of the charge exchange flux by observing a region of higher interplanetary neutral density (with a correspondingly higher charge exchange rate) called the 'helium focusing cone'. Neutral interstellar gas flows at about 25 km s −1 through the Solar System owing to the motion of the Sun through a small interstellar cloud. This material, mostly hydrogen atoms but about 15% helium, flows from the Galactic direction (longitude l , latitude b ) ≈ (3°, 16°), placing Earth downstream of the Sun in early December 18 . The trajectories of the neutral interstellar helium atoms are governed primarily by gravity, executing hyperbolic Keplerian orbits and forming a relatively high-density focusing cone downstream of the Sun about 6° below the ecliptic plane ( Fig. 1 ) 19 . Interstellar hydrogen, on the other hand, is also strongly affected by radiation pressure and photoionization: radiation pressure balances gravity, reducing the focusing effect, while photoionization creates a neutral hydrogen cavity around the Sun. Figure 1: The He focusing cone. Modelled interstellar He density (blue is low density; red is high density) showing the He focusing cone. Keplerian He orbits, Earth's orbit, and the DXL and ROSAT observing geometries are also shown. The early December launch of DXL placed the He focusing cone near the zenith at midnight.
The 7° field of view was scanned slowly back and forth across one side of the cone and more rapidly in a full circle, to test the consistency of the derived charge exchange contribution in other directions and to measure the detector particle background while DXL was looking towards Earth ( Extended Data Fig. 1 ). Figure 2 shows the ROSAT All Sky Survey 1/4-keV map 20 (R12 band) with the paths of the DXL slow scan (red) and fast scan (white) overplotted. The ROSAT observation of the slow-scan region was performed in September 1990, when the line of sight was about one astronomical unit (the Earth–Sun distance, 1 au ) away from, and parallel to, the He cone, so its charge exchange contribution was not strongly affected by the cone enhancement ( Fig. 1 ). Figure 2: The DXL scan path. ROSAT all-sky survey map in the 1/4-keV (R12) energy band, shown in Galactic coordinates (contours are labelled in degrees) with l = 180°, b = 0° at the centre. The colour scale shows flux intensity. The units are ROSAT units, RU. The DXL scan path is the white band, with the slow portion shown in red. The black line is the 90° horizon for the DXL flight. The width of the band represents the half-power diameter of the instrument beam. For this analysis, we chose pulse-height limits for both of the DXL proportional counters (Counter-I and Counter-II) to match the pulse heights of the ROSAT 1/4-keV band as closely as possible ( Extended Data Fig. 2 ). This energy range is dominated by, and contains most of, the emission from solar-wind charge exchange and/or the local hot bubble. To quantify the solar-wind charge exchange emission, we compared both DXL and ROSAT count rates to well-determined models of the interplanetary neutral distribution along the lines of sight for both sets of measurements ( Fig. 3 ) 17 , 21 . Figure 4 shows the DXL and ROSAT count rates along the DXL scan path as functions of Galactic longitude. The figure shows the combined Counter-I and Counter-II count rates (black dots) during the DXL scan and the ROSAT 1/4-keV count rates in the same directions (blue solid line). The best fit to the DXL total count rate (red solid line), and the solar-wind charge exchange contributions to the DXL (red dashed line) and ROSAT (blue dashed line) rates, are also shown (see Table 1 for the best-fit parameters; the model shown corresponds to the second column). There is potentially an additional contribution from charge exchange between the solar-wind ions and the geocoronal hydrogen surrounding Earth, which tracks the short-term variations in the solar-wind flux. Time variations of a few days or less were removed from the ROSAT maps, and the current best estimate of the residual from geocoronal charge exchange is about 50 ROSAT units (1 RU = 10 −6 counts s −1 arcmin −2 ) for the ROSAT 1/4-keV band (K.D.K., J. Carter, M.P.C., Y. M. Colladovega, M.R.C., T.E.C., D.K., F.S.P., A. Read, I.P.R., D. G. Sibeck, S. F. Sembay, S.L.S., N.E.T. and D.M.W., manuscript in preparation). The geocoronal contribution to the DXL flux should be negligible, owing to the look direction, which is directly away from the Sun. The signature of the cone enhancement in the DXL data compared to the ROSAT rates is evident, highlighting the contribution from charge exchange. However, the best fit shows that the total charge exchange contribution to ROSAT is only about 40% ± 5% (statistical error) ± 5% (systematic error) of the total flux observed at the Galactic plane.
Its contribution to the ROSAT flux over the DXL scan path is typically about 140 RU. For comparison, the total ROSAT 1/4-keV flux ranges from around 300–400 RU in the Galactic plane up to 1,400 RU in the brightest areas at intermediate and high latitudes. This result implies that the measured fluxes are dominated by interstellar emission, strengthening the original idea of a hot bubble filling the local interstellar medium for a hundred parsecs or so in all directions from the Sun. Figure 3: Neutral atom column density for DXL and ROSAT. Neutral column density distribution integrals for each line of sight along the scan path. The density distribution in the integrals is weighted by one over the distance from the Sun squared (1/ R 2 ) to reflect the dilution of the solar wind as it flows outward. The red lines are the integrals for He (solid) and H (dashed) in the DXL geometry. The blue lines represent the integrals for He (solid) and H (dashed) in the ROSAT geometry. The black line shows the Galactic latitude during the scan. DXL is significantly more affected by the He focusing cone, while in both cases the H contribution is small. Figure 4: Fit to DXL and ROSAT data. Combined Counter-I and Counter-II count rates (black dots) during the DXL scan and the ROSAT 1/4-keV count rate in the same directions (blue solid line). The best fit to the DXL total count rate (red solid line), and the solar-wind charge exchange contributions to the DXL (red dashed line) and ROSAT 1/4-keV (blue dashed line) bands, are also shown. The error bars are s.e.m. Table 1 Best-fit model parameters. It has been pointed out that a hot bubble creates an apparent pressure-balance problem with the tenuous warm cloud that the Sun is passing through. However, recent results on the magnetic contribution to the cloud pressure 22 and new three-dimensional maps of the local interstellar medium 23 bring the implied pressure of the plasma in the local hot bubble to rough agreement with the pressures derived for the local interstellar clouds, once the measured contribution from solar-wind charge exchange is removed from the local hot bubble emission 24 . Methods The total count rate due to charge exchange with H and He is the integral along the line of sight of the product of the solar-wind ion flux ( n i v rel ), the donor densities n H and n He , as functions of position, and the sum over the products of the partial cross-sections for producing each X-ray line by charge exchange and the efficiency of producing a detector count from that line: $$C = \mathop {\sum}\limits_i {\int} {n_iv_{{\mathrm{rel}}}\left[ {n_{\mathrm{H}}\mathop {\sum}\limits_j {\sigma _{i{\mathrm{H}}}b_{ij}g_j} + n_{{\mathrm{He}}}\mathop {\sum}\limits_j {\sigma _{i{\mathrm{He}}}b_{ij}g_j} } \right]{\mathrm{d}}l} ,$$ where i represents the solar-wind species, j the emission lines for that species, σ i are the speed-dependent interaction cross-sections for individual species, b ij is the line branching ratio, g j is the instrument's efficiency for detecting photons in line j , and v rel is the relative speed between solar wind and neutral flow (both the bulk and thermal velocity).
We can then write the ion density n i in terms of the proton density n p at R 0 = 1 au , assuming that it scales as one over the square of the distance R from the Sun (we verified that neutralization effects on the solar-wind ions are negligible), and define the compound cross-section as: $$\alpha _{{\mathrm{H}},{\mathrm{He}}} = \mathop {\sum}\limits_i {\frac{{n_i}}{{n_{\mathrm{p}}}}\mathop {\sum}\limits_j {\sigma _{i;{\mathrm{H}},{\mathrm{He}}}b_{ij}g_j} }.$$ In the case of 'constant' solar-wind conditions, the solar-wind flux can be removed from the integrals, and the total charge exchange rate with H and He can be written as: $$C = n_{\mathrm{p}}(R_0)v_{{\mathrm{rel}}}R_0^2\left[ {\alpha _{\mathrm{H}}{\int} {\frac{{n_{\mathrm{H}}}}{{R^2}}{\mathrm{d}}l} + \alpha _{{\mathrm{He}}}{\int} {\frac{{n_{{\mathrm{He}}}}}{{R^2}}{\mathrm{d}}l} } \right].$$ The assumption of isotropy of the ion flux in the equation above is an approximation, because the flux is known to vary strongly on timescales of about a day. Evidence that averaging over the few-week transit time through the relevant interplanetary region smooths these fluctuations is found in the very good agreement between four complete sky surveys performed years apart by different missions 20 . A factor accounting for the difference in the solar-wind flux between ROSAT and DXL is included in our fitting procedure, and its best value is reported in Table 1 . In this work, we used the expected H and He distributions 21 , adapted to the solar-wind conditions during the respective missions, to calculate the integrals for both H and He for each point along the DXL scan path. The distribution of the interplanetary neutrals is calculated from the solar ionization conditions derived from measurements of backscattered solar radiation and checked by in situ sampling, so the integrals above can be calculated with some confidence for all lines of sight. We took as free parameters the combination n p ( R 0 ) v rel α He , the ratio of solar-wind fluxes for the two missions ( n p ( R 0 ) v rel ) DXL /( n p ( R 0 ) v rel ) ROSAT , and correction factors to fine-tune the calculated ratios of the DXL counter responses to the ROSAT 1/4-keV band response. We then performed a global least-squares fit to both DXL counter rates for each point along the scan path. There is insufficient variation in the hydrogen column densities to determine its effective cross-section, so for α H / α He we assumed ratios of one, as used in the calculated contributions, and of two, since some determinations show smaller cross-sections for helium. The residual contribution to ROSAT from charge exchange in Earth's geocorona has in the past been assumed to be negligible, owing to the good agreement with other all-sky surveys carried out with different observing geometries. More recent analyses suggest that the value should be more like 50 RU. Table 1 shows the best-fit parameters for several different assumptions about the residual geocoronal charge exchange contribution to the ROSAT rates and about the ratio of effective cross-sections for hydrogen and helium. The total solar-wind charge exchange contribution is minimally affected. A systematic error has been included in our results to account for the variation in the table. There may be other potential sources of systematic uncertainty affecting our result. These include the contribution of point sources to the DXL count rate, scattered solar X-rays, higher-energy photons leaking into the DXL bands, and a highly non-isotropic solar-wind flux. We estimated the first three and found their contributions to our result to be within a few per cent; for the fourth, there is no evidence of a non-isotropic solar wind across the DXL slow scan that would significantly (within error) affect our result.
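To make the reduced 'constant solar wind' form concrete, the sketch below evaluates such 1/R²-weighted line-of-sight integrals numerically for an anti-solar look direction. The density profiles are crude toy models, not the calibrated interplanetary neutral model of refs 17 and 21; the sketch only illustrates how the integrals entering the fit are computed.

```python
import numpy as np

AU = 1.496e11                          # astronomical unit (m)
s = np.linspace(0.0, 50 * AU, 20000)   # path length from Earth, anti-solar look
R = AU + s                             # heliocentric distance along this ray (m)

# Toy neutral densities (per m^3): flat He (ignoring the focusing cone) and an
# H profile with a crude photoionization cavity near the Sun. Both are
# placeholders for the calibrated distributions.
n_He = 0.015e6 * np.ones_like(R)
n_H = 0.10e6 * (1.0 - np.exp(-R / (4 * AU)))

I_He = np.trapz(n_He / R ** 2, s)      # the 1/R^2 weight dilutes the solar wind
I_H = np.trapz(n_H / R ** 2, s)
# The rate is then proportional to n_p(R0)*v_rel*(alpha_H*I_H + alpha_He*I_He).
print(f"I_He = {I_He:.3e}, I_H = {I_H:.3e}  (toy units)")
```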
I spent this past weekend backpacking in Rocky Mountain National Park, where although the snow-swept peaks and the dangerously close wildlife were staggering, the night sky stood in triumph. Without a fire, the stars, a few planets, and the surprisingly bright Milky Way provided the only light to guide our way. But the night sky as seen by the human eye is relatively dark. Little of the visible light stretching across the cosmos from stars, nebulae, and galaxies actually reaches Earth. The entire night sky as seen by an X-ray detector, however, glows faintly. The origin of the soft X-ray glow permeating the sky has been hotly debated for the past 50 years. But new findings show that it comes from both inside and outside the Solar System. Decades of mapping the sky in X-rays with energies around 250 electron volts (about 100 times the energy of visible light) revealed soft emission across the sky, and astronomers have long searched for its source. At first, astronomers proposed a "local hot bubble" of gas, likely carved by a nearby supernova explosion during the past 20 million years, to explain the X-ray background. Improved measurements made it increasingly clear that the Sun resides in a region where interstellar gas is unusually sparse. But the local bubble explanation was challenged when astronomers realized that comets were an unexpected source of soft X-rays. In fact, this process, known as solar wind charge exchange, can occur anywhere atoms interact with solar wind ions. [Image caption: Colors indicate the density of interstellar helium near Earth and its enhancement in a downstream cone as the neutral atoms respond to the Sun's gravity (blue is low density, red is high). Also shown are the observing angles for DXL and ROSAT. Credit: NASA's Goddard Space Flight Center] After this discovery, astronomers turned their eyes to within the Solar System and began to wonder whether the X-ray background might be produced by the ionized particles in the solar wind colliding with diffuse interplanetary gas. In order to solve the outstanding mystery, a team of astronomers led by Massimiliano Galeazzi of the University of Miami developed an X-ray instrument capable of taking the necessary measurements. Galeazzi and colleagues rebuilt, tested, calibrated, and adapted X-ray detectors originally designed by the University of Wisconsin and flown on sounding rockets in the 1970s. The mission was named DXL, for Diffuse X-ray emission from the Local Galaxy. On Dec. 12, 2012, DXL launched from the White Sands Missile Range in New Mexico atop a NASA Black Brant IX sounding rocket. It reached a peak altitude of 160 miles and spent a total of five minutes above Earth's atmosphere. The data collected show that the emission is dominated by the local hot bubble, with, at most, 40 percent originating from within the Solar System. "This is a significant discovery," said lead author Massimiliano Galeazzi from the University of Miami in a press release. "Specifically, the existence or nonexistence of the local bubble affects our understanding of the galaxy in the proximity to the Sun and can be used as foundation for future models of the Galaxy structure." It's now clear that the Solar System is currently passing through a small cloud of cold interstellar gas as it moves through the Milky Way. The cloud's neutral hydrogen and helium atoms stream through the Solar System at about 56,000 mph (90,000 km/h). The hydrogen atoms quickly ionize, but the helium atoms travel along paths largely governed by the Sun's gravity.
This creates a helium focusing cone, a breeze focused downstream from the Sun with a much greater density of neutral atoms. These atoms readily undergo charge exchange with solar wind ions, producing soft X-rays. The confirmation of the local hot bubble is a significant development in our understanding of the interstellar medium, which is crucial for understanding star formation and galaxy evolution. "The DXL team is an extraordinary example of cross-disciplinary science, bringing together astrophysicists, planetary scientists, and heliophysicists," said coauthor F. Scott Porter from NASA's Goddard Space Flight Center. "It's unusual but very rewarding when scientists with such diverse interests come together to produce such groundbreaking results."
10.1038/nature13525
Medicine
Peering into single cells reveals key processes in acute kidney injury
Christian Hinze et al., Single-cell transcriptomics reveals common epithelial response patterns in human acute kidney injury, Genome Medicine (2022). DOI: 10.1186/s13073-022-01108-9 Jan Klocke et al., Urinary single-cell sequencing captures kidney injury and repair processes in human acute kidney injury, Kidney International (2022). DOI: 10.1016/j.kint.2022.07.032 Journal information: Kidney International , Genome Medicine
https://dx.doi.org/10.1186/s13073-022-01108-9
https://medicalxpress.com/news/2022-10-peering-cells-reveals-key-acute.html
Abstract Background Acute kidney injury (AKI) occurs frequently in critically ill patients and is associated with adverse outcomes. Cellular mechanisms underlying AKI and kidney cell responses to injury remain incompletely understood. Methods We performed single-nuclei transcriptomics, bulk transcriptomics, molecular imaging studies, and conventional histology on kidney tissues from 8 individuals with severe AKI (stage 2 or 3 according to Kidney Disease: Improving Global Outcomes (KDIGO) criteria). Specimens were obtained within 1–2 h after individuals had succumbed to critical illness associated with respiratory infections, with 4 of 8 individuals diagnosed with COVID-19. Control kidney tissues were obtained post-mortem or after nephrectomy from individuals without AKI. Results High-depth single cell-resolved gene expression data of human kidneys affected by AKI revealed enrichment of novel injury-associated cell states within the major cell types of the tubular epithelium, in particular in proximal tubules, thick ascending limbs, and distal convoluted tubules. Four distinct, hierarchically interconnected injured cell states were distinguishable and characterized by transcriptome patterns associated with oxidative stress, hypoxia, interferon response, and epithelial-to-mesenchymal transition, respectively. Transcriptome differences between individuals with AKI were driven primarily by the cell type-specific abundance of these four injury subtypes rather than by private molecular responses. AKI-associated changes in gene expression were similar between individuals with and without COVID-19. Conclusions The study provides an extensive resource of the cell type-specific transcriptomic responses associated with critical illness-associated AKI in humans, highlighting recurrent disease-associated signatures and inter-individual heterogeneity. Personalized molecular disease assessment in human AKI may foster the development of tailored therapies. Background Acute kidney injury (AKI) is a frequently observed clinical syndrome that is associated with high morbidity and mortality [ 1 , 2 , 3 , 4 , 5 , 6 ]. More than 10% of all hospitalized individuals and more than 50% of critically ill individuals admitted to intensive care units develop AKI [ 2 , 3 , 7 ]. Despite its extensive clinical and economic impact, AKI therapy is largely limited to best supportive care and kidney replacement therapies (hemodialysis or hemofiltration) in patients with advanced kidney failure [ 8 , 9 , 10 ]. Targeted therapies preventing AKI or fostering recovery from AKI are still lacking. Numerous attempts have been made, using animal models and human samples, to uncover the underlying mechanisms of AKI, to identify therapeutic targets, and to discover disease biomarkers [ 11 , 12 , 13 , 14 , 15 , 16 , 17 ]. However, studies of human AKI in a controlled clinical setting with cell type-specific gene expression resolution have been lacking. Although AKI is uniformly defined by changes in serum creatinine levels and/or urinary output, previous studies suggest a vast underlying heterogeneity and complexity of AKI, with an unknown number of AKI subtypes, suggesting that personalized approaches to the treatment of AKI may be needed [ 15 , 18 , 19 , 20 ]. Most recently, the question of AKI subtypes was intensively debated when high incidence rates of AKI were observed in individuals with COVID-19 [ 21 , 22 , 23 , 24 ].
In particular, the question was raised whether COVID-19 entails a specific molecular subtype of AKI, through either renal viral tropism or other systemic effects [25, 26, 27, 28, 29]. Single-cell gene expression approaches provide powerful tools to investigate cell type-specific changes and cellular interactions and thus may help to delineate potential molecular subtypes of AKI. Recent mouse studies underlined the potential of single cell resolution for our understanding of AKI and revealed new molecular cell states associated with AKI [11, 12, 30]. Here, we present a comparative single-cell census of the human kidney in individuals with critical illness-associated AKI compared to controls without AKI. Methods Study cohort For this study, we collected post mortem biopsies from eight patients with AKI and four control patients. AKI patients (sample names AKI 1–8) were enrolled in the study if they met clinical criteria for severe AKI (as defined by KDIGO criteria for AKI stage 2 or stage 3) within 5 days prior to sampling and if they developed AKI in a clinical setting of critical illness, severe respiratory infections, and systemic inflammation. All post mortem samples were collected on intensive care units of Charité-Universitätsmedizin Berlin, Germany. The control samples comprised three specimens of tumor-adjacent normal tissue from three patients (sample names Control-TN 1–3) and three post mortem biopsy specimens from one brain-dead patient taken at three different time points (15, 60, and 120 min after cessation of circulation; sample names Control-15 min, Control-60 min, Control-120 min) to account for post mortem effects. Samples Control-TN1-3 were collected during elective tumor nephrectomies performed at Charité-Universitätsmedizin Berlin in collaboration with the Department of Urology. The remaining control samples (post mortem biopsies Control-15 min, Control-60 min, and Control-120 min) were collected on an intensive care unit of Charité-Universitätsmedizin Berlin. Specimen collection After consent of next of kin, post mortem biopsies were collected using 18G biopsy needles within 2 h of death from individuals who had died in a clinical setting of critical illness on intensive care units of Charité-Universitätsmedizin Berlin (ethics approval EA2/045/18). Control tissue (tumor-adjacent normal tissue) was collected during elective tumor nephrectomies (ethics approval EA4/026/18). Kidney specimens were either stored in pre-cooled RNAlater at 4 °C for 24 h and then stored at −80 °C (for snRNA-seq) or in 4% formaldehyde (for histopathological studies and in situ hybridizations). Single-nuclei sequencing Kidney specimens subjected to snRNA-seq were kept at 4 °C at all times. All specimens were treated as described in detail in Leiz et al. [31]. The main steps were as follows: Specimens were thoroughly minced in nuclear lysis buffer 1 (nuclear lysis buffer (Sigma) + Ribolock (1 U/µl) + VRC (10 mM)) and homogenized using a Dounce homogenizer with pestle A (Sigma D8938-1SET), filtered (100 µm), homogenized again (Dounce homogenizer with pestle B), filtered through a 35-µm strainer, and centrifuged (5 min, 500 g). The pellet was then resuspended in nuclear lysis buffer 2 (nuclear lysis buffer + Ribolock (1 U/µl)). To remove debris from the suspension, we underlaid the suspension with lysis buffer containing 10% sucrose and 1 U/µl of Ribolock. After centrifugation (5 min, 500 g), the supernatant and debris were carefully removed.
Pelleted nuclei were resuspended in PBS/0.04% BSA + Ribolock (1 U/µl), filtered through a 20-µm strainer, and stained with DAPI. All samples were subjected to single-nuclei sequencing following the 10x Genomics protocol for Chromium Next GEM Single Cell 3' v3.1 chemistry, targeting 10,000 nuclei. The obtained libraries were sequenced on Illumina HiSeq 4000 sequencers (paired-end). Digital expression matrices were generated using the Cellranger software version 3.0.2 with --force-cells 10000 against a genome composed of the human HG38 genome (GRCh38 3.0.0) and the genome of SARS-CoV-2. Single-nuclei sequencing data analysis Initial filtering was performed by excluding nuclei with more than 5% mitochondrial reads or fewer than 500 detected genes. Nuclei from all samples passing this filter were then analyzed using Seurat's best-practice workflow for data integration based on the reciprocal PCA approach with default parameters. Emerging clusters were then analyzed for marker gene expression using Seurat's FindAllMarkers function and subsequently assigned to the major renal cell types. Each major cell type was then subclustered using Seurat's best-practice standard workflow for data integration, followed by RunUMAP(seu, dims = 1:30), FindNeighbors(seu, dims = 1:30), and FindClusters(seu, resolution = 0.5) on the integrated assay. Marker genes for all emerging clusters were calculated. The next steps aimed to identify destroyed nuclei and doublets. In the initial clustering, destroyed nuclei clustered clearly away from the major kidney cell types and showed a significantly reduced complexity of gene expression (nUMI, nGene) compared to the major cell types. Removal of destroyed nuclei led to a reduction of initially 121,933 nuclei to 113,137 nuclei in our dataset. Doublets were detected as subclusters of a major cell type that showed canonical marker genes of another major cell type (e.g., TAL marker gene expression in a PT subcluster); these subclusters also clustered clearly away from the other subclusters of the respective major cell type. Canonical marker genes used for doublet removal were NPHS2 (Podo), LRP2 (PT), AQP1, CLCNKA (tL), SLC12A1 (TAL), SLC12A3 (DCT), CALB1 (CNT), AQP2 (CD-PC), FOXI1 (CD-IC), PECAM1 (EC), PTPRC (Leuk), and ACTA2 (IntC). Doublet removal led to a reduction of nuclei as stated in the table in Additional file 1: Fig. S1. Destroyed nuclei and doublets were removed, and the whole clustering process was repeated to avoid any influence of the excluded cells on the clusterings. After the identification of major cell types (Fig. 1), all major cell types were subclustered separately using RunPCA, FindNeighbors, and FindClusters with resolution 0.5. This resulted in the subclusters presented in Figs. 3, 4, and 6 and the corresponding supplemental figures. Fig. 1 A single-cell census of human AKI. A Overview of the study and samples subjected to snRNA-seq and bulk RNA-seq. B Major cell types of the human kidney (Podo, podocytes; PT, proximal tubule; tL, thin limb; TAL, thick ascending limb; DCT, distal convoluted tubule; CNT, connecting tubule; CD-PC/IC-A/IC-B, collecting duct principal/intercalated cells type A and B; Leuk, leukocytes; IntC, interstitial cells). C Uniform manifold approximation and projection (UMAP) of all kidney cells from snRNA-seq from individuals with AKI and controls. D Heatmap of marker genes of each major cell type. Examples of known cell type marker genes are indicated.
Expression values are shown as per-gene maximum-normalized counts per million (CPM). E Relative abundances of major cell types in individuals with AKI and controls (mean and standard deviation) (upper panel) and stacked bar plots for all individuals and major cell types (lower panel). F Principal component analysis of all study individuals using pseudobulk data per individual from all proximal tubule (PT) cells and PT-specific highly variable genes (see Additional file 1: Fig. S2 for other cell types and whole tissue). COVID-associated AKI cases are highlighted by gray arrows. Enrichment testing for all major cell types was performed by calculating relative abundances of each generated subcluster per patient as percent of the respective major cell type. A p-value was computed using a Mann–Whitney U test comparing relative abundances in AKI samples versus control samples for each cluster and each major cell type. Enrichment scores were calculated by -log10-transforming the p-values and were considered significant at p < 0.05. The three replicates from Control-PM were averaged to one value per comparison. RNA extraction and alignment for bulk RNA sequencing The RNeasy Micro Kit (#74004, Qiagen, Hilden, Germany) was used to extract total RNA from kidney biopsies stored at −80 °C. For tissue disruption, frozen biopsy samples were transferred to ceramic bead-filled tubes (#KT03961-1-102.BK, Bertin Technologies, Montigny-le-Bretonneux, France) containing 700 µl QIAzol Lysis Reagent (#79306, Qiagen) and homogenized for 2 × 20 s at 5000 rpm using a Precellys 24 tissue homogenizer (Bertin Technologies). The lysate was incubated for 5 min at room temperature, mixed with 140 µl chloroform, and centrifuged (4 °C, 12,000 × g, 15 min). The supernatant was transferred to a fresh tube. Subsequent RNA purification was performed according to the "Purification of Total RNA from Animal and Human Tissues" protocol, starting at step 4. RNA concentration and integrity were evaluated with a NanoDrop spectrophotometer (Thermo Scientific, Waltham, MA) and a 2100 Bioanalyzer instrument (Agilent Technologies, Santa Clara, CA). Paired-end RNA sequencing (TruSeq stranded mRNA, 2 × 100 bp) was performed on a NovaSeq 6000 SP flow cell. The resulting FASTQ files were aligned using STAR [32] against the same genome as for the snRNA-seq data (GRCh38 3.0.0 with SARS-CoV-1/2); reads were then counted using featureCounts [33] with -p -t exon -O -g gene_id -s 0. Differential gene expression analysis Differential gene expression analysis was performed using the DESeq2 [34] package (version 1.28.1). Inputs to DESeq2 were count matrices generated by summing the counts of all cells per sample and major cell type (for snRNA-seq) or the raw count matrices (for bulk RNA-seq). For snRNA-seq, only genes expressed in at least 10% of cells or in at least 500 cells were considered. The DESeq dataset was generated by dds <- DESeqDataSetFromMatrix(countData = my.count.data, colData = col.data, design = ~ condition + perc.mt), followed by dds <- estimateSizeFactors(dds, type = "poscounts") and dds <- DESeq(dds). "condition" was either AKI or control; my.count.data was generated for each major cell type separately (snRNA-seq). Desired comparisons were derived by my.result <- results(dds, contrast = c("condition", "Control", "AKI")). Results were then filtered by requiring an adjusted p-value < 0.001 and an absolute log2 fold change larger than 1.
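To make the pseudobulk procedure above easier to follow, here is a consolidated R sketch of the reported DESeq2 calls; the toy count matrix and metadata are placeholders standing in for the per-cell-type pseudobulk matrices, so this illustrates the workflow rather than reproducing the authors' exact script:

library(DESeq2)

# Toy pseudobulk input: genes x samples, with counts summed over all nuclei
# of one major cell type per sample (placeholder data)
my.count.data <- matrix(rpois(2000, lambda = 20), nrow = 200,
                        dimnames = list(paste0("gene", 1:200), paste0("s", 1:10)))
col.data <- data.frame(condition = factor(rep(c("AKI", "Control"), each = 5)),
                       perc.mt = runif(10, 1, 5),
                       row.names = paste0("s", 1:10))

# Model as described: condition plus percent mitochondrial reads
dds <- DESeqDataSetFromMatrix(countData = my.count.data, colData = col.data,
                              design = ~ condition + perc.mt)
dds <- estimateSizeFactors(dds, type = "poscounts")
dds <- DESeq(dds)
my.result <- results(dds, contrast = c("condition", "Control", "AKI"))

# Filter as reported: adjusted p-value < 0.001 and |log2 fold change| > 1
sig <- my.result[which(my.result$padj < 0.001 &
                       abs(my.result$log2FoldChange) > 1), ]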
Pathway enrichment analysis Differentially expressed genes were analyzed separately for genes up- and downregulated in AKI versus controls using the MSigDB [35, 36] web interface with the hallmark gene sets and the curated gene sets (C2) from BioCarta [37], KEGG [38], Reactome [39], and WikiPathways [40]. Partition-based graph abstraction The fully integrated assay from the Seurat object of the respective cell type was imported into scanpy [41] version 1.5.0. The PCA and the neighborhood graph were computed with sc.tl.pca(data, use_highly_variable = False) and sc.pp.neighbors(data, n_pcs = 30). Healthy subclusters (e.g., PT-S1-3) were summarized into one category (e.g., Healthy PT) to exclude the anatomical axis from this analysis. The PAGA graph and diffusion pseudotime (dpt) were calculated. The root of diffusion pseudotime was defined to be in the first element of the healthy cells (e.g., the first element in healthy PT). To further exclude the anatomic axis from pseudotime analysis, all dpt values for the healthy cells were set to 0. The PAGA graph was then plotted with threshold 0.15, highlighting dpt; dpt was min–max normalized for plotting. Cross-species approach Mouse ischemia–reperfusion data were downloaded from GEO (GSE139107) [11]; metadata of the PT subclustering from the original publication was kindly provided by Ben Humphreys. Multinomial classification was performed using the glmnet [42] package version 2.0-16. The training set included randomly selected cells (2/3 of cells) from the injured mouse PT subclusterings. Genes used in the training were highly variable features of the mouse AKI data. Human orthologous genes were obtained using BioMart. Test data were the remaining 1/3 of cells from the injured mouse PT subclusters. glmnet produces a series of models across values of the regularization parameter lambda, which determines how strongly model complexity is penalized. Each model was evaluated on the test data, and the model with the highest test accuracy was selected and then applied to our human PT subclusters PT-New 1–4. Validation of snRNA-seq results using an independent dataset The Kidney Precision Medicine Project recently published a preprint with multi-level omics data from kidneys of multiple disease etiologies (e.g., chronic kidney disease, acute kidney injury, control tissue) [43]. This study also included samples of patients with AKI and control samples. Hence, the cohort from this study provided an opportunity to validate our own findings of AKI-associated clusters in the setting of critical illness and systemic inflammation. For this, we downloaded snRNA-seq data from Lake et al. [43] (search criteria: single-nucleus RNA-seq and "Aggregated Clustered Data"): a87273b3-ec0d-4419-9253-2dc1dcf4a099_WashU-UCSD_KPMP-Biopsy_10X-R_05142021.h5Seurat from atlas.kpmp.org. Following the information from Suppl. Table S1 of Lake et al., we were able to include five AKI cases (IDs: 30-10034, 32-2, 33-10005, 32-10034, 33-10006) and three control samples from tumor nephrectomies (IDs: 18-162, 18-142, 18-312). Unbiased clustering and marker gene calculation were performed as described for our own data. Of note, the mentioned AKI specimens represented cases without critical illness and systemic inflammation.
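The cross-species glmnet classification described above can be condensed into a short R sketch. The matrices below are random stand-ins for the mouse training cells and the human PT-New cells over shared ortholog genes, and the label names are illustrative, so treat this as a schematic of the described procedure rather than the authors' code:

library(glmnet)

# Placeholder inputs: x = mouse injured-PT cells x ortholog genes,
# y = injured-subcluster labels, x.human = human PT-New cells x same genes
set.seed(1)
x <- matrix(rnorm(300 * 50), nrow = 300)
y <- factor(sample(c("injured_S1S2", "injured_S3", "severe", "failed_repair"),
                   300, replace = TRUE))
x.human <- matrix(rnorm(40 * 50), nrow = 40)

train <- sample(nrow(x), size = round(2/3 * nrow(x)))  # 2/3 training split
fit <- glmnet(x[train, ], y[train], family = "multinomial")

# Evaluate every lambda on the held-out 1/3; keep the most accurate model
pred <- predict(fit, newx = x[-train, ], type = "class")
acc <- apply(pred, 2, function(p) mean(p == as.character(y[-train])))
best.lambda <- fit$lambda[which.max(acc)]

# Apply the selected model to the human injured PT cell states
human.pred <- predict(fit, newx = x.human, s = best.lambda, type = "class")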
RNA in situ hybridization The RNAscope 2.5 HD reagent kit-brown (#322300, Advanced Cell Diagnostics (ACD), Newark, CA, USA) was used to perform chromogenic in situ hybridization on formalin-fixed paraffin-embedded kidney sections with probes directed against IFITM3 (custom-ordered, ACD) and IGFBP7 (#316681, ACD). The RNAscope multiplex fluorescent reagent kit v2 (#323100, ACD) was used to perform fluorescent in situ hybridization on formalin-fixed paraffin-embedded kidney sections with probes directed against ALDOB (#422061-C3, ACD), LRP2 (#532391, custom-ordered in C3 channel, ACD), MET (#431021-C3, ACD), MYO5B (#825871, ACD), NQO1 (#555671-C2, ACD), SERPINA1 (#435441-C2, ACD), SLC2A1 (#423141-C2, ACD), SLC12A1 (#577391, ACD), and VCAM1 (#440371, ACD). Kidney slices were fixed in 4% formaldehyde and embedded in paraffin by the Department of Pathology of Charité-Universitätsmedizin Berlin. Paraffin-embedded kidney slices were cut into 5-µm sections, plated on Superfrost Plus slides, air-dried overnight, baked for 1 h at 60 °C, cooled for 30 min, dewaxed, and air-dried again. For chromogenic assays, subsequent pretreatment and RNAscope assay procedures for all probes were performed according to the "Formalin-Fixed Paraffin-Embedded (FFPE) Sample Preparation and Pretreatment" and "RNAscope 2.5 HD Detection Reagent BROWN" user manuals (ACD documents #322452 and #322310) as recommended by the manufacturer. Sections were counterstained with hematoxylin before dehydrating and applying coverslips using a xylene-based mounting medium. Images of the hybridized sections were captured on a Leica DM2000 LED bright-field microscope. For fluorescent assays, pretreatment and RNAscope assay procedures for all probes were performed according to the "RNAscope Multiplex Fluorescent Reagent Kit v2" user manual (ACD document #323100) as recommended by the manufacturer. Sections were mounted with Dako Fluorescence Mounting Medium (#S3023, Dako, Carpinteria, CA, USA). Images were captured on a Zeiss Axio Imager 2 LSM 800 confocal scanning microscope. Results Single-nuclei RNA sequencing from human kidney samples enables the investigation of cell type-specific gene expression changes in acute kidney injury To assess cellular responses in AKI, we conducted a single-cell transcriptome census of human AKI utilizing single-nuclei RNA sequencing (snRNA-seq) of kidney samples from individuals with AKI and control individuals without AKI (Fig. 1A, Additional file 1: Fig. S1). Kidney samples from individuals with AKI who succumbed to critical illness were obtained within 1–2 h post-mortem with consent of next of kin. All AKI individuals (n = 8, AKI 1–8) had met clinical criteria for severe AKI (as defined by KDIGO criteria for AKI stage 2 or stage 3) within 5 days prior to sampling. All individuals had developed AKI in a clinical setting of critical illness, severe respiratory infections, and systemic inflammation, including four cases of COVID-19-associated AKI (Additional file 2: Table S1). To control for baseline characteristics inherent to human kidneys obtained under clinical conditions and to quantitate the extent of post-mortem effects on gene expression, we used control kidney samples. They included normal kidney tissue collected during tumor nephrectomies (n = 3; Control-TN1–TN3; for clinical parameters of all individuals see Additional file 2: Table S1).
In addition, we obtained post-mortem kidney tissue at three different time points (15 min, 60 min, 120 min) after the cessation of circulation (Control-15 min, Control-60 min, Control-120 min) from a brain-dead individual without clinical evidence of AKI. Single-nuclei RNA-seq of all samples resulted in 106,971 sequenced nuclei with a median of 2139 detected genes and 4008 unique transcripts per nucleus (Additional file 1: Fig. S1). Joint unbiased clustering and cell type identification with known marker genes allowed the identification of the expected major kidney cell types (Fig. 1B–D). There were no overt differences in major cell type abundances between AKI and controls (Fig. 1E). Principal component analysis (PCA) indicated that the presence of AKI (versus absence of clinical AKI) was the main driver of cell type-specific and global gene expression differences between the samples (Fig. 1F, Additional file 1: Fig. S2). In contrast, PCA did not identify a major impact of the sampling method (tumor versus post-mortem biopsy), the sampling time after cessation of circulation (15 min, 60 min, or 120 min), or the presence of COVID-19-associated AKI (versus AKI associated with other respiratory infections) on global or cell type-specific gene expression (Fig. 1F, Additional file 1: Fig. S2). Interestingly, we observed heterogeneity of kidney cell gene expression between different individuals with AKI (Fig. 1F). Kidney tubular epithelial cells from different parts of the nephron show strong gene expression responses to AKI Kidney ischemia–reperfusion injury in mice, the most frequently applied experimental model of human AKI, results in a predominant injury of cells of the proximal tubule (PT), the most abundant cell type of the kidney. Therefore, many previous studies focused on this cell type [13, 14]. However, in humans, there is considerable uncertainty regarding the impact of AKI on molecular states of different kidney cell types [44]. To assess the cell type-specific response to AKI systematically, we performed differential gene expression analysis within the major kidney cell types comparing AKI to control kidneys using DESeq2 [34] (see Additional file 3: Table S2 for a full list of differentially expressed genes per cell type). Profound transcriptomic responses to AKI were observed in kidney tubule cells of the PT, the thick ascending limb of the loop of Henle (TAL), the distal convoluted tubule (DCT), and connecting tubules (CNT), cell types that reside predominantly in the cortex and outer medulla of the kidney, regions that are known to be particularly susceptible to ischemic or hypoxic injury [13, 14, 15, 45] (Fig. 2A). In contrast, less pronounced transcriptomic responses to AKI were observed in thin limbs (tL), collecting duct principal cells (CD-PCs), and collecting duct intercalated cells (CD-ICs), consistent with the predominant localization of these cell types in the inner medulla of the kidney, which is adapted to a low-oxygen environment, has lower energy expenditure, and is less susceptible to hypoxia or ischemia when compared to more cortical regions [45]. Podocytes, endothelial cells, and interstitial cells also displayed less pronounced transcriptomic responses in AKI. Fig. 2 Cell type-specific responses of kidney cells to acute injury. A Absolute numbers of differentially expressed (DE) genes upregulated and downregulated in AKI versus controls within major kidney cell types.
B Dot plot displaying the degree of differential expression for known injury marker genes and housekeeping control genes (actin beta (ACTB), ataxin 2 (ATXN2), and RNA polymerase III subunit A (POLR3A)). C, D Dot plots for top enriched pathways (defined by FDR) in genes upregulated (C) and downregulated (D) in AKI versus controls. Note that although the number of DE genes varied strongly between the cell types (e.g., Podo vs. PT), we observed similar enrichment results in several cell types. HM, Molecular Signatures Database (MSigDB) hallmark gene sets; MSigDB canonical pathway gene sets derived from: RC, Reactome; WP, WikiPathways; and KEGG, Kyoto Encyclopedia of Genes and Genomes. Among differentially expressed genes were known markers of renal cell stress, which encode proteins that have been proposed as kidney injury markers, such as neutrophil gelatinase-associated lipocalin/lipocalin 2 (LCN2), kidney injury molecule 1 (HAVCR1), and insulin-like growth factor binding protein 7 (IGFBP7) [46]. Importantly, our data provided the opportunity to identify the major cellular sources where these transcripts were synthesized in response to injury. For instance, consistent with previous reports on mouse and human AKI, LCN2 was primarily upregulated in CNT and CD-PC [12, 47], while HAVCR1 was primarily upregulated in PT [12, 48], although we also observed unexpected differential expression in TAL and DCT (Fig. 2B). Secreted phosphoprotein 1 (SPP1), encoding the secreted glycoprotein osteopontin, was found to be upregulated in virtually all non-leukocyte kidney cell types. This was strongly reminiscent of the situation in mouse AKI, where osteopontin upregulation was similarly observed in multiple kidney cell types [12], and where osteopontin inhibition attenuated renal injury [49], suggesting a conserved, targetable AKI pathway. IGFBP7 protein was previously found to be primarily of PT origin in diseased human kidneys [50]. Consistently, we found an upregulation of IGFBP7 mRNA in PT cells (Fig. 2B). However, we also found IGFBP7 to be upregulated in podocytes and TALs (Fig. 2B), findings that we validated by IGFBP7 in situ hybridization (Additional file 1: Fig. S3). We also observed a strong interferon gamma response in several cell types in AKI (Fig. 2C) and validated this finding by IFITM3 in situ hybridization (Additional file 1: Fig. S3). This indicates that our single-cell transcriptome database is consistent with prior knowledge and provides an opportunity to uncover novel information regarding the cellular origin of AKI-associated transcripts. Pathway analyses of differentially expressed genes indicated that a proportion of genes upregulated in AKI were associated with inflammatory response-associated pathways (tumor necrosis factor alpha, interferon gamma, and interleukin signaling), hypoxia response, and epithelial-to-mesenchymal transition (EMT, Fig. 2C, Additional file 4: Table S3). Importantly, our analyses indicated that most functional pathways were upregulated simultaneously in multiple kidney tubule cell types, suggesting common AKI response patterns across the nephron. Several studies have indicated an AKI-associated metabolic shift in tubular epithelia and a downregulation of genes associated with tubular transport processes [45, 51]. Consistently, we observed that genes downregulated in AKI were mostly related to molecule transport and metabolism (Fig. 2D).
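The over-representation logic behind such pathway analyses is compact enough to write down directly. Below is a minimal R sketch of a one-sided Fisher's exact test for a single gene set; the study used the MSigDB web interface, so this local equivalent and its placeholder gene vectors are for illustration only:

# Placeholder gene universes
all.genes <- paste0("g", 1:5000)         # expressed background
de.genes <- sample(all.genes, 300)       # e.g., genes upregulated in AKI
pathway.genes <- sample(all.genes, 150)  # one MSigDB-style gene set

k <- sum(de.genes %in% pathway.genes)    # DE genes inside the set
K <- length(de.genes)                    # all DE genes
m <- sum(all.genes %in% pathway.genes)   # set genes in the background
N <- length(all.genes)                   # background size

# 2 x 2 table: rows = (in set / not in set), columns = (DE / not DE)
tab <- matrix(c(k, K - k, m - k, N - K - m + k), nrow = 2)
fisher.test(tab, alternative = "greater")$p.value  # over-representation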
Since our cohort included four individuals with COVID-19-associated AKI, we compared their kidney cell type-specific gene expression with that of individuals with non-COVID-19 respiratory infection-associated AKI. Only a few differentially expressed genes were identified when applying the same cut-off values for fold change and adjusted p-value as for the comparison between AKI and control samples (Additional file 1: Fig. S4). This suggests that the major transcriptomic responses of kidney cells in COVID-19 were not substantially different from those in other forms of AKI (see Additional file 5: Table S4 for the respective gene lists). Interestingly, a relaxation of the criteria for differential expression (adjusted p-value < 0.05) identified the highest number of potential gene expression differences between COVID-19 AKI and non-COVID-19 AKI in the DCT. This is in contrast to recent publications, which report the strongest gene expression responses to COVID-19 in podocytes and PTs [52, 53] (see Additional file 1: Fig. S4, Additional file 5: Table S4 and Additional file 6: Table S5). Importantly, the genes and pathways that were differentially regulated in AKI versus control according to the single-nuclei sequencing data displayed concordant regulation in bulk mRNA sequencing from separate kidney samples of the same patients, providing additional validation (Fig. 2B, C, D). Profound effects of AKI on kidney cell state abundance To achieve a more fine-grained analysis of cellular subclasses, we performed subclusterings of the major kidney cell types. We thereby derived 74 kidney cell populations based on their transcriptomes, which included known cellular subtypes of kidney cells (e.g., S1, S2, and S3 segments of the PT) and additional novel cell populations (designated as "New" cell populations, Fig. 3A, Additional file 1: Fig. S5-8, Additional file 7: Table S6). These "New" cell populations were still attributable to major kidney cell types based on their transcriptomes (Fig. 1C), but they were not characteristic of the known anatomic subtypes, suggesting that they represent injury-related cell states. Fig. 3 AKI leads to depletion of differentiated cell states and enrichment of "New" cell states within the kidney epithelium. A, B UMAP plot of subclustered kidney tubular epithelial cells (A) and their enrichment or depletion in AKI based on statistical testing of relative abundances within the respective major cell type (B) (see the "Methods" section for details). In A, cellular subtypes of the kidney tubule are annotated as indicated. To enhance visibility, the color code is indicated below the respective labeling. In B, the same UMAP plot as in A is color-coded based on enrichment (red) or depletion (blue) in AKI individuals. C, D Analogous plots for the subclustering of endothelial cells (ECs). Please note the emergence of one AKI-associated subcluster, EC-New 1. E, F Analogous plots for subclusterings of interstitial cells.
PT-S1-3, PT S1-3 segments; c/mTAL, cortical/medullary TAL; TL, DTL, thin limb and descending thin limb; CCD, OMCD, IMCD, cortical/outer and inner medullary collecting duct principal cells; lymphEC, lymphatic EC; GEC, glomerular EC; FenEC 1–4, fenestrated endothelial cell types; DVR, descending vasa recta; MC, mesangial cells; VSMC, vascular smooth muscle cells; REN, renin-transcribing cells; Fibro, fibroblasts; NEUR, neuronal cells. We analyzed which of the identified cell subpopulations differed in abundance in individuals with AKI, yielding depleted and enriched subpopulations (Fig. 3B). Profound depletion in AKI was observed within cells of the PT (in particular those representing the S3 segment), consistent with its known susceptibility to injury and its tendency to undergo dedifferentiation in AKI [11, 13, 14, 54]. Unexpectedly, in addition to PT, differentiated medullary TAL, DCT, and CNT cells were also substantially depleted in AKI. Conversely, profound enrichment in AKI was observed for the "New" cell subpopulations associated with these same cell types, indicating that PT, TAL, DCT, and CNT displayed the most profound responses to AKI and confirming the notion that "New" subpopulations represent injury-associated cell states (Fig. 3B). "New" subpopulations within cells of the collecting duct (CD-PC, CD-IC-A, CD-IC-B) and the ascending and descending thin limbs (ATL, DTL) were also enriched in AKI, although they represented only small subpopulations (Fig. 3B), suggesting that these cell types are less susceptible or reside in less susceptible regions of the kidney. Non-epithelial cell types of the kidney, such as endothelial cells, interstitial cells, and leukocytes, displayed no enrichment or depletion in AKI, with the exception of one subtype of endothelial cells (EC-New 1), which was enriched in AKI and showed a transcriptional profile resembling endothelia of the descending vasa recta (Fig. 3C–F; Additional file 1: Fig. S5). Analyses of AKI-induced cell states suggest four distinct injury response patterns We next conducted further analyses to characterize the "New" cell subpopulations detected within the tubular epithelial compartment of the kidney. Quantification of the four "New" cell clusters associated with the PT (PT-New 1–4) indicated that almost one third (31.8%) of PT cells in AKI samples belonged to these clusters (Fig. 4A). We next identified marker genes for PT-New 1–4 (Fig. 4B) and performed pathway analysis [55]. Enriched gene sets included oxidative stress signaling and the nuclear transcription factor erythroid 2-related factor 2 (NRF2) pathway (PT-New 1), the hypoxia response pathway (PT-New 2), the interferon gamma response and genes encoding ribosomal proteins (PT-New 3), as well as genes associated with epithelial-mesenchymal transition (EMT) (PT-New 4) (Fig. 4B; see the MSigDB hallmark and canonical pathway gene sets for pathway definitions; Additional file 7: Table S6). Nevertheless, PT-New 1–4 showed some overlap, and trajectory analysis using partition-based graph abstraction (PAGA) [56] suggested hierarchical relationships between healthy PT cells and cells representing PT-New 1–4, with PT-New 4 displaying the gene expression signature most distant from healthy PT. Fig. 4 PT AKI-enriched cell states reveal four distinct injury response patterns. A UMAP plot of the subclustering of the PT with the anatomical PT segments S1-3 (PT-S1-3) and the AKI-associated cell states PT-New 1–4 (also depicted in Fig. 3A).
Below the UMAP are a bar plot displaying the relative abundances of PT-New 1–4 with respect to all AKI PT cells and a trajectory analysis using partition-based graph abstraction (PAGA) highlighting diffusion pseudotime. Line widths of the connecting edges represent statistical connectivity between the nodes [56]. Healthy PT-S1-3 were summarized to healthy PT for this analysis. B Heatmap of selected marker genes for the identified PT cell subpopulations from the marker gene analysis (Additional file 7: Table S6) and published markers for the anatomical segments PT-S1-3. C Plots display the relative abundance of PT-New 1–4 as a percentage of all PT cells. D Plot displays the relative abundance of combined PT-New 1–4 as a percentage of all PT cells. E Individual abundances of PT-New 1–4 for control and AKI individuals. P-value: * < 0.05, ** < 0.01, *** < 0.001; n.s., not significant. Control-PM, pooled samples (Control-15 min, Control-60 min, Control-120 min) of the post mortem non-AKI control individual. We compared PT-New 1–4 to previously identified PT-derived injured cell states in mouse renal ischemia–reperfusion injury, designated as "injured PT S1/S2 cells," "injured PT S3 cells," "severe injured PT cells," and "failed repair" cells [11]. We trained a multinomial model on marker genes of these clusters using a cross-species mouse/human comparative approach (see the "Methods" section), which indicated that PT-New 1 (oxidative stress) showed similarity to injured mouse S1/2 cells, whereas PT-New 2 (hypoxia) resembled injured S3 cells (Additional file 1: Fig. S9). PT-New 3 and PT-New 4 most closely resembled "failed repair" PT cells in mice (Additional file 1: Fig. S9). Cells from PT-New 3 and PT-New 4 expressed the EMT marker VIM. PT-New 4 cells were additionally marked by VCAM1, a marker that has previously been associated with the "failed repair" state of injured PT cells [57]. PT-New 1, PT-New 3, and PT-New 4 also showed high expression of interferon target genes (e.g., IFITM2, IFITM3) and the human leukocyte antigen HLA-A, which is consistent with the association of injured PT cells with immune responses and inflammation [57]. Together, these observations suggest that the "New" cell populations represent four distinct but hierarchically connected injured PT cell states. The individual and combined abundances of PT-New 1–4 were significantly increased in the AKI samples (Fig. 4C, D). Importantly, the distribution of PT-New 1–4 among individuals with AKI displayed marked heterogeneity (Fig. 4C, E). For instance, the relative abundance of PT-New 4 (EMT/"failed repair") varied by a factor of three among samples from different individuals with AKI (compare PT-New 4 between AKI 4 and AKI 7 in Fig. 4E). The combined injury-associated PT clusters (PT-New 1–4) made up between 20 and 45% of all proximal tubule cells in individuals with AKI (Fig. 4C). Together, these observations highlight the presence of recurrent AKI-associated cell states, but they also indicate substantial inter-individual heterogeneity. We next aimed to validate all "New" PT-associated cell states in injured kidney tissues using multi-channel RNAscope in situ hybridizations. Using kidney tissue sections from AKI and control patients, we performed co-staining for LDL receptor-related protein 2 (LRP2), a transcript encoding a canonical proximal tubule marker, and four PT-New 1–4 marker transcripts.
These marker transcripts were selected based on their strong and specific overexpression in one of the PT-New clusters (PT-New 1–4) and on their previously reported association with functional gene expression signatures of oxidative stress (PT-New 1, marker gene: NAD(P)H quinone dehydrogenase 1, NQO1), hypoxia (PT-New 2, marker gene: myosin-Vb, MYO5B), inflammation response (PT-New 3, marker gene: serpin family A member 1, SERPINA1), and EMT (PT-New 4, marker gene: vascular cell adhesion molecule 1, VCAM1) [58, 59, 60, 61] (Fig. 5A–D). As predicted, cells representing these four cell states were detectable as subsets of cells within proximal tubules. They were highly specific to AKI patients, with little or no representation in kidneys of control (non-AKI) individuals (Fig. 5A–D). PT-New 1–4 cells were distinct from each other and were mostly interspersed among other proximal tubule cells, but sometimes occurred as local clusters (e.g., PT-New 3 and PT-New 4, Fig. 5C). Fig. 5 Multi-channel in situ hybridizations confirm the presence of the four "New" cell states in PTs. A In situ hybridizations for the oxidative stress-related gene NQO1 (marker gene for PT-New 1), the hypoxia-associated gene MYO5B (marker gene for PT-New 2), and the canonical PT marker gene LRP2. Note that some MYO5B expression is observed in control samples, as expected from the CPM values presented in B. B Feature plots highlighting the expression of NQO1 and MYO5B in PTs (compare to Fig. 4A) as well as box plots showing the expression of the respective gene in PTs in control versus AKI samples. C In situ hybridizations for the inflammation response gene SERPINA1 (marker gene for PT-New 3), the EMT-associated gene VCAM1 (marker gene for PT-New 4), and the canonical PT marker gene LRP2. Scale bars as indicated. P-value: * < 0.05, ** < 0.01, *** < 0.001. CPM, counts per million. The abundance of injury response patterns varies among cell types of the kidney tubule We next turned to other kidney epithelial cell types and their response to injury. We compared AKI-enriched "New" cell states in tL, TAL, DCT, CNT, CD-PC, and CD-IC to those in PT (Fig. 6A–D, Additional file 1: Fig. S10-12). Remarkably, the transcriptomic responses of the different tubular epithelial cell types to AKI displayed a marked overlap. For instance, "New" cell populations residing in TAL (TAL-New 1–4) and DCT (DCT-New 1–4) displayed four injured cell states with marker genes and functional pathways similar to PT-New 1–4 (Additional file 8: Table S7, Additional file 1: Fig. S13). This suggests conserved injury responses across different kidney cell types. To assess potential transcriptional regulation within the "New" cell populations, we performed enrichment analysis using the ChIP enrichment analysis (ChEA) dataset [62] with Enrichr [55] (Additional file 9: Table S8). Among the top enriched transcription factors were NRF2 (New 1), hypoxia-inducible factor 1 subunit alpha (New 2), MYC (New 3), and Jun proto-oncogene (New 4). Fig. 6 AKI-associated cell states within the thick ascending limb (TAL). A UMAP plot of the TAL subclustering with the anatomical segments cTAL 1–3 and mTAL and the AKI-associated cell states TAL-New 1–4. Below the UMAP are a bar plot displaying the relative abundances of TAL-New 1–4 with respect to all AKI TAL cells and a trajectory analysis using partition-based graph abstraction (PAGA) highlighting diffusion pseudotime. Line widths of the connecting edges represent statistical connectivity between the nodes [56].
Healthy cTAL 1–3 and mTAL were summarized to healthy TAL for this analysis. B Heatmap of selected marker genes for the identified TAL cell subpopulations. C Plot displaying the relative abundances of TAL-New 1–4 with respect to the individual's TAL cells. D Relative abundances of the combined TAL-New 1–4 cells with respect to the individual's TAL cells. E Individual abundances of TAL-New 1–4 for control and AKI individuals. P-value: * < 0.05, ** < 0.01, *** < 0.001; n.s., not significant. Control-PM, pooled samples (Control-15 min, Control-60 min, Control-120 min) of the post mortem non-AKI control individual. The percentage of cells displaying AKI-associated "New" cell states varied markedly among major cell types of the kidney: 31.8% of PT cells, 36.4% of TAL cells, 59.6% of DCT cells, 43.5% of CNT cells, 2.1% of tL cells, 19.1% of CD-PCs, and 5.7% of CD-ICs (Figs. 4E and 6E, Additional file 1: Fig. S10-12). We could further validate our findings using an independent snRNA-seq dataset from kidneys of patients with non-critical illness-associated forms of AKI [43] (Additional file 1: Fig. S14). Interestingly, cells representing PT-New 1, PT-New 2, PT-New 4, TAL-New 1, TAL-New 2, and TAL-New 4 were clearly detectable within these AKI kidneys, whereas the inflammatory clusters (PT-New 3 and TAL-New 3) and interferon signatures were absent. This suggests that the "New 3" cell states might be specific to AKI in the setting of critical illness and/or systemic inflammation. Moreover, we used multi-channel RNAscope in situ hybridizations with probes against TAL-New 1–4 marker genes (Fig. 6B), again combining them with the canonical TAL marker gene encoding solute carrier family 12 member 1 (SLC12A1). We utilized marker transcripts for TAL-New 1 (oxidative stress signature; marker gene: aldolase B, ALDOB), TAL-New 2 (hypoxia signature; marker gene: solute carrier family 2 member 1, SLC2A1), TAL-New 3 (inflammation response signature; marker gene: serpin family A member 1, SERPINA1), and TAL-New 4 (EMT signature; marker gene: MET proto-oncogene, MET) [60, 63, 64, 65]. Similar to our findings in proximal tubules, TAL-New 1–4 cell states were confirmed to represent distinct cells that were interspersed within the TAL of AKI patients (Fig. 7A–D). Fig. 7 Multi-channel in situ hybridizations confirm the presence of the four "New" cell states in TALs. A In situ hybridizations for the oxidative stress-related gene ALDOB (marker gene of TAL-New 1), the hypoxia-induced gene SLC2A1 (marker gene of TAL-New 2), and the canonical TAL marker gene SLC12A1. Note that the TAL-New 1 abundance is not significantly different between AKI and control samples (compare to Fig. 6B). TAL-New 1 cells based on ALDOB expression are therefore also present in control samples. Moreover, ALDOB is also expressed in PT cells. B Feature plots highlighting the expression of ALDOB and SLC2A1 in TALs (compare to Fig. 6A) as well as box plots showing the expression of the respective genes in TALs in control versus AKI samples. C In situ hybridizations for the inflammation response gene SERPINA1 (marker gene of TAL-New 3), the EMT-associated gene MET (marker gene of TAL-New 4), and the canonical TAL marker gene SLC12A1. Scale bars as indicated. P-value: * < 0.05, ** < 0.01, *** < 0.001. CPM, counts per million. Similar to the PT, we also observed inter-individual heterogeneity for the AKI-associated cell states in other cell types of the kidney tubule (Fig. 6E, Additional file 1: Fig. S10-12).
Although clusters similar to PT-New 1–4 were present in the other kidney tubule cell types, their relative abundances were cell type-dependent. In PTs and DCTs, the most abundant AKI-associated cell state was that associated with EMT (PT-New 4 and DCT-New 4). In contrast, the most abundant TAL cell state was TAL-New 3 (interferon gamma signaling-associated). We conclude that injury responses in different epithelial cell types of the kidney associate with common molecular pathways and marker genes, although they display cell type-specific and inter-individual heterogeneity. Discussion This study identifies a strong impact of human AKI associated with critical illness on the kidney transcriptome at single-cell resolution. We provide an atlas of single-cell transcriptomes and uncover novel AKI-induced transcriptomic responses at unprecedented cellular resolution and transcriptional depth. We find that the dominant AKI-associated transcriptomic alterations reside within the different cell types of the kidney's tubular epithelium, with surprisingly few transcriptomic alterations in other cell types. We uncover four AKI-associated "New" transcriptomic cell states, which emerge abundantly in PT, TAL, and DCT and display a remarkable overlap of marker genes and enriched molecular pathways between these different tubular epithelial cell types, suggesting common injury mechanisms. We were able to validate all four cell states in PTs and TALs using multi-channel in situ hybridization. Finally, we report a strong transcriptional heterogeneity among individuals with AKI, which is explained by the inter-individual differences in cell type-specific abundances of injury-associated cell states. Our study provides insights into the pathophysiology of human AKI and indicates injury-associated responses in several different types of tubular epithelial cells. Previous studies of AKI in animal models (mostly ischemia–reperfusion injury in mice) have focused on the PT because cells of the PT are abundant and because PTs display marked susceptibility and responsiveness to ischemic injury [13, 14, 44, 45, 66, 67, 68]. Fewer studies have examined effects of AKI on other cell types. However, some previous rodent studies also showed evidence of injury in distal parts of the kidney tubule, including TALs [69] and collecting ducts [70], similar to what we demonstrate here in human AKI. Presumed mechanisms of tubular injury and tubular transcriptome responses are related to the high metabolic demands of tubular cells, in particular PTs and TALs, which become overwhelmed in the setting of ischemia or hypoxia [44, 69, 71, 72]. Tubular stress in this setting is documented by de novo induction of injury-associated transcripts and downregulation of differentiation markers (e.g., solute transporters). Most recently, transcriptome studies at single-cell resolution in mouse AKI models confirmed these findings and indicated predominant cellular responses in the proximal tubule compartment of the kidney [11, 12, 30]. Our human AKI data are consistent with the proximal tubular responses described in mice, but they suggest an unexpectedly widespread molecular response in other types of kidney tubular epithelia, including TAL, DCT, CNT, and collecting duct. It is tempting to assume that this difference reflects the more complex, multifactorial pathogenesis of AKI in critically ill patients compared to ischemia–reperfusion models, but additional interspecies differences cannot yet be excluded.
Moreover, in the PT, the chronological order of appearance of the injured clusters seems to differ from that in mice (e.g., injured PT-S1/2, PT-S3, and failed-repair cells were present at the same time) [11]. This might highlight molecular processes that differ between humans and mice, or it might be due to constant, ongoing injury in humans in contrast to renal injury induced experimentally at a single defined time point in mouse models. Given the observed induction of AKI-associated cell states in different segments of the kidney tubule, questions arise regarding their functional significance. It is noteworthy that we found a particularly pronounced activation of NRF2 target genes in the "New 1" clusters in several cell types (PT-New 1, TAL-New 1, DCT-New 1, CNT-New 1, CD-PC-New 1, TL-New 1). NRF2 signaling is induced by oxidative stress, plays a role in the induction of antioxidative genes, and has been associated with protection from kidney injury [73, 74, 75]. Trajectory analyses and comparative genomics indicated that these cells were likely derived from S1/2 segments of the PT and represent an early stage of injury. Cells of the "New 2" cluster exhibited a hypoxia signature. Hypoxia has also previously been recognized as an important mechanism of AKI, due to low baseline tissue oxygen concentrations in the kidney, which further decline under conditions leading to AKI. Induction of hypoxia-inducible genes through stabilization of hypoxia-inducible transcription factors in different kidney cell populations, or selectively in TAL or PT cells, was found to ameliorate ischemic or toxic AKI [45, 76]. In contrast, cells of the "New 3" and "New 4" clusters expressed an EMT signature and pro-inflammatory genes and resembled cells previously described as a "failed repair" state, which is associated with progression to kidney fibrosis [11, 57]. In trajectory analyses, these cells were transcriptionally most distant from the healthy tubular epithelium in PT and TAL. Our study provides evidence for patient-specific individual compositions of AKI-associated cell states, but it does not support the existence of strictly distinct molecular subtypes of AKI. Nevertheless, larger cohorts will be needed to confirm this notion. Our study included patients with COVID-19-associated AKI. There is an ongoing debate on the mechanisms of AKI in COVID-19, particularly with regard to whether there is a COVID-19-specific kidney pathophysiology that is different from other critical illness-associated forms of AKI [26, 28, 52, 77, 78]. Our study did not uncover a specific transcriptomic signature associated with COVID-19-associated AKI and suggests that COVID-19 AKI is on a common molecular spectrum with AKI associated with other types of respiratory failure and critical illness. However, future studies with increased patient numbers and functional studies are required to provide definitive answers in this regard. In summary, we observed that AKI in humans with critical illness and systemic inflammation is associated with widespread transcriptomic responses within a spectrum of kidney cell types, uncovering novel cell states and potential targets for AKI therapies. These findings suggest that precision approaches like single-cell transcriptomics may be suitable tools to overcome the current limitations in diagnosing and treating subtypes of AKI.
Conclusions Single-cell transcriptomics revealed recurring AKI-associated epithelial cell states throughout the cell types of the kidney tubular epithelium. Our observation of patient-specific heterogeneity of these responses underlines the potential utility of single-cell approaches in informing personalized AKI management. Availability of data and materials Unfiltered Cellranger output files from all samples, as well as metadata with cell type assignments, can be downloaded from NCBI GEO under accession number GSE210622.
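For readers who want to pull the deposited files, a minimal R sketch using the Bioconductor GEOquery package follows; this retrieval snippet is our illustration and not part of the original methods:

library(GEOquery)

# Download the study's supplementary files (unfiltered Cellranger outputs
# and cell type metadata) from GEO into a local "data" folder
getGEOSuppFiles("GSE210622", baseDir = "data")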
Acute kidney injury (AKI) is a frequent complication associated with various diseases and particularly affects patients on intensive care units. However, the mechanisms underlying AKI are incompletely understood. Recently, an interdisciplinary research team used single-cell sequencing techniques to uncover the molecular processes associated with AKI. Reporting in Genome Medicine and Kidney International, they describe novel gene expression patterns of injured kidney cells that may lead to new therapeutic approaches and strategies for biomarker discovery. The studies were conducted in close collaboration between Charité—Universitätsmedizin Berlin; the Berlin Institute for Medical Systems Biology (BIMSB) of the Max Delbrück Center; the German Rheumatism Research Center Berlin (DRFZ), a Leibniz Institute; and the Hannover Medical School. The kidneys are among the most important organs in the human body. They filter waste products from the blood, control body fluid composition and blood pressure, influence energy metabolism, and produce vital hormones. If kidney function is impaired—as is the case in AKI—there can be severe consequences. "AKI is a frequent and serious complication in critically ill patients, affecting about half of our intensive care unit patients," says Dr. Jan Klocke of Charité's Department of Nephrology and Medical Intensive Care. "The condition is often underestimated, despite the fact that AKI is associated with increased mortality and patients can suffer permanent damage, even complete loss of kidney function." AKI can accompany a wide range of diseases. It often occurs in conjunction with cardiovascular diseases or severe infectious diseases such as COVID-19, but also after surgical interventions or in association with drug treatment. There are often no concrete treatment options. "We try to stabilize affected patients, but so far it is usually not possible to reverse the destructive processes in the kidney with targeted treatments," says Dr. Hinze, who played a key role in supervising one of the studies at Charité and the Max Delbrück Center and now works at the Hannover Medical School. "Up to now, little has been known about which mechanisms are at play in the kidney cells. The aim of our studies was to shed some light on this, with the long-term goal of improving the treatment provided to our patients in the clinic." AKI is often triggered by an insufficient supply of blood to the kidneys, causing the cells there to no longer receive sufficient oxygen and nutrients—and to react with stress. The cells go into a kind of alarm mode and produce signal substances that can lead to inflammatory and remodeling processes (fibrosis) in the surrounding tissue. It is known from animal model studies that epithelial cells—the cells that line the fine renal tubules—are involved in these inflammatory and fibrotic processes. This was demonstrated using a state-of-the-art method called single-cell sequencing, which enables researchers to create detailed gene expression profiles of thousands of individual cells. But what happens on the cellular level in human AKI? This is the question that the research teams led by Dr. Hinze and Dr. Klocke set out to investigate. The two recently published studies are among the first to investigate the molecular processes in AKI using single-cell technologies in human kidney cells.
The scientists examined cells taken from tissue and urine samples of more than 40 patients and analyzed the molecular patterns of more than 140,000 cells using state-of-the-art bioinformatics approaches. "Single-cell sequencing allows us to virtually zoom into each cell and see which genes are active in that cell at that point in time," explains Dr. Hinze. "From this, we can determine whether that particular kidney cell is currently functioning normally, is under stress, or is about to die. This cutting-edge technology gives us an understanding of AKI in unprecedented detail." The team was also able to show that different cell types of the kidney react quite differently to AKI, with the strongest response observed in the epithelial cells of the renal tubules. These are the smallest functional units of the kidney and consist of several segments. It was known from animal models that epithelial cells of a specific early renal tubule segment were mainly affected by AKI. However, the results of the latest studies on human kidney cells have revealed that the epithelial cells of almost all tubule segments are involved in the injury processes. "This illustrates once again how important it is that we study human systems and learn to understand them better," says Dr. Hinze. "In the different types of epithelial cells, we were able to identify certain molecular patterns that occurred in all patients with AKI, but at individual abundances. In the future, these findings could help doctors to better assess the risk for severe disease progression." In clinical practice, physicians ideally need a fast, non-invasive, and precise testing method to clearly diagnose AKI at an early stage. In order to get closer to this vision for the future, Dr. Klocke started searching for epithelial cells in urine samples. Hardly any cells are found in the urine of healthy people. But in those with AKI, epithelial cells detach from the renal tubule and are excreted into the urine. However, since cells do not survive in urine for long, there were initial doubts as to whether the cells would still be intact and whether their molecular state could even be measured using single-cell sequencing. "We processed the urine samples within four to six hours, and it actually worked very well," says Dr. Klocke. The researchers were able to determine from which segment of the renal tubules the cells came and which genetic programs they had activated in response to kidney damage. "The information provided by the cells from the urine samples matched that of the corresponding cells from tissue samples," says Dr. Klocke. "Thus, urine provides us with an uncomplicated and patient-friendly method of obtaining sample material for further investigations—in order to identify biomarkers and, in the long term, perhaps reduce or even replace kidney biopsies." With the two current studies, the research team has provided completely new insights into the cellular mechanisms of AKI using single-cell sequencing, as well as promising approaches for future diagnostic procedures and personalized therapies. In further studies, they plan to enroll a larger number of patients, investigate the cellular responses in different underlying diseases, and uncover other fundamental molecular mechanisms of AKI using cell cultures.
10.1186/s13073-022-01108-9
Computer
Computer security researchers aim to prevent tech abuse
The paper is available as a PDF at emtseng.me/assets/Tseng-2022-CHI-Care-Infrastructures-Digital-Privacy-IPV.pdf
https://emtseng.me/assets/Tseng-2022-CHI-Care-Infrastructures-Digital-Privacy-IPV.pdf
https://techxplore.com/news/2022-02-aim-tech-abuse.html
Abstract Networks offer an intuitive visual representation of complex systems. Important network characteristics can often be recognized by eye and, in turn, patterns that stand out visually often have a meaningful interpretation. In conventional network layout algorithms, however, the precise determinants of a node's position within a layout are difficult to decipher and to control. Here we propose an approach for directly encoding arbitrary structural or functional network characteristics into node positions. We introduce a series of two- and three-dimensional layouts, benchmark their efficiency for model networks, and demonstrate their power for elucidating structure-to-function relationships in large-scale biological networks. Main Networks are used to investigate a wide range of technological, social and biological systems [1]. Key factors for their success are the availability of powerful mathematical and computational analysis tools, but also their intuitive visual interpretation. For example, the central position of genes within molecular networks indicates essential cellular processes [2], densely connected clusters represent functional complexes [3], and global patterns, such as the ring-like architecture of co-regulation networks, have been found to reflect principles of cellular organization [4]. However, the full potential of network visualizations for exploring complex systems is limited by several conceptual and practical challenges. (1) Networks do not have a natural two- or three-dimensional (2D or 3D) embedding. Any network layout thus involves a choice of which aspects of the high-dimensional pairwise relationships are visually represented, and which are not. (2) In widely used layout algorithms, such as force-directed methods, this choice is made in an implicit and thus opaque fashion, often based on subjective, esthetic criteria. This lack of a clear relationship between structural network characteristics and node positioning makes the resulting layouts difficult to interpret. (3) Likewise, no layout algorithms are available that allow for explicitly representing a given network characteristic. (4) Finally, the sheer size of many real-world networks is a key limiting factor for producing comprehensible layouts, leading to proverbial hair-ball visualizations. In this Brief Communication we introduce a framework for generating network layouts that addresses these challenges by using dimensionality reduction to directly encode network properties into node positions. Not only can structural network properties be visually encoded in this fashion, but also external information reflecting the functional characteristics of nodes or links. We propose the following procedure (Fig. 1a). For a given network, we first compile a set of F features for each of N nodes, incorporating any structural or functional characteristic we wish to be visually reflected in the final layout. The resulting (N × F) feature matrix is then converted into an (N × N) similarity matrix, which serves as input to dimensionality reduction methods to compute 2D or 3D embeddings. These embeddings can be used directly as node coordinates, resulting in network layouts we termed portraits. Alternatively, embeddings on 2D surfaces can be further extended towards 3D topographic or geodesic maps by using the third dimension for an additional variable of choice.
The topographic map extends a flat 2D embedding by an additional z coordinate, and geodesic maps introduce an additional radial coordinate in spherical embeddings. In total, our framework thus offers four different maps in two and three dimensions (Fig. 1b). The key advantage of our framework, offering both versatility and interpretability, is its ability to incorporate and explicitly display various desired node characteristics or node pair relationships. We implemented five examples that demonstrate the diversity of potential layouts. (1) The global layout uses network propagation for an efficient, high-resolution representation of pairwise network distances. (2) The local layout emphasizes similar connection patterns between node pairs. (3) The importance layout combines several metrics for the overall importance of a node, such as degree, betweenness, closeness and eigenvector centrality. (4) Functional layouts depict node similarities according to external node features. (5) Combined layouts allow for tuning between layouts that are dominated by either structural or functional features. Fig. 1: Framework of interpretable network maps. a, Overview. A node similarity matrix reflecting any network features to be visually represented is embedded into 2D or 3D geometries using dimensionality reduction methods. b, Schematic depiction of the resulting four types of network map: 2D and 3D network portraits directly use the outputs of the dimensionality reduction; topographic and geodesic maps incorporate an additional z or radial variable, respectively. c, The network models used for benchmarking: Cayley tree, cubic grid and torus lattice. d–f, Model network portraits based on global (d), local (e) and importance (f) layouts. The global layouts recapitulate the expected global shape according to pairwise node distances. The local layouts reveal bi- and multipartite network structures. The importance layouts cluster nodes with similar structural importance. g, Comparison of network-based and Euclidean layout distance for all node pairs in a cubic grid (N = 1,000) for the global layout, two force-directed algorithms and node2vec. All layouts achieve high correlation (Pearson's ρ glob = 0.99, ρ node2vec = 0.97, ρ force,nx = 0.97, ρ force,igraph = 0.98). Boxes summarize values of all n node pairs at network distance d, with n ranging from n = 4 at distance d = 27 (for corner node pairs) to n = 46,852 for d = 9. Whiskers denote the values for the minimum, first, second and third quartiles and maximum. h, Comparison of the final correlations for cubic grids of increasing size when limiting the wall clock running time of the algorithms to the running time of the global layout. i, Computational wall times that the respective algorithms require to achieve the same correlation as the global layout for cubic grids of increasing size. To illustrate and benchmark our framework, we first applied it to easily interpretable model networks: (1) a Cayley tree, (2) a cubic grid and (3) a torus lattice (Fig. 1c). The Cayley tree is organized in hierarchical levels. All nodes except for those in the outermost level have the same number of neighbors (degree k = 3), and all nodes within the same level have identical centrality values. The cubic lattice contains four structurally different node groups: nodes at the corners (k = 3), along the 12 edges (k = 4), on the six faces (k = 5) or in the interior (k = 6).
In the torus lattice, all nodes are equivalent in terms of all structural characteristics, including their degree (k = 4) and centrality metrics. Note that none of the model networks is defined in terms of any spatial embedding, so, in principle, no layout is in any formal sense more correct than any other. However, canonical layouts in two and three dimensions exist for all three network models, offering an intuitive visualization of their global architecture. Our global layout provides a good approximation for these idealizations (Fig. 1d). The local and importance layouts produce entirely different results, each highlighting distinct structural aspects of the model networks. In the local layouts, the nodes are sorted into groups with shared neighbors (Fig. 1e). This layout reveals bi- and multipartite network structures, resulting in two clusters in the lattice-based networks (cube and torus), and in alternating patterns reflecting the ternary structure of the Cayley tree. The importance layout identifies groups of nodes with the same network centralities (Fig. 1f). In the Cayley tree, all nodes of the same hierarchical level are clustered, and in the cubic grid, nodes of the same type (corner, edge, face nodes) and layer are grouped. In the torus, all nodes have equivalent structural roles, thus resulting in a uniform point cloud. The global layout incorporates random walk-based features similar to the graph embedding method node2vec 5 . Also, for small to moderate network sizes, standard force-directed algorithms 6 produce layouts that recapitulate network distances between node pairs. We can therefore use these algorithms as performance benchmarks. Figure 1g shows good overall correlations between network-based node distances in cubic lattice networks and the respective layout distances (Extended Data Fig. 1). A comparison of the correlations obtained for the same computational running time shows a substantial drop for force-directed algorithms as the network size increases (Fig. 1h). Conversely, force-directed methods are orders of magnitude slower for fixed layout quality (Fig. 1i). We next apply our framework to a large real-world network. The human interactome consists of N = 16,376 nodes and M = 309,355 links, representing proteins and their physical interactions that underlie biological processes 7 , 8 . Although several structure-to-function relationships in the interactome are well documented 9 , they are difficult to decipher visually from conventional layouts. Our framework offers a solution to this challenge. Figure 2a shows a 2D network portrait of the interactome in the importance layout. Visual inspection of 2,918 known essential genes reveals a relationship between their structural importance within the interactome and their biological importance. Cancer driver genes, rare disease genes and genes involved in early development show the same trend (Extended Data Fig. 2a–c). Although this finding represents one of the cornerstones of network biology 2 , it could not be derived from standard layouts (Extended Data Fig. 3a). Similarly, the agglomeration of genes associated with the same disease in local interactome neighborhoods is well documented 10 , yet remains hidden in standard layouts (Extended Data Fig. 3b). We can use functional network portraits to visualize disease-associated genes and their interconnectivity (Fig. 2b).
Although the node placement is purely driven by a functional characteristic, the underlying network structure can be inspected through the links. This supports the identification of structure-to-function relationships in an iterative cycle of data visualization, hypothesis generation and validation. In addition to disease gene interconnectivity, Fig. 2b also shows a prominent cluster of highly connected genes associated with multiple diseases (Extended Data Fig. 4). Finally, we can also generate layouts in which the node positions are determined by a combination of structural and functional features (see Extended Data Figs 5 and 6 for applications to a model network and the interactome). Fig. 2: Application to a large-scale, real-world biological network. a, Structural network portrait of the human interactome based on the importance layout. Essential genes and links between them are shown in blue and aggregate in the area of high-centrality nodes (top right). b, Functional network portrait based on disease association similarity. Four diseases are highlighted. Only links between disease genes are shown. Although most disease genes are located in four clusters (links shown by thicker lines), a smaller number of pleiotropic genes associated with multiple diseases is located at the center of the network (Extended Data Fig. 4b). c, Topographic network map in top view (left) and side view (right) obtained from a 3D interactive visualization. The x–y plane is based on a 2D global layout, and the z axis displays the number of diseases associated with a particular gene. d, Green-screen composition of a user exploring a geodesic network map in a virtual reality environment 13 . Nodes are distributed on different spherical layers that reflect different biological roles. The center contains nodes to be functionally annotated; the enclosing layers contain genes associated with similar diseases and genes involved in relevant biological processes, respectively. Each individual layer is based on a functional layout emphasizing biological similarity, allowing the user to quickly identify the biological context of individual genes and their interactome neighborhood. Network maps with an additional quantity of interest depicted in the third dimension can be used to build application-specific visualizations. Figure 2c shows a 3D topographic map of the interactome, with a global layout on the x–y plane and the number of disease associations on the z axis, highlighting, for example, the prominent role of the tumor suppressor TP53 in many cancers 11 . The top view reveals several localized node clusters, which correspond to provincial hubs and their respective neighbors 12 . The side view shows the prominent role of the provincial hubs for diseases and their relationships, such as amyloid precursor protein (APP) and ELAV-like RNA binding protein (ELAVL1), which are located at the center of the respective interactome neighborhoods that are perturbed in the associated diseases 13 . Figure 2d demonstrates how our framework can be used to generate network maps customized for the interactive annotation of rare genomic variants in a virtual reality environment 14 . The center sphere of the geodesic map contains 13 candidate genes that are suspected to cause a rare genetic disease in a particular patient.
The enclosing spheres represent genes implicated in similar phenotypes and genes involved in related biological pathways, respectively, in a functional layout reflecting biological similarity. This allows for an efficient manual inspection of the biological context of the candidate genes. The flexibility of our framework enables the development of customized network visualizations for a broad range of applications. In biology, for example, the introduced layouts may enhance existing tools for the integration and interpretation of diverse omics datasets 15 , 16 , 17 , 18 , 19 . Note that visual inspection alone will rarely suffice to conclusively establish an observed structure-to-function relationship in a given network. Any hypothesis derived from a particular visualization thus requires an additional, more rigorous evaluation outside of our framework, for example, by statistical or experimental means. Methods A framework for creating interpretable network layouts and maps Our pipeline consists of four basic steps. (1) The network of N nodes and M links is supplied in the form of a link list. (2) For each node in the network, we construct a vector of F features, resulting in an (N × F) feature matrix. The particular features that are used determine the layout. We introduce five such layouts, termed 'global', 'local', 'importance', 'functional' and 'combined' layouts, as detailed in the next sections. (3) The feature matrix is converted into an (N × N) similarity matrix, which serves as input for dimensionality reduction algorithms. The utility of dimensionality reduction techniques for network embedding is increasingly recognized, in particular for classification tasks and more recently also for visualizations 20 . We implemented the popular tools t-distributed stochastic neighbor embedding (t-SNE) 21 and uniform manifold approximation and projection (UMAP) 22 , which offer embeddings in 2D and 3D Euclidean space, as well as embeddings on 2D surfaces, such as a sphere. (4) The node coordinates can either be used directly to lay out the network or can be further enhanced by an additional third dimension in the case of 2D embeddings. We termed the direct layouts 'portraits'. Flat embeddings in 2D Euclidean space can be expanded into 3D topographic maps by using an additional, freely selectable variable as the z coordinate. Similarly, we can enhance embeddings on the surface of a sphere by introducing an additional radial variable, resulting in geodesic maps. Global layout In the global layout, each node is equipped with N features representing its network-based distances to all nodes in the network, based on the random walk with restart propagation method 23 . These random walk-based distances indicate how frequently a walker starting from node i and traveling along randomly chosen links will visit a given node j. Formally, we first determine the vector p i containing the visiting frequencies p i,j for all nodes j ∈ [1, N], starting from node i as seed for a random walk with restart probability r. These frequencies can be efficiently computed by matrix inversion according to the steady-state expression for a random walk with restart 24 . For all node pairs {n, m}, we then compute the cosine similarity S(n, m) between their respective visiting frequency vectors p n and p m and collect the results into an (N × N) similarity matrix S glob that serves as input to the dimensionality reduction step of the pipeline.
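The steady-state expression lends itself to a compact implementation. The sketch below is our illustration (not the paper's code); the restart probability r = 0.8 and the function name are assumptions.

```python
# Sketch of the global layout's feature construction via random walk with
# restart (RWR), solved in closed form by matrix inversion as described above.
import networkx as nx
import numpy as np
from sklearn.metrics.pairwise import cosine_similarity

def global_similarity(G, r=0.8):
    A = nx.to_numpy_array(G)                 # assumes no isolated nodes
    W = A / A.sum(axis=0, keepdims=True)     # column-normalized transition matrix
    N = len(G)
    # Steady state p_i = r * (I - (1 - r) * W)^(-1) e_i, solved for all seeds
    # at once; column i of P then holds the visiting frequencies p_i.
    P = r * np.linalg.inv(np.eye(N) - (1 - r) * W)
    return cosine_similarity(P.T)            # (N x N) similarity matrix S_glob
```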
Local layout The local layout is based on the similarity of nodes in terms of shared neighbors. Two nodes that are connected to the exact same set of nodes are considered maximally similar, whereas nodes that do not have any common neighbors have no similarity. We can determine this similarity directly from the adjacency matrix A of the network, defined as A i,j = 1 if nodes i and j are connected, and A i,j = 0 otherwise. For all node pairs {n, m}, we compute the cosine similarity between their corresponding columns A i,n and A i,m , resulting in an (N × N) similarity matrix S loc , which serves as input to the dimensionality reduction step. Importance layout The importance layout reflects the similarity of nodes in terms of their network centralities 1 . Network centralities measure the importance of a particular node according to its position within the network. Numerous centrality measures have been proposed, and we incorporated four of the most widely used into a feature vector. For each node i we compute its (1) degree (the number of neighbors), (2) closeness (the inverse of its average network distance to all other nodes), (3) betweenness (how often it acts as a bridge along the shortest path between two other nodes) and (4) eigenvector centrality (measuring its dynamic influence), resulting in a 4D vector c i . For all node pairs {n, m}, we then compute the cosine similarity between their corresponding vectors c n and c m , resulting in an (N × N) similarity matrix S cent , which serves as input to the dimensionality reduction step. Functional layouts Functional layouts can be used to display node similarities in terms of external features, such as the disease annotations of genes in Fig. 2b. For a given feature matrix F with F i,j = 1 if node i is annotated to feature j, and F i,j = 0 otherwise, we compute the cosine similarity between all node pairs {n, m} using the respective rows F n,j and F m,j , resulting in an (N × N) similarity matrix S func , which serves as input to the dimensionality reduction step. Combined layouts Combined layouts allow for tuning between purely structural and purely functional layouts. We first construct a matrix with elements p i,j as in the global layout above, representing the structural aspect of the final layout. For each functional feature that we wish to include, for example annotations to different diseases, we then add an additional column containing the values F i,j = 1 if node i is annotated to feature j, and F i,j = 0 otherwise. These functional columns can now be scaled by a factor m ≥ 0, thereby modulating between purely structural layouts (m = 0) and layouts that are increasingly dominated by the functional annotations (m > 0). Finally, for all node pairs {n, m}, we compute the cosine similarity S(n, m) between their vectors p n and p m and collect the results into an (N × N) similarity matrix S comb , which serves as input to the dimensionality reduction step of the pipeline. Implementation We used the Python package NetworkX 25 to generate the model networks and compute the network properties required in the different layouts, such as adjacency matrices and node centralities. The force-directed layouts were generated using the Fruchterman–Reingold algorithm 6 as implemented in NetworkX and igraph 26 , respectively, and using ForceAtlas2 27 .
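To make the local and importance constructions above concrete, here is a sketch (our illustration, not the paper's code) using NetworkX's centrality routines; the function names are hypothetical.

```python
# Sketch of the local (shared-neighbor) and importance (centrality-based)
# similarity matrices defined above.
import networkx as nx
import numpy as np
from sklearn.metrics.pairwise import cosine_similarity

def local_similarity(G):
    A = nx.to_numpy_array(G)       # adjacency matrix; symmetric for undirected G
    return cosine_similarity(A)    # (N x N) matrix S_loc

def importance_similarity(G):
    # Node order follows G.nodes and is identical for all four centrality dicts.
    c = np.column_stack([
        list(nx.degree_centrality(G).values()),
        list(nx.closeness_centrality(G).values()),
        list(nx.betweenness_centrality(G).values()),
        list(nx.eigenvector_centrality_numpy(G).values()),
    ])                             # (N x 4) centrality feature vectors c_i
    return cosine_similarity(c)    # (N x N) matrix S_cent
```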
Dimensionality reduction methods were implemented using the t-SNE 24 and UMAP Python packages 25 , and the node2vec algorithm was implemented using the StellarGraph library 28 . Note that the implemented dimensionality reduction methods are not strictly deterministic, so that repeated calls may lead to slightly different outputs. To maximize reproducibility, we therefore set a fixed random seed in the provided Python code. To evaluate how well a particular layout algorithm reproduces network-based distances between nodes, we computed for all node pairs {n, m} the length of the respective shortest paths \(d_{n,m}^{\mathrm{SP}}\) and their Euclidean distance \(d_{n,m}^{\mathrm{Euc}}\) within the layout. The agreement between the two was then quantified using the Pearson correlation coefficient: $$r = \frac{\sum_{\{n,m\}} \left(d_{n,m}^{\mathrm{SP}} - \mu^{\mathrm{SP}}\right)\left(d_{n,m}^{\mathrm{Euc}} - \mu^{\mathrm{Euc}}\right)}{\sqrt{\sum_{\{n,m\}} \left(d_{n,m}^{\mathrm{SP}} - \mu^{\mathrm{SP}}\right)^{2} \, \sum_{\{n,m\}} \left(d_{n,m}^{\mathrm{Euc}} - \mu^{\mathrm{Euc}}\right)^{2}}}$$ where \(\mu^{\mathrm{SP}}\) and \(\mu^{\mathrm{Euc}}\) denote the respective mean values of network-based and Euclidean distances across all node pairs. We used the implementation contained in the numpy Python package 29 . Computational wall time was measured on computer hardware with a 2-GHz Quad-Core Intel Core i5 processor and 16 GB of RAM. Data availability All input files, together with the complete source code, have been deposited in a Zenodo repository 30 . The human interactome network was extracted from the HIPPIE database 31 , filtering for protein–protein interactions with at least one supporting PubMed article. Disease gene associations were taken from the DisGeNET database 32 and mapped to disease categories according to Disease Ontology (DO) 33 . Functional gene annotations were derived from the 'biological processes' branch of the Gene Ontology (GO) database 34 . Essential genes were obtained from the Online Gene Essentiality (OGEE) database 35 , rare disease genes from OrphaNet 36 and genes involved in early development from the EmExplorer database 37 . Source data are provided with this paper. Code availability Python source code and input data for reproducing the results in this paper are publicly available from the Zenodo repository 30 . We also provide the code as a Python package on GitHub at , together with Jupyter notebooks including a quickstarter, as well as separate notebooks for reproducing each figure. The CartoGRAPHs framework can also be used as an interactive web application at and source code is provided at (Extended Data Fig. 7). As output, interactive 2D and 3D network images can be generated and downloaded in HTML format. Layouts can also be exported as XGMML files that can be loaded for further processing in the Cytoscape software 38 . Finally, we offer export in Wavefront OBJ format for use in 3D printing processes or for exploring network maps in VRNetzer, a virtual reality platform 12 for network visualization and analysis.
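The layout-quality metric above is straightforward to reproduce. The sketch below is our illustration (not the authors' code); it assumes a connected graph and uses numpy's Pearson implementation, mirroring the equation.

```python
# Sketch: Pearson correlation between all-pairs shortest-path distances and
# Euclidean distances in a given layout, as defined in the Methods above.
import networkx as nx
import numpy as np

def layout_quality(G, pos):
    """pos maps each node to its layout coordinates (2D or 3D)."""
    sp = dict(nx.all_pairs_shortest_path_length(G))   # O(N*M); fine for N ~ 10^3
    nodes = list(G)
    d_net, d_euc = [], []
    for i, n in enumerate(nodes):
        for m in nodes[i + 1:]:
            d_net.append(sp[n][m])
            d_euc.append(np.linalg.norm(np.asarray(pos[n]) - np.asarray(pos[m])))
    return np.corrcoef(d_net, d_euc)[0, 1]

# Example: score a force-directed layout of a 10x10x10 cubic grid (N = 1,000).
G = nx.grid_graph(dim=[10, 10, 10])
print(layout_quality(G, nx.spring_layout(G, seed=42)))
```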
Researchers at Cornell Tech have created a new approach to helping survivors of domestic abuse stop assailants from hacking into their devices and social media to surveil, harass and hurt them. The model focuses on "continuity of care," so clients experience a seamless relationship with one volunteer tech consultant over time, similar to a health care setting. It matches survivors with consultants who understand their needs and establish trust, offers survivors multiple ways to safely communicate with consultants, and securely stores their tech abuse history and concerns. "Personal data management in tech abuse is a complex thing that can't always be 'solved' in a single half-hour visit," said Emily Tseng, a doctoral student and lead author on a paper about the model. "Most of the approaches that exist in tech support are limited by a one-size-fits-all protocol more akin to an emergency room than a primary care provider." Tseng will present the paper "Care Infrastructure for Digital Security in Intimate Partner Violence" in April at the ACM CHI Conference on Human Factors in Computing Systems in New Orleans. Tseng and her colleagues at Cornell Tech's Clinic to End Tech Abuse developed the new approach in partnership with New York City's Mayor's Office to End Domestic and Gender-Based Violence. Their research draws on eight months of data, as well as interviews with volunteer technology consultants and experts on intimate partner violence (IPV). "This work provides an honest look at both the benefits and burdens of running a volunteer technology consultant service for IPV survivors, as well as the challenges that arise as we work to safely provide computer security advice as care," said co-author Nicola Dell, associate professor at Cornell Tech's Jacobs Technion-Cornell Institute. "Our hope is that our experiences will be valuable for others who are interested in helping at-risk communities experiencing computer insecurity." Survivors can experience many forms of gender-based violence, including technology-facilitated abuse, said Cecile Noel, commissioner of the Mayor's Office to End Domestic and Gender-Based Violence. "Cornell Tech's groundbreaking program not only helps survivors experiencing technology abuse but is also working to better understand how people misuse technology so that we can create better protections for survivors," Noel said. "We are proud of the critical role our longstanding partner Cornell Tech plays in improving the lives of survivors." Tech abuse often exists within a larger web of harm, Tseng said. "In an ideal world, the people on the 'Geek Squad' would be able to treat tech abuse with the sensitivity of a social worker." Assailants can abuse their victims through tech including spyware, also known as stalkerware, and through inappropriate use of location-tracking features in phones and other devices. They harass their former partners on social media, such as by posting private photos and posing as their victims to alienate family and friends. Abusers can also hack into email accounts and change recovery emails and phone numbers to their own, potentially devastating their victims' careers. In previous models, counselors remained anonymous, limiting their ability to build trust with survivors. Short, one-time appointments were not long enough to address clients' needs.
And appointments took place at a specific time; survivors who could not leave their homes or find a safe, private place to take a call were unable to access services and couldn't reach counselors at other times. It can be frustrating and even re-traumatizing for survivors to share their stories with new consultants at each appointment, Tseng said. One of the team's larger goals is to offer survivors more peace of mind and feelings of empowerment—that they have the tools to handle future challenges. "With technology, there are so many ways to remain entangled with your abuser even after you've physically and romantically left the relationship," Tseng said. One tricky element is determining how much support is realistic. While a one-time "urgent care" visit is probably insufficient, prolonged engagement would be unsustainable for consultants and the clinic as a whole. "In several cases, consultants ended up working with clients over many appointments stretching on for weeks or months," Tseng said. As a next step, she wants to explore additional ways to evaluate ongoing security-care relations from the perspective of survivors, particularly people from marginalized communities. Dell co-created the Clinic to End Tech Abuse with Thomas Ristenpart, associate professor at Cornell Tech; both Dell and Ristenpart are also affiliated with the Cornell Ann S. Bowers College of Computing and Information Science.
emtseng.me/assets/Tseng-2022-C … ital-Privacy-IPV.pdf
Biology
Researchers identify new mechanism for keeping DNA protein in line
Susan E. Tsutakawa et al, Phosphate steering by Flap Endonuclease 1 promotes 5′-flap specificity and incision to prevent genome instability, Nature Communications (2017). DOI: 10.1038/ncomms15855 Journal information: Nature Communications
http://dx.doi.org/10.1038/ncomms15855
https://phys.org/news/2017-06-mechanism-dna-protein-line.html
Abstract DNA replication and repair enzyme Flap Endonuclease 1 (FEN1) is vital for genome integrity, and FEN1 mutations arise in multiple cancers. FEN1 precisely cleaves single-stranded (ss) 5′-flaps one nucleotide into duplex (ds) DNA. Yet, how FEN1 selects for but does not incise the ss 5′-flap was enigmatic. Here we combine crystallographic, biochemical and genetic analyses to show that two dsDNA binding sites set the 5′ polarity and to reveal unexpected control of the DNA phosphodiester backbone by electrostatic interactions. Via 'phosphate steering', basic residues energetically steer an inverted ss 5′-flap through a gateway over FEN1's active site and shift dsDNA for catalysis. Mutations of these residues cause an 18,000-fold reduction in catalytic rate in vitro and large-scale trinucleotide (GAA) n repeat expansions in vivo , implying that failed phosphate steering promotes an unanticipated lagging-strand template-switch mechanism during replication. Thus, phosphate steering is an unappreciated FEN1 function that enforces 5′-flap specificity and catalysis, preventing genomic instability. Introduction The structure-specific nuclease flap endonuclease-1 (FEN1) plays a vital role in maintaining genome integrity by precisely processing intermediates of Okazaki fragment maturation, long-patch excision repair, telomere maintenance, and stalled replication forks. During DNA replication and repair, strand displacement synthesis produces single-stranded (ss) 5′-flaps at junctions in double-stranded (ds) DNA. During replication in humans, FEN1 removes ∼ 50 million Okazaki fragment 5′-flaps with remarkable efficiency and selectivity to maintain genome integrity 1 , 2 , 3 . Consequently, FEN1 deletion is embryonically lethal in mammals 4 , and functional mutations can lead to cancer 5 . FEN1 also safeguards against DNA instability responsible for trinucleotide repeat expansion diseases 6 . As FEN1 is overexpressed in many cancer types 7 , 8 , it is an oncological therapy target 9 , 10 . Precise FEN1 incision site selection is central to DNA replication fidelity and repair. FEN1 preferentially binds to double-flap substrates with a one nt 3′-flap and any length of 5′-flap, including zero. It catalyses a single hydrolytic incision one nucleotide (nt) into dsDNA (Fig. 1a) to yield nicked DNA ready for direct ligation 11 , 12 . Thus, FEN1 acts on dsDNA as both an endonuclease (with 5′-flap) and an exonuclease (without 5′-flap). Recent single-molecule experiments show that FEN1 binds both ideal and non-ideal substrates but decisively incises only its true substrate 13 . In contrast to homologs in bacteriophage 14 , 15 , 16 and some eubacteria 17 , eukaryotic FEN1s do not hydrolyse within 5′-flap ssDNA. Figure 1: Specificity and inverted threading of the ss 5′-flap in the hFEN1-D86N substrate structure. (a) Schematic of FEN1 incision on an optimal double-flap substrate, incising 1 nt into dsDNA to ensure a ligatable product. (b) Proposed models for ssDNA selection. (c) Top view of the hFEN1-D86N crystal structure showing extensive interactions with the dsDNA arms of the 5′-flap substrate. The 5′-flap substrate is composed of three strands: the 5′-flap strand (orange), the template strand (brown), and the 3′-flap strand (pink).
Functionally critical regions in FEN1 include the gateway (blue) and the cap (violet) for selecting substrates with ss 5′-flaps, the hydrophobic wedge between the 3′-flap binding site and the gateway/cap (dark green), the K + ion and H2TH (purple) that interact with the downstream DNA, and the beta pin (grey) that locks in the DNA at the bend. Relative DNA orientation is shown in the schematic on the lower right. (d) Front and side views of the hFEN1-D86N crystal structure showing how the helical gateway and cap architecture positions positively-charged residues to steer ss 5′-flaps through a protecting gateway in an inverted orientation across the active site. Relative DNA orientation is shown in the schematic. The inverted 5′-flap ssDNA is threaded between the gateway helices (blue) and under the helical cap (violet). The inverted threading reveals charged interactions to basic sidechains in the cap and van der Waals interactions to the ssDNA. See also Supplementary Figs 1–3 , Table 1 , and Supplementary Movies 1 and 2 . However, key features of FEN1 substrate selection remain unclear. FEN1 must efficiently remove 5′-flaps at discontinuous ss-dsDNA junctions yet avoid genome-threatening action on continuous ss–ds junctions, such as ss gaps or Holliday junctions. Paradoxically, other FEN1 5′-nuclease superfamily members 3 are specific for continuous DNA junctions: namely, ERCC5/XPG (nucleotide excision repair), which acts on continuous ss-ds bubble-like structures; and GEN1 (Holliday junction resolution), which processes four-way junctions. Structures determined with DNA of eukaryotic superfamily members lack an ss-ds junction substrate with 5′-ssDNA or the attacking water molecule, leaving cardinal questions unanswered 18 , 19 , 20 , 21 , 22 . For example, structures of FEN1 and Exo1 go from substrate duplex DNA with the scissile phosphodiester far from the catalytic metals to an unpaired terminal nt in the product; is the unpairing occurring before or after incision? Models of FEN1 specificity must address how ss–ds junctions are recognized and how 5′-flaps, as opposed to continuous ssDNA, are recognized. There are threading and kinking models. To exclude continuous DNAs, 5′-flaps may thread through a 'tunnel' 21 , 23 , 24 , 25 formed by two superfamily-conserved helices flanking the active site, known as the 'helical gateway', topped by a 'helical cap' (Fig. 1b). Due to cap and gateway disorder in DNA-free FEN1, they are thought to be disordered during threading and to undergo a disorder-to-order transition on 3′-flap binding 21 , 24 , 26 . In this threading model, however, ssDNA passes through a tunnel without an energy source and directly over the active site, risking non-specific incision. These issues prompted an alternative clamping model where the ss flap kinks away from the active site 11 , 20 (Fig. 1b). Whereas these models explain selection against continuous DNA junctions, FEN1 exonuclease activity does not require a 5′-flap. Furthermore, how FEN1 prevents off-target incisions and moves the dsDNA junction onto the metal ions are not explained by these models. Here crystallographic analyses uncover an unprecedented electrostatic steering of an inverted 5′-flap through the human FEN1 (hFEN1) helical gateway. Gateway and cap positively-charged side chains are positioned to 'steer' the phosphodiester backbone across the active site, energetically promoting threading and preventing nonspecific hydrolysis within the 5′-flap.
Mutational analysis of these positively charged 'steering' residues revealed an added role of phosphate steering in moving dsDNA towards the catalytic metal ions for reaction. Moreover, phosphate steering mutations efficiently blocked Rad27 (the S. cerevisiae homolog of hFEN1) function, causing a compromised response to DNA-damaging agents and dramatically increased expandable repeat instability. Results FEN1 selects for 5′-flaps by steering the flap through a gateway To obtain structures of hFEN1 with a ss 5′-flap substrate for insight into ss 5′-flap selection, we crystallized three hFEN1 active site mutants, D86N, R100A and D233N, with a double-flap (DF) substrate and with Sm 3+ (Fig. 1 and Supplementary Figs 1 and 2A) 27 . Mg 2+ is the physiological cofactor. The D86N, R100A and D233N mutations slow the hFEN1-catalysed reaction rate by factors of 530, 7,900 and 16, respectively (Supplementary Fig. 2B). The DF substrates in the crystal structures had a ss 5′-flap (4–5 nt) and a 1 nt 3′-flap, termed S4,1 or S5,1 (Supplementary Fig. 1). The DNA-enzyme complex structures for hFEN1-D86N, hFEN1-R100A, and hFEN1-D233N were determined to 2.8, 2.65, and 2.1 Å resolution, respectively (Figs 1c,d and 2, Supplementary Fig. 2 and Table 1). In all cases, the overall protein resembled wild-type (wt) hFEN1 (with product DNA, PDB code 3Q8K) 21 , with root mean square deviation (RMSD) values of 0.26 Å for hFEN1-R100A, 0.22 Å for hFEN1-D233N, and 0.42 Å for hFEN1-D86N. Figure 2: FEN1 superfamily sequence and secondary structure alignment. Map of FEN1 secondary structure (PDB code 3Q8K), structural elements and mutants onto a sequence alignment of human FEN superfamily members. XPG residues 117–766 were removed (dash) to facilitate alignment. Table 1: X-ray data collection and refinement statistics (molecular replacement). These structures show that FEN1 interacts primarily (88% by PISA interface analysis 28 ) with two regions of ∼ 100° bent dsDNA, supporting prior observations 21 , rather than with the ss 5′-flap in these structures (Figs 1c,d and 2, Supplementary Movie 2). FEN1 binding to dsDNA is mediated by four regions: (1) a hydrophobic wedge (composed of helix 2 and the helix 2–3 loop) and a β pin (formed between β strands 8 and 9) sandwich the upstream and downstream dsDNA portions at the bending point of the two-way junction, with Tyr40 packing at the ss/dsDNA junction; (2) a C-terminal helix-hairpin-helix motif binds upstream dsDNA and the one nt 3′-flap and is absent from superfamily-related members hEXO1 (ref. 20 ) and bacteriophage 5′-nuclease structures 29 ; (3) the helix-two-turns-helix (H2TH) motif with a bound K + ion and positive side chains binds downstream dsDNA; and (4) the two-metal-ion active site near the 5′-flap strand. Much of the interaction (43% by PISA analysis) is to the strand complementary to the flap strands, reinforcing dsDNA specificity. The dsDNA binding sites on either side of the active site, the K + ion and the hydrophobic wedge, are spaced one helical turn apart (Supplementary Movie 2). Their spacing enforces the specificity for helical dsDNA and places the 5′-flap in the active site, selecting against unstructured ssDNA or 3′-flaps that would require a narrower spacing. Additionally, the minor-groove phosphate backbone is recognized by the superfamily-conserved Arg70 and Arg192 pair, spaced 14 Å apart (Fig. 2, Supplementary Movie 1).
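For orientation, the RMSD values quoted above are a simple quantity. The sketch below is our illustration (not from the paper) of its computation from two pre-aligned coordinate arrays; real structure comparisons first superpose the chains, a step omitted here.

```python
# Illustrative sketch: root mean square deviation (RMSD) between two
# pre-aligned (N x 3) coordinate arrays in angstroms.
import numpy as np

def rmsd(coords_a, coords_b):
    diff = coords_a - coords_b   # per-atom displacement vectors
    return np.sqrt((diff ** 2).sum(axis=1).mean())
```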
Unique to FEN1, positively charged cap side chains (Lys125, Lys128, Arg129) interact with the template strand at the ss/dsDNA junction (Supplementary Fig. 3, Supplementary Movie 1). Lys128 and Arg47 pack against each other, linking the 3′-flap pocket to the gateway helices. The active site consists of seven superfamily-conserved metal-coordinating carboxylate residues plus invariant Lys93 and Arg100 from gateway helix 4 and Gly2 at the processed N-terminus (Figs 2 and 3c, Supplementary Movie 4). An ordered gateway and cap, formed by helices 2, 4 and 5, are observed above the active site in these three structures. Helix 2 Tyr40 forms part of the hydrophobic wedge and packs against the duplex DNA at the bend. Figure 3: Three FEN1 crystal structures show threading through the capped gateway. (a) DNA from the hFEN1-D86N, hFEN1-R100A, and hFEN1-D233N structures showed threading, not clamping. Tyr40 (stick model) changes its rotamer to track DNA movement through the gateway in the hFEN1-D86N structure. (b) A protein chain overlay between hFEN1-R100A (outline) and hFEN1-D86N (coloured) highlights how the helical gateway and cap and the DNA rotate closer together in the hFEN1-D86N complex. See Supplementary Movie 3 . (c) The hFEN1-D86N active site revealed a water molecule positioned for linear attack on the scissile phosphate. Orthogonal views are shown. The second metal position (outlined in black and denoted by Me*) is not observed in the hFEN1-D86N structure and is shown by overlaying the protein from the wt-product structure (PDB code 3Q8K). See also Supplementary Figs 1–4 , and Supplementary Movie 4 . (d) Protein chain overlay of the hFEN1-D86N-substrate (coloured) and wt-product (outline, PDB code 3Q8K) structures shows that the scissile phosphate is shifted in the active site by ∼ 2 Å (demarcated by an arrow). In the hFEN1-D86N and hFEN1-R100A structures, the ssDNA (5′-flap) region of the substrate threaded through the tunnel formed by the gateway/cap (Figs 1d and 2a, and Supplementary Fig. 2A and Supplementary Movie 1). This observation explains how FEN1 excludes continuous DNA structures such as Holliday junctions and DNA bubbles. The third independent hFEN1-D233N crystal structure captures two cleaved nts from the 5′-flap bound on the other side of the tunnel from the dsDNA, consistent with threading (Fig. 3a and Supplementary Fig. 2A). Together, these three distinct structures support the threading model, validating that substrates have a ss 5′-flap. Phosphate steering inverts the ss phosphodiester backbone In both threaded substrate structures, the ss 5′-flap phosphodiester backbone is 'inverted' between the +1 and +2 positions, with the +2 and +3 phosphates facing away from the active site metals and the DNA bases facing the metals (Fig. 1b,d and Supplementary Movie 1). (We denote the plus and minus positions relative to the scissile phosphate.) This inversion would place the flap phosphodiesters away from the catalytic metals and thereby logically reduce inadvertent incision within the ssDNA. In both structures, the inverted +1 phosphodiester is directly between the gateway helices with the bases on either side of the gateway. In the hFEN1-D86N structure, two basic residues of the gateway/cap, Arg104 and Lys132, were within 4–7 Å of the +1, +2 and +3 phosphodiesters. These residues are positioned to energetically promote threading and an inverted orientation. They are conserved in FEN1 and semi-conserved across the 5′-nuclease superfamily and have been shown to be important for incision activity (Fig.
2 and Supplementary Fig. 3) 3 , 21 , 30 . The +2 and +3 nts of the 5′-flap were sandwiched between the main chain (residues 86–89 and residues 132–135) on one side of the channel and Leu97 on the other (Supplementary Fig. 2C) by non-sequence-specific van der Waals contacts. The overall inverted flap orientation resembles the hFEN1-R100A structure, with the +1 phosphate remaining within 7 Å of Arg104 but shifted towards Arg103, presumably due to Arg100 removal. Together, these substrate structures suggest that basic residues enable a phosphate steering mechanism, which we here define as electrostatic interactions that can dynamically position the phosphodiester backbone. Shifting of the scissile phosphate and the catalytic mechanism In the hFEN1-D86N structure, we were surprised to find that the scissile phosphate was within catalytic distance of the active site while the surrounding bases remained basepaired to the template strand (Fig. 3). This contradicts the prevailing hypothesis that surrounding bases must unpair for the scissile phosphate to move into the active site for incision 3 . Similarities and functionally significant differences appeared on closer examination of hFEN1-D86N, hFEN1-R100A, and an earlier structure of FEN1-substrate with no 5′-flap or +1 phosphate (PDB code 3Q8L). In all three substrate structures, the dsDNA major groove is widened as it approaches the active site, and the DNA bases flanking the scissile phosphate are stacked with one another, with the +1 base packed against Tyr40. However, the basepairing, the scissile phosphodiester bond location and the Tyr40 rotamer are distinctly different in the respective complexes, despite containing the same dsDNA sequence. In the 3Q8L structure, the DNA remained fully basepaired, and the scissile phosphodiester was positioned ∼ 6 Å away from the catalytic metals. In the hFEN1-R100A structure, the scissile phosphodiester bond was ∼ 4–5 Å away from the metal ions, although the −1 and +1 nts had moved towards the active site and away from the template strand. The −1 and −2 nts display less base overlap (stacking), and the +1 and −1 nts are no longer hydrogen bonded to the template strand (4–6 Å apart). In striking contrast, the scissile phosphodiester bond was directly coordinated to the one active site metal ion in the hFEN1-D86N structure (Fig. 3c). Furthermore, the +1 and −1 nts remained unexpectedly basepaired to the template strand, which is shifted relative to the other substrate structures via a dsDNA distortion surrounding the scissile phosphodiester (Fig. 3b and Supplementary Fig. 2D,E). There is no base stacking between the −1 and −2 nts in the 5′-flap strand; instead, an unusual interstrand base stacking interaction occurs between the −2 nt of the 5′-flap strand and the template strand opposite the −1 nt. We had hypothesized that unpairing of the +1 and −1 nts was required to move the scissile phosphate to within catalytic distance of the active site metals 3 , 21 , 31 , 32 . This new hFEN1-D86N substrate structure shows instead that basic residues can rotate dsDNA into the active site with basepairing intact (Fig. 3, Supplementary Movie 3). Moreover, since the DNA in 3Q8L, which was the furthest from the active site, lacked a 5′-flap or +1 phosphate, the DNA movements observed in the hFEN1-R100A and hFEN1-D86N structures are likely partly a consequence of the 5′-flap and/or the +1 phosphate.
In concert with the DNA rotation in hFEN1-D86N, Tyr40 is in a different rotamer conformation from all other substrate-bound, product-bound or DNA-free hFEN1 structures (Fig. 3a,b and Supplementary Movie 3) 21 , 26 . This Tyr40 rotamer shift tracks duplex DNA rotation into the active site. The Tyr ring is fully stacked on the +1 base, and its side chain hydroxyl forms a hydrogen bond to the +1 phosphate. Notably, as the duplex DNA is not shifted close to the catalytic metal in the R100A structure, this structure may represent a pre-reactive substrate form. Its Tyr40 stacks at a 50° angle with the +1 nt and resembles the other hFEN1 structures, suggesting that the Tyr40 rotamer is linked to shifting duplex DNA into a catalytic position. In the hFEN1-D86N-substrate structure, the cap and gateway helices 4 and 5 are shifted 1–3 Å towards the dsDNA relative to all other hFEN1 crystal structures with DNA (Fig. 3b and Supplementary Movie 3). The backbone of helix 2 (which contains Tyr40) does not change position. Close examination of the hFEN1-D86N structure revealed a water molecule 3.3 Å from the scissile phosphate (Fig. 3c, Supplementary Fig. 4, and Supplementary Movie 4). This water is positioned for a linear attack on the scissile phosphate and for its evident activation by the catalytic metal and the Gly2 at the FEN1 N terminus, which was proposed to replace the 'third' metal in bacteriophage FEN 33 . Asp233 is 3 Å from the attacking water and contributes modestly to catalysis; the D233N mutant has 16-fold reduced but still substantial catalytic activity compared to mutants of other invariant carboxylates, such as D181A 21 and D86N (Supplementary Fig. 2B). When a second metal ion is modeled by overlay with the hFEN1-product structure (PDB code 3Q8K), the structure is reminiscent of classical two-metal-ion catalysis 34 . Moreover, the superfamily-conserved and catalytically required 21 Lys93 and Arg100 sidechains point towards the scissile phosphodiester bond, poised to assist metal-ion-mediated hydrolysis. On the basis of the hFEN1-R100A structure, Arg100 is also likely essential for shifting of the scissile phosphate into direct contact with the catalytic metals. Notably, the scissile phosphate has moved ∼ 1–2 Å between the hFEN1-D86N-substrate and wt hFEN1-product structures (Fig. 3d). An analogous metal movement into more optimal coordination geometry in an RNaseH-product structure was proposed to favour product formation 35 . We cannot exclude a possible third metal ion, as time-resolved experiments on other enzymes show that metal ions can appear and disappear during the reaction 36 , 37 , 38 , 39 , 40 . Together, these structures reinforce and extend biochemical data suggesting that FEN1 checks for the ss 5′-flap by threading it through a tunnel formed between the active site and the capped gateway helices (Fig. 1d) 24 , 41 . The substrate structures imply that the 5′-flap is (1) electrostatically steered through the capped gateway by conserved basic residues in the gateway and cap and (2) positioned in an inverted orientation. Biochemically testing phosphate steering If the gateway/cap region basic residues steer the phosphodiester backbone as implied by the structures, then their mutation should affect 5′-flap substrate incision rates. On the basis of the hFEN1-R100A structure, we mutated three basic residues (Arg103, Arg104 and Lys132) positioned to guide the phosphodiester backbone and stabilize the inverted ssDNA orientation (Fig. 1d).
We also mutated Arg129, which is adjacent to the other residues and could act in steering. When the helical cap is structured, Arg129 makes a long-range electrostatic interaction with a phosphate of the template strand 21 , a distance shortened in hFEN1-D86N by template strand relocation. Strikingly, these four basic residues are conserved across all FEN1s, including the yeast and archaeal enzymes, except for the less-specific phage 5′-nucleases (Supplementary Fig. 3). As the helical gateway and cap regions are flexible before productive DNA binding 22 , 24 , 26 , specific interactions would seem unlikely during the flap threading process, but electrostatic guidance is possible. Notably, as these side chains range from 10 to 19 Å from the target phosphate, they are unlikely to impact FEN1 activity by aspects other than electrostatic guidance and substrate positioning. To test this idea, we mutated them to alanine or glutamate to either remove the attractive positive charge or provide a repulsive charge, respectively. These charge mutations all reduced specific incision activity on a 5′-flap substrate, S5,1 (Fig. 4a), indicating an important functional role. Under multiple turnover conditions, the single mutations R103A and K132A moderately decreased the reaction rate relative to wt hFEN1, by 3- and 5-fold, respectively, whereas a 20-fold decrease was observed with either R104A or R129A (Fig. 4b and Supplementary Fig. 5A,B). These rate decreases are consistent with a single-residue electrostatic guidance interaction 42 . The double mutant R104AK132A showed an additive effect with 200-fold reduced activity and, significantly, the corresponding repulsive mutant R104EK132E was far more severely compromised, with a rate reduction of 11,000-fold compared to the wt enzyme. Importantly, the substrate dissociation constant (K d ) for each of these double mutants was only modestly raised (Supplementary Fig. 6). This suggests deficient substrate positioning, not poor binding, as the major contributing factor to diminished activity. Similarly, the double mutants R103AR129A and R103ER129E showed reductions in reactivity of 70- or 5,000-fold, respectively, without any substantial effect on K d . Analogous trends in rate effects were observed under single turnover kinetic conditions (Supplementary Fig. 5C,D). Figure 4: Phosphate steering residue mutants show reduced activity. (a) Schematic of the substrates used, with fluorescent label positions. (b) Comparison of multiple turnover rates for cleavage of each substrate at 50 nM. Reaction rates for glutamate mutants with S0,1-5OH were too slow to measure accurately, so threshold values are indicated. See also Supplementary Figs 1, 5 and 6 . Error bars represent s.e.m., with replicate numbers given in Supplementary Fig. 5E . Mutating all four gateway/cap residues to glutamate (the 'QUAD-E' mutant) severely impaired activity (18,000-fold slower than wt FEN1). Strikingly, the K d increased only 17-fold, showing that the enzyme was folded and capable of substrate binding. This large rate decrease is remarkable for mutation of residues not acting in catalysis and distant from the active site: it resembles the penalty for streptavidin added to 5′-biotinylated substrates, which would prevent 5′-flap gateway/cap threading 24 . If the FEN1 basic cap residues are primarily required for ss 5′-flap steering, then their mutation should not be deleterious to incision activity on an exonucleolytic substrate lacking a 5′-flap but with a 5′-phosphate (S0,1-5P; Fig. 4a).
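As a back-of-the-envelope illustration (ours, not an analysis from the paper), the fold reductions quoted above can be converted into apparent transition-state energy penalties via ΔΔG = RT ln(k_wt/k_mut), assuming the fold-changes behave as simple rate ratios at 298 K.

```python
# Hypothetical sketch: apparent energetic cost of the steering mutations,
# using fold reductions quoted in the text and ddG = RT * ln(k_wt / k_mut).
import math

RT = 0.593  # kcal/mol at 298 K

fold_slower = {"R104AK132A": 200, "R104EK132E": 11_000, "QUAD-E": 18_000}
for mutant, fold in fold_slower.items():
    ddg = RT * math.log(fold)
    print(f"{mutant}: {fold:>6}-fold slower -> ddG ~ {ddg:.1f} kcal/mol")
```

On this rough accounting, the QUAD-E mutant's 18,000-fold rate loss corresponds to an apparent penalty of roughly 5.8 kcal/mol, striking for residues that do not participate directly in catalysis.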
This exonucleolytic substrate (S0,1-5P) was hydrolysed sevenfold more slowly than the DF S5,1 by wt hFEN1 (Fig. 4b and Supplementary Fig. 5A,B), showing that threading the 5′-flap facilitates access to the catalytically competent conformation, as well as being a key mechanism in substrate selection. When reaction rates were expressed relative to wt hFEN1 to normalize for this sevenfold difference, the gateway mutants all proved similarly defective on both the exonucleolytic (S0,1-5P) and endonucleolytic (S5,1) substrates (relative rates given in Supplementary Fig. 4B). These results unmask a key universal role for +1 phosphate steering in the FEN1 incisions of both exonucleolytic and endonucleolytic substrates (since this phosphate is present in both substrates). Given the results with the exonucleolytic substrate and the observation that DNA movement towards the active site required a +1 phosphate 43 , we reasoned that some basic residues were electrostatically interacting with the +1 phosphodiester of dsDNA to facilitate this movement 44 . To test this hypothesis, we measured reaction rates with an analogous exonucleolytic substrate lacking the 5′-phosphate at the +1 position (S0,1-5OH; Fig. 4a). This substrate was bound 20-fold more weakly and incised 300-fold more slowly than S0,1-5P by wt hFEN1 (Fig. 4b, Supplementary Figs 4 and 5). These data indicate that 5′-phosphate (+1 phosphate) interactions stabilize the enzyme-substrate complex and contribute to catalysis. Combined and individual R103A and R129A mutations all decreased incision rates of S0,1-5OH analogously to the other substrates. However, R104A, K132A and R104AK132A all processed S0,1-5OH at a similar rate to wt hFEN1. These results imply that the +1 phosphate group functionally interacts with Arg104 and Lys132, consistent with the phosphate steering hypothesis, but that Arg103 and Arg129 (along with Arg100 and Lys93) have long-range interactions with other parts of the DNA substrate, including the scissile phosphate itself. A role for phosphate steering in genome stability To test the biological importance of the basic cap and gateway residues, we made equivalent mutations in the helical gateway/cap region of Rad27, the S. cerevisiae homolog of hFEN1, to analyse their role in genome integrity in vivo . Alanine or glutamate mutations were introduced at Rad27 Arg104, Arg105, Arg127 and Lys130, equivalent to hFEN1 Arg103, Arg104, Arg129 and Lys132, respectively (Fig. 5a). Growth characteristics of the double- or quadruple-mutant yeast strains were compared to the wild-type RAD27 (wt) strain and to rad27-D179A (corresponding to human D181A, whose incision rate is given in Supplementary Fig. 2B), a severely catalytically impaired mutant that displays an equivalent phenotype to the rad27 null strain (that is, sensitivity to hydroxyurea, a replication inhibitor, and to DNA-damaging UV light 1 , 45 ). Figure 5: Phenotypic and DNA repeat expansion defects of Rad27 basic cap residue mutations in the yeast S. cerevisiae . (a) Table of the tested basic residues in yeast (and their human counterparts) and spot tests (serial fivefold dilutions) for yeast growth with and without exposure to hydroxyurea (a replication inhibitor) or DNA-damaging UV light. (b) Experimental system to measure the rates of large-scale repeat expansions in yeast. The (GAA) 100 /(TTC) 100 repeat is incorporated into the intron of an artificially split URA3 gene. Addition of ≥10 extra repeats inhibits the reporter's splicing, which allows cells with repeat expansions to grow on 5-FOA-containing media.
(c) Effect of active site control and phosphate steering mutations in the RAD27 gene on repeat expansion rates (error bars represent 95% confidence intervals of the calculated expansion rates). (d) Rad27 protein expression was not substantially altered in the mutated strains. See also Supplementary Fig. 7 and Supplementary Table 1 . Even without exogenous treatment, the quadruple glutamate mutant (Fig. 5a) showed growth inhibition resembling that of the active site mutant rad27-D179A. Replication stress induced by hydroxyurea greatly accentuated this effect. Moderate UV irradiation (100 J m −2 ) was strongly deleterious to both strains, and higher-dose irradiation (200 J m −2 ) further revealed UV sensitivity for the double glutamate (2E-1; R105EK130E) and quadruple alanine (4A) mutants (Fig. 5a). Thus, electrostatic interactions of the gateway/cap basic residues with DNA are critical for flap endonuclease biological function, with particularly deleterious effects on cells under replication stress and/or with damaged DNA. Rad27 inactivation in yeast stimulates expansion of trinucleotide repeats relevant to human disease 46 , 47 , 48 . We therefore tested the effect of phosphate steering mutations on expansion rates of (GAA) n repeats using our system (Fig. 5b), which contains a (GAA) 100 tract situated in the intron of a URA3 reporter gene 49 , 50 . Addition of 10 or more repeats to the (GAA) 100 tract effectively blocks splicing, resulting in gene inactivation and rendering the yeast resistant to 5-fluoroorotic acid (5-FOA). The repeat expansion rates in the rad27 knockout and in the severely catalytically impaired D179A active site metal ligand mutant were increased by ∼ 100-fold compared to wt (Fig. 5c,d). Strikingly for a non-active site mutant, the phosphate steering 4E mutant exhibited a quantitatively similar phenotype. The double glutamate (2E-1, 2E-2) and 4A mutants showed intermediate ( ∼ 10-fold) increases in repeat expansion rates. These results match the growth characteristics of these mutants and emphasize the role of electrostatic interactions of the gateway basic residues with DNA in repeat-mediated genome instability. Ligation of unprocessed 5′-flaps to the 3′-end of the approaching Okazaki fragment is proposed to cause the elevated repeat expansions in Rad27 mutants 48 , 51 , 52 . In this scenario, one expects added repeat lengths to be relatively short: less than the size of an Okazaki fragment. In fact, the major mutations caused by disruption of the RAD27 gene in yeast were repeat-related expansions of 5–108 bases 53 . Recently, the median size of the unprocessed 5′-flap in an S. pombe FEN1 knockout was measured as 89 nts 54 . Given these numbers, the median expansion size of GAA repeats in our experimental system should be ∼ 30 repeats in Rad27 mutants. To define the size distribution of expansion products, we measured the scale of repeat expansions in the wt and Rad27 mutant strains described above via PCR (Fig. 6a). In the wt strain, the median expansion size corresponded to 47 triplets 49 . The rad27 knockout was different: the median expansion size was 32 repeats, and a Kolmogorov-Smirnov (KS) comparison confirmed a significant difference from the wt strain ( P <0.001), which agrees with the known flap size in FEN1 knockouts 54 . The expansion scale in the near-catalytic-dead (D179A) and 4E Rad27 mutants lies between those of the wt and knockout strains: the median is 40 repeats, and the KS comparison shows a significant difference from wt ( P <0.05).
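The size-distribution comparisons above rely on two-sample Kolmogorov-Smirnov tests. The sketch below is purely illustrative (ours, with invented placeholder counts, not the measured data) of how such a comparison is run.

```python
# Illustrative sketch: two-sample Kolmogorov-Smirnov comparison of
# repeat-expansion size distributions. Counts below are made-up placeholders.
from scipy import stats

wt_added_repeats = [47, 52, 41, 60, 55, 38, 49]   # hypothetical colonies
ko_added_repeats = [32, 28, 35, 30, 26, 33, 31]

ks_stat, p_value = stats.ks_2samp(wt_added_repeats, ko_added_repeats)
print(f"KS statistic = {ks_stat:.2f}, P = {p_value:.3g}")
```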
Finally, the scale of expansions in the 2E and 4A mutants is greater than in wt, with medians from 50 to 66 added repeats. Thus, the 100-fold increase in expansions (Fig. 5c) in phosphate steering mutants cannot be explained by an increase in small-scale expansions alone (caused by simple 5′-flap ligation), but is a consequence of larger expansions. Hence, most expansions in the Rad27 phosphate steering mutants originate via mechanisms distinct from simple 5′-flap ligation (see Discussion). Overall, these Rad27 results suggest that functional phosphate steering of 5′-flaps and dsDNA is vital for genome integrity: in promoting normal growth, in response to DNA damage, and in preventing trinucleotide repeat expansions. Figure 6: FEN1 phosphate steering is essential for lagging strand precision at DNA repeats. (a) Graphed distributions of repeat expansion lengths show that the majority of expansions in the wt strain and in the phosphate steering and D179A Rad27 mutants are >30 repeats. The numbers of added repeats in each strain are shown as scatter plots alongside box-and-whisker plots with 5% and 95% whiskers. The numbers of colonies tested are given in parentheses. (b) Two models for repeat expansions driven by the presence of an unprocessed 5′-flap. In model 1 (left panel), the repeat on the 5′-flap ligates to the 3′-end of the oncoming Okazaki fragment, followed by its equilibration into a loop. After the next round of replication, up to ∼ 30 repeats can be added (see text for details). In model 2 (right panel), the 5′-flap folds back, forming a triplex, which blocks Pol δ DNA synthesis along the lagging strand template and promotes its switch to the nascent leading strand. This template switch mechanism explains the accumulation of large-scale repeat expansions of >30 repeats. Discussion We sought to understand the mechanism whereby FEN1 binds and precisely incises ss-dsDNA junctions yet excludes hydrolysis of continuous DNA substrates, reasoning that this specificity was key to FEN1 functions during replication and repair. These investigations resolve controversies and improve our understanding of how FEN1-DNA interactions provide specificity and genome stability. First, elucidation of a 5′-flap DNA threaded through the helical gateway/cap answers a longstanding question in eukaryotic FEN1 function and explains the selection of 5′-flap substrates with free 5′-termini. Although threading occurs in other enzymes, phosphate steering and inverted threading are extraordinary. For example, bacteriophage T5 5′-nuclease threads substrates 29 , but positions the 5′-flap primarily through hydrophobic interactions with the 5′-flap nucleobases. The phosphodiester is closer to the metals than the nucleobases, consistent with its lower incision site specificity and tendency to cleave within the ssDNA 5′-flap. In other enzymes, threading selects for free ss 5′-termini that will undergo incision, and there is no inversion. However, FEN1 preserves, rather than degrades, the threaded nucleic acid. Second, our results uncover an essential function in FEN1 specificity and catalysis for phosphate steering, which we define as electrostatic interactions that dynamically control the phosphodiester backbone.
The parallel effects of steering mutations on either endonucleolytic or exonucleolytic reactions (that is, on substrates with or without a 5′-flap) indicated involvement of basic gateway/cap residues in a rate-limiting step in the FEN1 catalytic pathway, that is, in moving the target phosphodiester bond from the ss-ds junction onto the catalytic metal ions. Thus, phosphate steering may act in orienting the ss 5′-flap during threading (negative design to avoid off-target reactions) and in moving the scissile phosphate into catalytic distance of the metals (positive design to enhance target reactions) ( Fig. 7a ). Notably, steering residue Arg104 is semiconserved throughout the superfamily, suggesting that phosphate positioning occurs in other members. Figure 7: Multiple motifs for FEN1 substrate recognition and hydrolysis ensure accurate incision activity and prevent genomic instability. ( a ) Schematic model of the FEN1 mechanism emphasizing the functional role of phosphate steering in the dynamic processes of 5′-flap inverted threading and shifting of the duplex DNA towards the catalytic metals. ( b ) Tumour-associated mutations from breast, lung, skin, kidney, colorectal, ovarian and testicular cancers map to functionally important structural motifs: dsDNA binding (P269L, L263H, R245G/W, R70L and R73G), 3′-flap binding (Leu53ins, A45V, S317F, E318V and R320Q), helical gateway/cap (I39T, Q112R and A119V) or active site (A159V). Full size image Third, the proposed requirement for double base unpairing for the dsDNA to reach the active site metal ions 3 needs re-evaluation. Our observation of basepaired DNA contacting an active site metal ion, with a water molecule positioned for in-line attack, would generate the arrangement for ‘two-metal-ion’ catalysis. This basepaired, catalytically competent conformation appears at odds with spectroscopic characterization of FEN1 and GEN1 substrate complexes 19 , 32 , 44 , and with the inability of FEN1 to process duplexes cross-linked at the terminal basepair 31 , 55 , both of which are consistent with an unpairing mechanism. Yet the DNA distortion seen in the structures here ( Supplementary Fig. 2E ) provides an alternative explanation: dsDNA can remain basepaired and roll onto the active site metal ions, aided by Tyr40 rotation and by positive side chains on the helical gateway and cap. Whereas replication fidelity is canonically based on sequence, it also depends on the sequence-independent specificity of FEN1. Importantly, structural elements critically involved in FEN1 function, including phosphate steering and inverted threading, require key residues distant from the active site metal ions. Indeed, clinically relevant FEN1 mutations compiled by The Cancer Genome Atlas (TCGA) and others 5 , 56 , 57 map to these structural elements ( Fig. 7b ). So, although tumour mutation data have been called ‘a bewildering hodgepodge of genetic oddities’ 58 , for FEN1 there is a clear link between structurally mapped mutations and compromised function, genomic instability and cancer. Although these mutant proteins may retain nuclease activity, even minimal off-target activity risks toxicity and genomic instability, and replication mutations account for two-thirds of the mutations in human cancers 59 . We uncovered a role for phosphate steering in triplet (GAA) n repeat expansions that also implicates template switching from the lagging strand owing to FEN1 defects. Most expansions in Rad27 phosphate steering mutants were large-scale (>30 repeats; Fig. 
6a ), which is difficult to explain by the canonical flap-ligation model for repeat instability 46 , 48 , 51 , 60 . In this model, an unprocessed 5′-flap is ligated to the 3′-end of the approaching Okazaki fragment ( Fig. 6b , left), limiting the length of expansions to the size of those flaps. Recently, the median size of the 5′-flap in a FEN1 knockout was found to be 89 nt 54 , that is, ∼ 30 triplet repeats. Since the median expansion size in phosphate steering mutants is >30 repeats, we propose that, besides the flap-ligation route, a template switch between nascent repetitive strands occurs as a replication fork stumbles through the repeat sequence 50 ( Fig. 6b , right). Unprocessed (TTC) n 5′-flaps of the Okazaki fragments may form a stable triplex 61 with the downstream repetitive run. This could block displacement synthesis by the lagging strand polymerase 62 and prompt it to search for a new template. Large-scale repeat expansions would then occur when the polymerase switches template, continuing DNA synthesis along the nascent leading strand. As a starting repeat gets longer, larger expansions become feasible, consistent with the progressive increase in expansion amplitudes with the length of the original repeat tract, as observed in human pedigrees 63 . The profound stimulation of large-scale expansions in the phosphate steering mutants unexpectedly sheds light on the molecular mechanism of template switching. A priori , either the nascent leading strand can switch onto the nascent lagging strand to use it as a template 49 , 50 , or the nascent lagging strand can switch onto the nascent leading strand serving as a template 47 . Since it is lagging strand synthesis, and specifically Okazaki fragment maturation, that is compromised in Rad27 mutants, the magnitude of their effects on large-scale repeat expansions implies that the lagging strand likely switches onto the nascent leading strand, accounting for the repeat instability; this merits further biochemical investigation. Another question emerges from biochemical studies of FEN1 function during long-patch base excision repair, where expansions occur upon dysregulation of DNA handoffs from polymerase β to FEN1 (ref. 64 ); it will be worth investigating whether phosphate steering also prevents expansions during long-patch repair. In summary, we find that FEN1 phosphate steering energetically promotes dsDNA rotation into the active site and inverted threading of the 5′-flap to enforce efficiency and fidelity in replication and repair. Interestingly, elevated FEN1 expression safeguards against repeat instability in somatic tissues 6 . Phosphate steering mutations could thus be trans -modifiers of repeat expansions during either somatic or intergenerational transmissions in human disease 65 . Moreover, as the basic residues implicated in phosphate steering are largely conserved in the 5′-nuclease superfamily, control over the +1 and −1 phosphates may be a superfamily-conserved mechanism. Methods Site-directed mutagenesis Plasmids for expression of mutant proteins were prepared from either the pET29b-hFEN1Δ336(wt) or pET28b-hFEN1-(His) 6 constructs, as indicated below, following the protocol outlined in the QuikChange site-directed mutagenesis kit (Agilent Technologies, Inc.). Mutagenic primers were purchased from Fisher Scientific, with desalting, then reconstituted in ultrapure water and used as supplied. 
Mutagenic primer sequences were as follows: D86N, 5′-ggcggcttgccattaaagacatacacgggctt-3′ and 5′-aagcccgtgtatgtctttaatggcaagccgcc-3′; R103A, 5′-caaacgcagtgaggcgcgggctgaggca-3′ and 5′-tgcctcagcccgcgcctcactgcgtttg-3′; R103E, 5′-ccaaacgcagtgaggagcgggctgaggcag-3′ and 5′-ctgcctcagcccgctcctcactgcgtttgg-3′; R104A, 5′-ctctgcctcagccgcccgctcactgcgt-3′ and 5′-acgcagtgagcgggcggctgaggcagag-3′; R104E, 5′-acgcagtgagcgggaggctgaggcagag-3′ and 5′-ctctgcctcagcctcccgctcactgcgt-3′; R129A, 5′-ttagtgaccttcaccagcgccttagtgaatttttccacctc-3′ and 5′-gaggtggaaaaattcactaaggcgctggtgaaggtcactaa-3′; R129E, 5′-ttagtgaccttcaccagctccttagtgaatttttccacctc-3′ and 5′-gaggtggaaaaattcactaaggagctggtgaaggtcactaa-3′; K132A, 5′-cactaagcggctggtggcggtcactaagcagcac-3′ and 5′-gtgctgcttagtgaccgccaccagccgcttagtg-3′; K132E, 5′-gctgcttagtgacctccaccagccgcttagt-3′ and 5′-actaagcggctggtggaggtcactaagcagc-3′; R103ER104E, 5′-gcttctctgcctcagcctcctcctcactgcgtttggcca-3′ and 5′-tggccaaacgcagtgaggaggaggctgaggcagagaagc-3′; R129EK132E, 5′-gtgctgcttagtgacctccaccagctccttagtgaatttttccacc-3′ and 5′-ggtggaaaaattcactaaggagctggtggaggtcactaagcagcac-3′. Protein expression Plasmids encoding R100AΔ336 and D233NΔ336 human FEN1 for crystallography were generated by site-directed mutagenesis from the pET29b-hFEN1Δ336(wt) construct bearing a PreScission protease site and (His) 6 -tag after residue 336 of the wt sequence 21 . Full length wt hFEN1 was encoded using the pET28b-hFEN1-(His) 6 vector reported previously 12 , and all reported mutants were generated from this by site-directed mutagenesis. Proteins were expressed in Rosetta (DE3)pLysS competent cells grown in 2 × YT media or Terrific Broth to an OD 600 of 0.6–0.8 at 37 °C then induced by addition of 1 mM IPTG, followed by incubation at 18 °C for 18–24 h. Cells were collected by centrifugation at 6,000 g /4 °C, washed with PBS, then resuspended in buffer IMAC-A1 (20 mM Tris pH 7.0, 1.0 M NaCl, 5 mM imidazole, 0.02% NaN 3 , 5 mM β-mercaptoethanol supplemented with SIGMA FAST protease inhibitor tablets and 1 mg ml −1 chicken egg white lysozyme). Each suspension was kept on ice for 2 h then stored frozen at −20 °C until further processing, as detailed below. Purification of hFEN1 D86NΔ336 and R100AΔ336 and D233NΔ336 All steps were carried out at 4 °C. Chromatography was on an ÄKTA system with flow rate of 5.0 ml min −1 unless stated otherwise. Columns were from GE Healthcare, unless stated otherwise. Frozen lysates were thawed on ice and homogenized by sonication. Next, 0.1 volume of a 10% v/v TWEEN 20 solution was added. The mixture was clarified by centrifugation at 30,000 g for 30 min. Supernatant was loaded onto a Ni-IDA affinity column, which was then washed with 5 column volumes (CV) of buffer IMAC-A1, 5 CV of buffer IMAC-A2 (20 mM Tris pH 7.0, 0.5 M NaCl, 40 mM imidazole, 0.02% NaN 3 , 0.1% v/v TWEEN 20, 5 mM β-mercaptoethanol). FEN1 was eluted with 5 CV of buffer IMAC-B1 (250 mM imidazole pH 7.2, 0.5 M NaCl, 0.02% NaN 3 , 5 mM β-mercaptoethanol). Pooled fractions were diluted 1:5 with water and then loaded onto a HiPrep Heparin FF 16/10 column. The column was washed with 5 CV buffer HEP-A1 (25 mM Tris pH 7.5, 1 mM CaCl 2 , 0.02% NaN 3 , 20 mM β-mercaptoethanol). FEN1 was eluted with a linear gradient of 100% HEP-A1 to 100% HEP-A2 (25 mM Tris pH 7.5, 1 mM CaCl 2 , 1.0 M NaCl, 0.02% NaN 3 , 20 mM β-mercaptoethanol) in 20 CV. Pooled FEN1 fractions were diluted by slow addition of two volumes of 3.0 M (NH 4 ) 2 SO 4 at 4 °C. The solution was loaded onto a HiPrep Phenyl FF (high sub) 16/10 phenylsepharose column. 
The column was washed with 7 CV buffer P/S-B1 (25 mM Tris pH 7.5, 2.0 M (NH 4 ) 2 SO 4 , 2 mM CaCl 2 , 0.02% NaN 3 , 20 mM β-mercaptoethanol). FEN1 was eluted with a gradient of 100% P/S-B1 to 100% P/S-A1 (25 mM Tris pH 7.5, 10% v/v glycerol, 1 mM CaCl 2 , 0.02% NaN 3 , 20 mM β-mercaptoethanol) in 20 CV. Pooled fractions were concentrated to ∼ 7 ml using an Amicon stirred cell (Merck Millipore), then passed through 5 × 5 ml HiTrap Desalting columns arranged in tandem, injected in 1.5 ml portions. The desalting columns were equilibrated in 1 × TBS supplemented with 1 mM EDTA and 1 mM DTT, and eluted with the same buffer. Combined protein-containing eluent (35–40 ml) was treated with PreScission protease (20 μl of activity 10 U μl −1 ) and incubated at 4 °C overnight. Complete cleavage of the (His) 6 tag was verified by SDS-PAGE, then the protein solution was concentrated to 5 ml using a Vivaspin 20 Centrifugal Concentrator (10,000 MWCO). A final purification step was carried out at a 0.5 ml min −1 flow rate on a Sephacryl S-100 HR column equilibrated with 2 CV of 2 × SB (100 mM HEPES pH 7.5, 200 mM KCl, 2 mM CaCl 2 , 10 mM DTT, 0.04% NaN 3 ). FEN1 fractions were pooled and the protein concentration determined by A 280 , using the calculated OD 280 . The solution was concentrated to >200 μM using a Vivaspin 20 Centrifugal Concentrator (10,000 MWCO). Finally, the solution was mixed 1:1 v/v with cold glycerol, placed on a roller mixer until homogeneous, then divided into 1 ml aliquots and stored as a 100 μM stock solution at −20 °C. Crystallography of mutant FEN1-DNA complexes hFEN1 mutants were crystallized with DF substrates (S5,1) or (S4,1) of slightly different sequence (desalted purity from IDT, Supplementary Fig. 1 ). hFEN1-D86NΔ336 (19 mg ml −1 ) was mixed in a volumetric ratio of 1:2:1 with 4.25 mM SmSO 4 and 1.3 mM substrate S5,1-D86N. This mixture was in turn combined 1:1 with 12% mPEG 2000, 20% saturated KCl, 5% ethylene glycol, 100 mM HEPES pH 7.5. Crystals were collected after 5 days at 15 °C. hFEN1-R100AΔ336 (19 mg ml −1 ) was mixed in a volumetric ratio of 1:2:1 with 3.75 mM SmSO 4 and 1.3 mM substrate S4,1-R100A. This mixture was in turn combined 1:1 with 22% mPEG 2000, 20% saturated KCl, 5% ethylene glycol, 100 mM HEPES pH 7.5. Crystals were collected after ∼ 3 weeks at 15 °C. hFEN1-D233NΔ336 (8.2 mg ml −1 ), with 1.6 mM SmSO 4 and 0.25 mM substrate S4,1-D233N, was mixed 1:1 with 24% mPEG 2000, 20% saturated KCl, 5% ethylene glycol, 100 mM HEPES pH 7.5. hFEN1-D86N data were collected at 0.98 Å (SSRL beamline 12-2) and processed with HKL2000. hFEN1-R100A data were collected at 0.98 Å (SSRL beamline 9-2) and processed with XDS. hFEN1-D233N data were collected at 1.12 Å (ALS beamline 12.3.1) and processed with HKL2000. hFEN1-D86N, hFEN1-R100A and hFEN1-D233N crystals diffracted to 2.8, 2.65 and 2.1 Å, respectively. Structures were solved by molecular replacement using PHASER 66 with human FEN1 protein as the search model and refined in PHENIX 67 with rounds of manual rebuilding in COOT 68 . For hFEN1-R100A, we refined the model using diffraction data extending to 2.1 Å, on the grounds that cutting off resolution at an arbitrary point leads to series termination errors. Flexible regions became more visible and we could follow the path of the 5′-flap more easily. The R and R free measures dropped substantially. We used a higher resolution structure (PDB code: 3Q8K) as a reference in refinement. For all three structures, anomalous differences from the Sm 3+ atoms were used in refinement and modelling. 
In the active sites of the hFEN1-D86N, hFEN1-R100A and hFEN1-D233N structures there were, respectively, one, three and four Sm 3+ atoms, with partial occupancy. For all structures, there were no Ramachandran outliers. For hFEN1-D86N, 95% of residues were favoured and 5% allowed. For hFEN1-R100A, 96% were favoured and 4% allowed. For hFEN1-D233N, 98% were favoured and 2% allowed. Structure figures were created in PyMol (Schrödinger, LLC). Movies were created in Chimera 69 . Protein purification of full-length FEN1 proteins All steps were carried out using an ÄKTA FPLC system at 4 °C, at a flow rate of 5.0 ml min −1 unless stated otherwise. Frozen/thawed lysates were loaded onto a Ni-IDA column, followed by washing with 4 CV buffer IMAC-A1, 4 CV buffer IMAC-A2, a gradient of 100% IMAC-A2 to 100% IMAC-B1 in 2 CV, then 4 CV IMAC-B1. Pooled fractions were diluted 1:1 with 20 mM β-mercaptoethanol and loaded onto a 5 ml HiTrap Q FF column to remove nucleic acid contamination, with a 20 CV elution gradient from 0 to 1.0 M NaCl in 20 mM Tris pH 8.0, 1 mM EDTA, 0.02% NaN 3 , 20 mM β-mercaptoethanol. The flow-through containing FEN1 was diluted 1:4 with 20 mM β-mercaptoethanol and passed through the HiPrep Heparin FF 16/10 column as above. The purified FEN1 was exchanged into 2 × SB using a HiPrep 26/10 Desalting column, concentrated and prepared for storage as detailed above. Proteins requiring further purification (wt hFEN1 and D233N) were passed through the HiPrep Phenyl FF (high sub) 16/10 phenylsepharose column, as above. Protein-containing fractions were pooled and concentrated to 5 ml using an Amicon stirred cell, subjected to gel filtration and prepared for storage as outlined above. Oligonucleotide synthesis The DNA oligonucleotides used for crystallization ( Supplementary Fig. 1 ) were purchased from IDT as desalted oligonucleotides. They were resuspended in 10 mM HEPES pH 7.5, 50 mM KCl, 0.5 mM EDTA and annealed at ∼ 1–2 mM. The DNA oligonucleotides used to construct the kinetic substrates ( Supplementary Fig. 1 ) were purchased from DNA Technology A/S (Denmark) with HPLC purification. Except for E1 and E2 ( Supplementary Fig. 1A ), the oligonucleotides as supplied were reconstituted in ultrapure water and the concentrations of stock solutions determined using calculated extinction coefficients (OD 260 ). Oligonucleotides E1 and E2 required additional HPLC purification, which was carried out using an OligoSep GC cartridge (Transgenomic; #NUC-99–3860) using buffers A (100 mM triethylammonium acetate pH 7.0, 0.025% v/v acetonitrile) and B (100 mM triethylammonium acetate pH 7.0, 25% acetonitrile) and a gradient of 5–50% B over 18 min, at 50 °C and a flow rate of 1.5 ml min −1 . Purified oligonucleotide in solution was loaded onto a 5 ml HiTrap DEAE FF column equilibrated with 3 CV of buffer C (10 mM Tris pH 7.5, 100 mM NaCl, 1 mM EDTA, 0.02% NaN 3 ). The column was washed with a further 3 CV of buffer C, then eluted using a step gradient of 100% buffer C to 100% buffer D (10 mM Tris pH 7.5, 1.0 M NaCl, 1 mM EDTA, 0.02% NaN 3 ) in 3 CV. Fractions containing DNA were desalted into ultrapure water using NAP-25 columns. Desalted samples were dried, then reconstituted as above. DNA constructs were annealed in 1 × FB (50 mM HEPES pH 7.5, 100 mM KCl) for at least 5 min at 95 °C, then left at ambient temperature for 30 min. 
FRET binding assay Values for K d were obtained by sequential titration of the appropriate enzyme into a 10 nM solution of the appropriate DNA construct, according to the reported protocol 44 . FRET efficiencies ( E ) were determined using the (ratio) A method by measuring the enhanced acceptor fluorescence at 37 °C. The steady-state fluorescence spectra of 10 nM non-labelled (NL) trimolecular, donor-only labelled (DOL) and doubly labelled (DAL) DNA substrates ( Supplementary Fig. 1A,B ) were recorded using a Horiba Jobin Yvon FluoroMax-3 fluorometer. For direct excitation of the donor (fluorescein, DOL) or acceptor (TAMRA, AOL), the sample was excited at 490 nm or 560 nm (2 nm slit width) and the emission signal collected from 515–650 nm or 575–650 nm (5 nm slit width). Emission spectra were corrected for buffer and enzyme background signal by subtracting the signal from the NL DNA sample. In addition to 10 nM of the appropriate DNA construct, samples contained 10 mM CaCl 2 or 2 mM EDTA, 110 mM KCl, 55 mM HEPES pH 7.5, 0.1 mg ml −1 bovine serum albumin and 1 mM DTT. The first measurement was taken before the addition of protein, with subsequent readings taken on the cumulative addition of the appropriate enzyme in the same buffer and corrections made for dilution. Transfer efficiencies ( E ) were determined according to equation (1):

E = [ (F_DA(λ_D^EX, λ_A^EM) − F_D(λ_D^EX, λ_A^EM)) / F_DA(λ_A^EX, λ_A^EM) − ε_A(490)/ε_A(560) ] × ε_A(560)/ε_D(490)   (1)

where λ_D^EX = 490 nm and λ_A^EX = 560 nm; F_DA and F_D represent the fluorescence signal of the DAL and DOL DNA at the given wavelengths, respectively (for example, F_DA(λ_D^EX, λ_A^EM) denotes the measured fluorescence of acceptor emission on excitation of the donor, for DAL DNA); ε_D and ε_A are the molar absorption coefficients of donor and acceptor at the given wavelengths; and ε_D(490)/ε_A(560) and ε_A(490)/ε_A(560) are determined experimentally from the absorbance spectra of DAL DNA and the excitation spectra of the singly TAMRA-labelled (AOL) DNA, respectively. Energy transfer efficiency ( E ) was fitted by non-linear regression in the Kaleidagraph program to equation (2):

E = E_min + (E_max − E_min) [PS]/[S]   (2)

where E_max and E_min are the maximum and minimum energy transfer values, [S] is the substrate concentration, [P] is the protein concentration and K_bend is the bending equilibrium dissociation constant of the protein-substrate [PS] complex, the concentration of which is given by the quadratic solution

[PS] = { ([P] + [S] + K_bend) − √( ([P] + [S] + K_bend)^2 − 4[P][S] ) } / 2

Donor (fluorescein) was excited at 490 nm with emission sampled as the average value of the signal between 515 and 525 nm, and acceptor (TAMRA) was excited at 560 nm with emission averaged between 580 and 590 nm. Multiple turnover rates Reaction mixtures (final volume 180 μl) were prepared in 1.5 ml microcentrifuge tubes with 50 nM final substrate concentration (S5,1; S0,1-5P; S0,1-5OH; or S0,1-5FAM) and incubated at 37 °C before addition of enzyme to initiate the reaction. The final composition of each reaction mixture was 1 × RB (55 mM HEPES pH 7.5, 110 mM KCl, 8 mM MgCl 2 , 0.1 mg ml −1 BSA) supplemented with 1 mM DTT. Enzyme concentrations were chosen to give ∼ 15% cleavage after 20 min, and any data points showing greater cleavage were discarded owing to effects of substrate depletion. For substrates S5,1 and S0,1-5FAM, aliquots (20 μl) of each reaction mixture were quenched into 250 mM EDTA (50 μl) at seven different time points (typically 2, 4, 6, 8, 10, 12 and 20 min) and reaction progress monitored by dHPLC analysis using a WAVE system equipped with an OligoSep cartridge (4.6 × 50 mm; ADS Biotec). 
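Returning briefly to the binding analysis above: the following minimal sketch (Python with NumPy/SciPy, not the authors' Kaleidagraph workflow) illustrates fitting equation (2) to a titration series to recover K_bend. All concentrations, the noise level and the "true" parameters are invented for illustration.

import numpy as np
from scipy.optimize import curve_fit

S_TOTAL = 10e-9  # substrate concentration, 10 nM, as in the assay above

def fret_binding(P, E_min, E_max, K_bend):
    # Equation (2): E = E_min + (E_max - E_min) * [PS]/[S], with [PS] the
    # physically meaningful root of the binding quadratic.
    b = P + S_TOTAL + K_bend
    PS = (b - np.sqrt(b**2 - 4.0 * P * S_TOTAL)) / 2.0
    return E_min + (E_max - E_min) * PS / S_TOTAL

# Hypothetical titration: total protein concentrations (M) and simulated
# transfer efficiencies with measurement noise
P_obs = np.array([0.0, 2, 5, 10, 20, 50, 100, 200]) * 1e-9
rng = np.random.default_rng(0)
E_obs = fret_binding(P_obs, 0.05, 0.45, 15e-9) + rng.normal(0, 0.01, P_obs.size)

popt, _ = curve_fit(fret_binding, P_obs, E_obs, p0=[0.05, 0.4, 10e-9],
                    bounds=([0.0, 0.0, 1e-12], [1.0, 1.0, 1e-6]))
print(f"fitted K_bend ~ {popt[2] * 1e9:.1f} nM")

Bounding K_bend to non-negative values keeps the square root in the quadratic real throughout the optimization.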
The 6-FAM label was detected by fluorescence (excitation 494 nm, emission 525 nm) and product(s) separated from unreacted substrate using the following gradient: 5–30% B over 1 min; 30–55% B over 4.5 min; 55–100% B over 1.5 min; 100% B for 1.4 min; ramp back to 5% B over 0.1 min; hold at 5% B for 2.4 min, where A is 0.1% v/v MeCN, 1 mM EDTA, 2.5 mM tetrabutylammonium bromide and B is 70% v/v MeCN, 1 mM EDTA, 2.5 mM tetrabutylammonium bromide 12 . Initial rates ( v , nM min −1 ) were determined by linear regression of plots of product concentration versus time and adjusted for enzyme concentration to give normalized rates ( v /[ E ], min −1 ). For analysis of exonucleolytic activity, reactions with substrates S0,1-5P and S0,1-5OH were run as above but quenched in 98% deionised formamide containing 10 mM EDTA. Time points and enzyme concentrations were selected to give 10–15% cleavage at the reaction end point (≥20 min). The quenched samples were analysed by capillary electrophoresis as detailed below, then rates determined and normalized as above. Analysis of reaction aliquots by capillary electrophoresis Capillary electrophoresis was performed with the P/ACE MDQ Plus system (Beckman Coulter) using the ssDNA 100-R Kit (AB SciEx UK Limited; #477480) according to the manufacturer’s instructions. Briefly, the supplied capillary (ID 100 μm, 30 cm long; 20 cm to detection window) was loaded with the commercially supplied gel using 70 psi of pressure for 5 min. The capillary was then equilibrated between two buffer vials containing Tris-Borate-Urea buffer (AB SciEx UK Limited; #338481) at 3, 5 and 9.3 kV for 2, 2 and 10 min, respectively, with a ramp time of 0.17 min. Samples were then run using a 5 s electrokinetic injection preceded by a 1 s plug injection of deionised water, before separation over 20 min with a voltage of 9.3 kV applied between two buffer vials; runs were carried out at 50 °C with constant pressure of 40 psi maintained on both sides of the capillary. The gel was replaced every five sample runs and the running buffer was replaced every 20 sample runs. Peak detection was by laser-induced fluorescence (LIF) using an excitation wavelength of 488 nm and a 520 nm filter to measure the emission. The electropherograms were integrated to determine the concentration of product formed at each time point. Initial rates of reaction ( v , nM min −1 ) were then obtained using linear regression, and converted to the reported normalized rates ( v /[ E ], min −1 ) as above. Single turnover rapid quench experiments Rapid quench experiments for determination of single turnover rates were carried out for wt hFEN1 and the mutants R104A, K132A, R103AR129A and D233N. Reactions were carried out at 37 °C using an RQF-63 device (HiTech Limited, Salisbury, UK) 12 , 70 . Premix stock solutions of enzyme and substrate were prepared at 2 × final concentration in reaction buffer (55 mM HEPES pH 7.5, 110 mM KCl, 8 mM MgCl 2 , 2.5 mM DTT and 0.1 mg ml −1 BSA) and kept on ice until use. For individual reactions, the two 80 μl sample injection loops of the instrument (lines A and B) were filled with aliquots of enzyme and substrate stock, respectively. The syringe feeding the quench line contained 1.5 M NaOH, 20 mM EDTA. Individual reactions were carried out using a controlled time delay of between 0.0091 and 51.241 s before quenching, with final concentrations of 5 nM substrate S5,1 and either 400 nM or 1,000 nM enzyme, as indicated ( Supplementary Fig. 4C,D ). 
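The initial-rate determination described above reduces to a linear regression of product concentration against time, normalized by enzyme concentration. A minimal sketch follows; the data points and enzyme concentration are simulated stand-ins, not the authors' measurements.

import numpy as np

t = np.array([2.0, 4, 6, 8, 10, 12, 20])                  # time, min
product = np.array([0.9, 2.1, 3.0, 4.2, 4.9, 6.1, 7.3])   # product, nM
substrate_0 = 50.0                                        # nM starting substrate
E_conc = 0.05                                             # nM enzyme (hypothetical)

# Discard points beyond ~15% conversion to stay in the initial-rate regime
keep = product <= 0.15 * substrate_0
v = np.polyfit(t[keep], product[keep], 1)[0]              # slope, nM min^-1
print(f"v = {v:.2f} nM min^-1; v/[E] = {v / E_conc:.1f} min^-1")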
Quenched reaction mixtures were analysed by dHPLC as described above for multiple turnover reactions, and rates were derived from curves consisting of at least 14 individual time points. The single turnover rate of the reaction was obtained as the first-order rate constant ( k ST ) derived using nonlinear least squares regression for a one- or two-phase exponential in GraphPad Prism 6.05 (GraphPad Software, Inc.). Model selection was by statistical analysis using Akaike’s Information Criterion (AIC). Benchtop single turnover experiments For the remaining proteins, the hFEN1 mutants R104AK132A, R103ER129E, R104EK132E, QUAD-E (R103E/R104E/R129E/K132E) and D181A, reactions to determine single turnover rates were carried out using manual sampling, as described for the multiple turnover reactions above, except using 5 nM substrate S5,1 and an enzyme concentration of either 400 nM or 1,000 nM, as indicated in each case ( Supplementary Fig. 4C,D ). A final reaction volume of 360 μl was used, permitting sampling of 14 time points per tube, which were typically chosen to span a reaction duration of at least 20 half-times. Quenched samples were analysed by dHPLC as detailed above, then single turnover rates were derived as described for the rapid quench experiments. Yeast strain construction To construct the individual yeast mutants, the hphMX4 hygromycin resistance marker was first integrated downstream of Rad27 , replacing genomic region ChrXI:224,681–224,712, in a strain containing the Ura3-(GAA) 100 cassette 49 derived from parent strain CH1585 (MATa leu2- Δ 1, trp1 -Δ 63, ura3–52 , and his3–200 ). The rate of (GAA) 100 expansion in this strain (designated Rad27-Hyg ) was indistinguishable from that of the wild-type strain not carrying the downstream hphMX4 cassette. Genomic DNA from Rad27-Hyg was used as a template for PCR with a ∼ 100 bp forward primer containing the specific mutations and a reverse primer downstream of the hphMX4 cassette. These PCR products were used to transform the wt (GAA) 100 strain with selection on 200 μg ml −1 hygromycin. Transformants were screened by PCR and/or restriction digest, and the full-length sequences of the mutated Rad27 alleles were verified by Sanger sequencing. The length of the starting (GAA) 100 tract in the mutant strains was confirmed by PCR using primers A2 (5′-CTCGATGTGCAGAACCTGAAGCTTGATCT-3′) and B2 (5′-GCTCGAGTGCAGACCTCAAATTCGATGA-3′). Yeast spot assay Fivefold serial dilutions were made from an equivalent starting number of cells for each strain. A 2.5 μl aliquot of each dilution was spotted onto YPD, YPD with 10 μg ml −1 camptothecin, or YPD with 100 mM hydroxyurea. For UV treatment, cells spotted onto YPD were immediately irradiated using a UV Stratalinker 1800 (Stratagene). Fluctuation assay and expansion rates At least two independent isolates of each yeast mutant were diluted from frozen stocks and grown for 40 h on solid rich growth medium (YPD) supplemented with uracil. Sixteen individual colonies (8 per isolate) were dissolved in 200 μl of water and serially diluted. Appropriate dilutions were plated on synthetic complete medium containing 0.09% 5-fluoro-orotic acid (5-FOA) to select for large-scale expansion events, or on YPD to assess total cell number. Colonies on each plate were counted after three days of growth. For each mutant, at least 96 representative 5-FOA colonies (8–12 per plate) were analysed for large-scale GAA expansion via PCR using primers A2 and B2, followed by agarose gel electrophoresis (1.5% agarose in 0.5X TBE). 
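The single-turnover fitting and model selection described earlier in this section can be sketched in a few lines: fit one- and two-phase exponentials and compare them with Akaike's Information Criterion. The data are simulated, and the least-squares AIC form used here is a generic stand-in, not necessarily Prism's exact implementation.

import numpy as np
from scipy.optimize import curve_fit

def one_phase(t, A, k):
    return A * (1.0 - np.exp(-k * t))

def two_phase(t, A1, k1, A2, k2):
    return A1 * (1.0 - np.exp(-k1 * t)) + A2 * (1.0 - np.exp(-k2 * t))

def aic(y, y_fit, n_params):
    # AIC for least-squares fits: n*ln(RSS/n) + 2k
    rss = np.sum((y - y_fit) ** 2)
    return y.size * np.log(rss / y.size) + 2 * n_params

t = np.geomspace(0.0091, 51.241, 14)          # s, spanning the quench delays
rng = np.random.default_rng(1)
y = 5.0 * (1.0 - np.exp(-1.8 * t)) + rng.normal(0, 0.08, t.size)  # nM product

p1, _ = curve_fit(one_phase, t, y, p0=[5.0, 1.0])
p2, _ = curve_fit(two_phase, t, y, p0=[3.0, 3.0, 2.0, 0.3], maxfev=20000)
better = ("one-phase" if aic(y, one_phase(t, *p1), 2) <= aic(y, two_phase(t, *p2), 4)
          else "two-phase")
print(f"k_ST (one-phase) = {p1[1]:.2f} s^-1; AIC prefers the {better} model")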
To determine a true expansion rate (as opposed to a gene inactivation rate), the number of 5-FOA-resistant colonies counted per plate was adjusted by the overall percentage of GAA expansion events observed for that mutant. Expansion rates were calculated using the Ma-Sandri-Sarkar maximum likelihood estimator method, with a correction for plating efficiency determined as ( z − 1)/( z ln z ), where z is the fraction of the culture analysed (Rosche and Foster, 2000). PCR product lengths for the calculation of GAA expansion size were determined using cubic spline interpolation in Total Lab Quant software. Kolmogorov-Smirnov comparison of expansion lengths between strains was conducted using SPSS software (non-parametric testing of independent samples). Genotype information for each strain used is shown in Supplementary Table 1 . Extraction of Rad27 proteins and western blotting Wt and mutant strains in mid-log phase (OD 600 0.6–0.8, 10 ml) were pelleted, washed with water and frozen. Pellets were resuspended in 150 μl of distilled water and mixed with an equal volume of 0.6 M NaOH, with a 10 min incubation at room temperature. After low-speed centrifugation (153 g ) for 5 min, the supernatant was removed and each pellet resuspended in SDS sample loading buffer (60 mM Tris-HCl pH 6.8, 4% β-mercaptoethanol, 4% SDS, 0.01% bromophenol blue, 5% glycerol). The samples were boiled for 3 min, then 10 μl of each was separated on a 4–12% SDS–PAGE gel (Invitrogen), followed by western blotting using an anti-RAD27 goat polyclonal antibody (1:125 dilution; Santa Cruz Biotechnology, #sc-26719) and a donkey anti-goat IgG-HRP secondary antibody (1:2,500; Santa Cruz Biotechnology; #sc-2020), with visualization using an ECL detection kit (GE Healthcare). A nonspecific band present in all lanes was used as a loading control ( Fig. 5d ). Data availability Coordinates and structure factors are deposited with the Protein Data Bank (PDB) under the accession codes 5UM9 (D86N), 5KSE (R100A) and 5K97 (D233N). The data that support the findings of this study are available from the corresponding authors on request. Additional information How to cite this article: Tsutakawa, S. E. et al . Phosphate steering by Flap Endonuclease 1 promotes 5′-flap specificity and incision to prevent genome instability. Nat. Commun. 8, 15855 doi: 10.1038/ncomms15855 (2017). Publisher’s note: Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
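To make the fluctuation-assay rate calculation above concrete, here is a minimal sketch of a Ma-Sandri-Sarkar maximum likelihood estimate with the ( z − 1)/( z ln z ) plating-efficiency correction. Colony counts, culture size and the expansion fraction are invented; this is not the authors' pipeline, which adjusted per-plate counts before estimation rather than scaling the estimate as done here.

import numpy as np
from scipy.optimize import minimize_scalar

def ld_pmf(m, n_max):
    # Luria-Delbrueck mutant-count probabilities via the MSS recursion:
    # p0 = exp(-m); pn = (m/n) * sum_{i=0}^{n-1} p_i / (n - i + 1)
    p = np.zeros(n_max + 1)
    p[0] = np.exp(-m)
    for n in range(1, n_max + 1):
        p[n] = (m / n) * sum(p[i] / (n - i + 1) for i in range(n))
    return p

def neg_log_lik(m, counts):
    p = ld_pmf(m, counts.max())
    return -np.sum(np.log(np.maximum(p[counts], 1e-300)))

counts = np.array([0, 1, 1, 2, 0, 3, 1, 0, 5, 2, 1, 0, 2, 1, 8, 0])  # 5-FOA-r colonies
N_cells = 2.0e7   # cells per culture (hypothetical)
z = 0.1           # fraction of each culture plated (hypothetical)
f_exp = 0.9       # fraction of 5-FOA-r events that are true expansions (by PCR)

m_obs = minimize_scalar(neg_log_lik, bounds=(1e-3, 50.0), args=(counts,),
                        method="bounded").x
m_corr = m_obs * f_exp * (z - 1.0) / (z * np.log(z))  # plating correction
print(f"expansion rate ~ {m_corr / N_cells:.2e} per cell per generation")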
The actions of a protein used for DNA replication and repair are guided by electrostatic forces known as phosphate steering, a finding that not only reveals key details about a vital process in healthy cells, but also provides new directions for cancer treatment research. The findings, published this week in the journal Nature Communications, focus on an enzyme called flap endonuclease 1, or FEN1. Using a combination of crystallographic, biochemical, and genetic analyses, researchers at the Department of Energy's Lawrence Berkeley National Laboratory (Berkeley Lab) showed that phosphate steering keeps FEN1 in line and working properly. "FEN1, like many DNA replication and repair proteins, has paradoxical roles relevant to cancer," said study lead author Susan Tsutakawa, a biochemist at Berkeley Lab's Molecular Biophysics and Integrated Bioimaging Division. "A mistake by FEN1 could damage the DNA, leading to the development of cancer. On the other side, many cancers need replication and repair proteins to survive and to repair DNA damaged from cancer treatments. New evidence shows that phosphate steering helps ensure that FEN1 behaves as it should to prevent genome instability." During replication, double-stranded DNA unzips to expose the nucleotides along its two separate strands. In that process, flaps of single-stranded DNA are created. The job of FEN1 is to remove those flaps by positioning metal catalysts so that they can break the phosphodiester bonds that make up the backbone of nucleic acid strands. This cleavage occurs in the duplex DNA near the junction with the single-stranded flap. Flaps that remain uncleaved can lead to toxic DNA damage that either kills the cell or causes extensive mutations. For example, trinucleotide repeat expansions, a class of mutation associated with disorders such as Huntington's disease and fragile X syndrome, are characterized by the failure of FEN1 to cut off the excess strand. The schematic at top illustrates how inversion of the DNA flap keeps the phosphodiester bonds away from the metal catalysts that could inadvertently cut the strand. The bottom view shows a single-stranded DNA flap passing through a small opening in the FEN1 protein, guided by electrostatic forces in the basic region. Credit: Susan Tsutakawa/Berkeley Lab "What had been unclear before our study was how FEN1 was able to identify its exact target while preventing the indiscriminate cutting of single-stranded flaps," said Tsutakawa. "There must be a way for this protein to not shred similar targets, such as single-stranded RNA or DNA. Getting that right is critical." Tsutakawa worked with corresponding author John Tainer, a Berkeley Lab research scientist and professor at the University of Texas, at the Advanced Light Source, a DOE Office of Science User Facility that produces extremely bright X-ray beams suitable for solving the atomic structure of protein and DNA complexes. Using X-ray crystallography, they obtained a molecular-level view of the FEN1 protein structure. They determined that the single-stranded flap threads through a small hole formed by the FEN1 protein. The size of the hole serves as an extra check that FEN1 is binding the correct target. Surprisingly, however, they found that the single-stranded flap is inverted, such that the more vulnerable part of the DNA, the phosphodiester backbone, faces away from the metal catalysts, thereby reducing the chance of inadvertent incision. 
The inversion is guided by a positively charged region in FEN1 that stabilizes the upside-down position and steers the negatively charged phosphodiester of the single-stranded DNA through the FEN1 tunnel. "These metals are like scissors and will cut any DNA near them," said Tsutakawa. "The positively charged region in FEN1 acts like a magnet, pulling the flap away from these metals and protecting the flap from being cut. This is how FEN1 avoids cutting single-stranded DNA or RNA." "This phosphate steering is a previously unknown mechanism for controlling the specificity of FEN1," she added. "Cancer cells need FEN proteins to replicate, so understanding how FEN1 works could help provide targets for research into treatments down the line."
10.1038/ncomms15855
Earth
New study shows Arctic warming contributes to drought
Cody C. Routson et al, Mid-latitude net precipitation decreased with Arctic warming during the Holocene, Nature (2019). DOI: 10.1038/s41586-019-1060-3 Journal information: Nature
http://dx.doi.org/10.1038/s41586-019-1060-3
https://phys.org/news/2019-03-arctic-contributes-drought.html
Abstract The latitudinal temperature gradient between the Equator and the poles influences atmospheric stability, the strength of the jet stream and extratropical cyclones 1 , 2 , 3 . Recent global warming is weakening the annual surface gradient in the Northern Hemisphere by preferentially warming the high latitudes 4 ; however, the implications of these changes for mid-latitude climate remain uncertain 5 , 6 . Here we show that a weaker latitudinal temperature gradient—that is, warming of the Arctic with respect to the Equator—during the early to middle part of the Holocene coincided with substantial decreases in mid-latitude net precipitation (precipitation minus evapotranspiration, at 30° N to 50° N). We quantify the evolution of the gradient and of mid-latitude moisture both in a new compilation of Holocene palaeoclimate records spanning from 10° S to 90° N and in an ensemble of mid-Holocene climate model simulations. The observed pattern is consistent with the hypothesis that a weaker temperature gradient led to weaker mid-latitude westerly flow, weaker cyclones and decreased net terrestrial mid-latitude precipitation. Currently, the northern high latitudes are warming at rates nearly double the global average 4 , decreasing the Equator-to-pole temperature gradient to values comparable with those in the early to middle Holocene. If the patterns observed during the Holocene hold for current anthropogenically forced warming, the weaker latitudinal temperature gradient will lead to considerable reductions in mid-latitude water resources. Main The response of mid-latitude climate to Arctic warming is poorly understood, in part because of a lack of long-term observational data 7 . There is evidence that the strength of the latitudinal temperature gradient (LTG) influences the position, strength and meridionality of mid-latitude jet streams and storm tracks (Fig. 1 ) 1 , 2 , 8 , 9 . Connections between the LTG and the mid-latitudes, however, may be nonlinear 6 , and attribution of recent changes in mid-latitude climate to Arctic warming remains a topic of active research 5 . A better understanding of links between Arctic amplification, the LTG and hemispheric circulation would have important implications for characterizing future variability in the mid-latitude hydroclimate. Fig. 1: Conceptual diagram. a , Cold high-latitude temperatures lead to a strong temperature gradient between the Equator and the pole, a stronger subtropical jet, and enhanced mid-latitude moisture transport and net precipitation. b , Warming the high latitudes reduces the LTG, and is coincident with weaker Hadley circulation, weaker westerly jets, and decreased mid-latitude moisture transport and net precipitation. Full size image Mid-latitude weather is largely shaped by extratropical cyclones, which form in regions of maximum baroclinic instability related to the LTG 1 , 10 . One hypothesis supported by theory, observations and climate models is that Arctic warming weakens the LTG and reduces zonal mid-latitude westerly winds through the thermal wind relationship 3 , 8 , 11 , 12 . The weakened LTG reduces the baroclinic potential energy that fuels storm systems, reducing mid-latitude cyclone frequency and intensity 3 , 10 , and thus reducing annual net precipitation at mid-latitudes. Palaeoclimate archives spanning the Holocene provide an opportunity to evaluate the impact of Arctic warming on the LTG and mid-latitude hydroclimates. 
Some model results suggest that annual LTG changes, driven annually by obliquity and seasonally by precession, would have favoured a Holocene trend towards increasing mid- to high-latitude storm activity 3 . Annual insolation peaked around 10,000 years ago (10 ka) in the Arctic, with maximum warming occurring about 7 ka (ref. 13 ). Insolation and temperatures have subsequently declined faster at high latitudes than at the Equator (Extended Data Fig. 1a ) 14 , providing a natural baseline for assessing the relationship between the evolution of the surface LTG and mid-latitude hydroclimates. In this study, we examine an extensive dataset of multi-proxy time series for the Holocene climate (Fig. 2 ). We explore the evolution of the LTG at three temporal scales from different but temporally overlapping datasets: the past 100 years (ref. 15 ), 2,000 years (ref. 16 ) and 10,000 years. We apply the new global compilation of 2,000-year palaeotemperature records from PAGES 2k 16 to bridge between instrumental data and Holocene-long temperature reconstructions. We compare the Holocene LTG evolution with that of mid-latitude hydroclimate between 30° N and 50° N, a region that is strongly influenced by extratropical cyclones and that encompasses extensive dry-land farming and large population centres vulnerable to hydroclimate change. We then use an ensemble of mid-Holocene (6 ka) PMIP3 simulations to explore the mechanistic framework and seasonality of the changes, and to compare with the proxy data. Fig. 2: Spatial and temporal distribution of Holocene proxy records. a , Proxy temperature records. b , Mid-latitude (30° N to 50° N) proxy hydroclimate records. Abbreviations for proxy types include: ratio of nitrogen-15 isotopes/argon-40 isotopes ( 15 N/ 40 Ar), glycerol dialkyle glycerol tetraethers (GDGT), long-chain diol index (LDI), tetraether index of 86 carbons (TEX 86 ), magnesium/calcium ratio (Mg/Ca), tree-ring width (TRW), carbon-13 isotopes (δ 13 C), oxygen-18 isotopes (δ 18 O), loss on ignition (LOI), ash content of peat (peat ash), ratio of strontium and calcium (Sr/Ca), deuterium isotopes of leaf wax (δD), and records composed of two or more proxy types (hybrid) 29 . The maps were generated using code and associated data from ref. 30 . A list of sites with metadata, including references for each record, is in Supplementary Tables 1 and 2 . Full size image Gridded instrumental TS4.01 data 15 from the Climatic Research Unit (CRU) show that historical LTGs have weakened over the past century by about 0.02 °C per degree of latitude (Extended Data Fig. 2a ). The LTGs derived from the PAGES 2k network show recent LTG reductions, consistent with the instrumental observations, and they place the historical trend within a millennial-scale context (Extended Data Fig. 2b ). The Holocene analysis is focused on the postglacial period starting at 10 ka when Northern Hemisphere ice-sheet area had diminished to 25% of its full-glacial extent, and global atmospheric CO 2 concentration (265 ppm) and mean surface temperature reached or exceeded preindustrial values (summarized by ref. 17 ). The Laurentide Ice Sheet persisted until about 7 ka (ref. 18 ) and probably affected temperature and circulation regionally. Because of this, and because the spatial coverage of proxy sites is insufficient to detect spatial variability at fine scales, we focus our analysis on zonal averages. Our Holocene temperature analysis uses 236 records from 219 sites including 16 proxy types collected from six archive types (Fig. 
2a ; Supplementary Table 1 ). Archive types include lake sediment ( n = 109), marine sediment ( n = 116), ice cores ( n = 3), peat bogs ( n = 3), speleothems ( n = 2) and tree rings ( n = 3). Alkenones and Mg/Ca are the dominant proxies for sea-surface temperature, whereas pollen- and chironomid-based reconstructions dominate the terrestrial temperature proxies. Temporal availability of temperature records is relatively uniform between 8 ka and 2 ka, with the maximum number of records available at about 5 ka. Sensitivity tests with instrumental and model data (see Methods ) show that our network accurately represents Northern Hemisphere temperature variability. The 72 mid-latitude hydroclimate records from 68 sites include five archive types (lake sediment ( n = 58), marine sediment ( n = 2), lagoon sediment ( n = 1), peat bogs ( n = 4) and speleothems ( n = 7)) and 17 proxy types (Fig. 2b ; Supplementary Table 2 ). Dominant proxy types include pollen, lake-level stratigraphy, oxygen isotopes and diatoms. Monsoon records were excluded from the hydroclimate analysis to help to isolate the influence of the LTG on large-scale circulation and precipitation. On the basis of the original authors’ interpretations, 45% of the temperature records represent mean-annual conditions, 48% summer and 7% winter. Mid-latitude hydroclimate records are 54% annual, 3% summer, 15% winter, 20% spring or fall, and 8% unspecified. For sites that include both season-specific and mean-annual reconstructions, the annual series was used. However, the mixture of seasonality influences our interpretations to some extent. We assume that over centennial to millennial timescales, temperature changes represented by different proxy types, including surface air, ocean surface and lake surface temperatures, generally co-vary, and we combine them in the following analysis. Holocene temperature histories for the Northern Hemisphere differed considerably by latitude (Fig. 3 ). Peak warmth in the polar region (70° N to 90° N, Fig. 3a ) occurred in the earliest Holocene, followed by a Holocene cooling trend. The greater variability reflects the limited number of records ( n = 20) contributing to the polar composite. The high-latitude (50° N to 70° N, Fig. 3b ) composite integrates 103 records. Holocene peak warmth occurred at about 7 ka. Later high-latitude peak warming may reflect the persistent influence of ice sheets, although the polar region had an earlier peak warming. Gradual cooling dominated the high latitudes after about 6 ka. The mid-latitude composite (30° N to 50° N, n = 65, Fig. 3c ) warmed to about 8 ka followed by cooling. Low-latitude (10° N to 30° N, n = 22, Fig. 3d ) and equatorial (10° S to 10° N, n = 26, Fig. 3e ) temperatures were stable or warmed slightly over the past 10 ka. Fig. 3: Northern Hemisphere latitudinal climate. a – e , Temperature composites by latitude. The numbers of contributing records are shown in grey (200-year bins). Temperature composites are all displayed on the same scale, with each y axis spanning 3 °C. Shading represents the 95% bootstrapped uncertainties, which integrate age and calibration uncertainty estimates (see Methods ). The Holocene composites show little long-term change in the equatorial regions, and greater low-frequency variability and trends in the middle to high latitudes. 
f , LTG in the Northern Hemisphere, calculated through three different methods including regression across temperatures averaged into 20° zonal bands (black), regression on all records (red), and high latitudes minus low latitudes (purple). The LTG estimates have been smoothed with a three-bin (600-year) moving window for comparison. Diamonds show the PMIP3 multi-model median for the simulated annual LTG at 6 ka and preindustrial, calculated using regression across 20° latitudinal averaged temperatures (black). g , Annual latitudinal insolation gradient for the Northern Hemisphere. h , Standardized, average mid-latitude (30° N to 50° N) net precipitation, with the number of contributing records in grey (200-year bins). Shading in f and h represents the one- and two-standard deviation sample, age and calibration bootstrapped uncertainty intervals (see Methods ). The vertical dashed line at 8 ka in f – h shows when the Laurentide Ice-Sheet area and potential influence on circulation was largely reduced (Extended Data Fig. 3 ) 18 . The weakest temperature gradients in the early to middle Holocene correspond to the period of maximum Holocene aridity. Source Data Full size image Holocene LTGs (Fig. 3f ) were weakest in the early to middle Holocene, when polar and high-latitude temperature anomalies were warmer (Fig. 3a,b ) relative to the low latitudes (Fig. 3d,e ). Proxy calibrations may underestimate the amplitude of Holocene temperature change (for example ref. 19 ); however, three methods for calculating the LTG (see Methods ) all show that the LTG strengthened after the middle Holocene (Fig. 3f ). Stable Holocene equatorial temperatures largely constrained the competing influence of tropical warming on the LTG and circulation (summarized by ref. 1 ). The millennial-scale LTG trend (different from zero with P < 0.0001) tracks changes in the annual latitudinal insolation gradient (Fig. 3g ), which we hypothesize was the primary driver of Holocene LTG changes. Similar to the LTG, mid-latitude net precipitation exhibits a strong Holocene trend. The driest mid-latitude conditions occurred in the early to middle Holocene, followed by a Holocene-length wetting trend (Fig. 3h ). Hydroclimate records published in calibrated units (for example, mm yr −1 , n = 15) indicate an average increase in annual net precipitation of 145 mm (93–187 mm, 5%–95% confidence interval, CI) since 8 ka, and 272 mm (130–312 mm) since about 10 ka. For the region best represented by the calibrated records (mid-latitude North America where annual precipitation equals 500–1,300 mm), the changes are equivalent to an increase of 11–29% in annual precipitation since 8 ka. Regression (Extended Data Fig. 3 ) indicates that the change corresponds to a net precipitation increase of 16.8 ± 7 mm per decrease of 0.01 W m −2 per degree of latitude in the annual latitudinal insolation gradient after the ice-sheet effect diminishes at about 8 ka. Early Holocene amplification of the dryness may be attributed to ice-sheet effects 20 . We used a PMIP3 climate model ensemble to test the hypothesis that a weaker LTG reduces mid-latitude rainfall (Fig. 1 ). The mid-Holocene (6 ka) minus preindustrial mean of 12 models shows that the LTG change was 0.011 (95% CI 0.003–0.017) °C per degree of latitude. This compares with the LTG change of 0.015 (95% CI 0.0004–0.031) °C per degree of latitude calculated from the palaeotemperature dataset for 6 ka. 
The modelled 6 ka simulations show that the decreased LTG is associated with reduced Hadley circulation, reduced jet-stream strength and reduced large-scale, mid-latitude precipitation (Fig. 4a,b ). The large-scale component of precipitation reflects precipitation from large-scale convergence and lifting, and partially excludes monsoon-related rainfall. This allows for a more analogous comparison with the hydroclimate proxy network, which excludes monsoon records. If total precipitation or precipitation-evaporation is examined instead (Extended Data Figs. 4g,i , and 5c ), the relationship persists for at least half of the models. Fig. 4: Mid-Holocene (6 ka) PMIP3 modelled circulation and large-scale precipitation anomalies relative to preindustrial. a , Nine-model mean circulation anomalies showing annual-mean changes in meridional and vertical motion (vectors; the scale for vertical motion has been increased to aid viewing) and zonal winds (contours; m s −1 ). Anomalies show weakened annual Hadley circulation and reduced zonal wind speed during the mid-Holocene. b , Relation between the LTG and large-scale precipitation (that is, precipitation corresponding to large-scale convergence and lifting) for 12 PMIP3 models. Blue symbols indicate zonal-mean values, with the temperature gradient calculated over 10° S to 90° N, and precipitation calculated over 30–50° N. Red symbols show averages calculated using model data at proxy locations only. Negative values on the x -axis indicate a reduced LTG at 6 ka relative to preindustrial. Three models shown in b (CSIRO-Mk3L-1-2, FGOALS-g2 and GISS-E2-R) lacked the output fields necessary for inclusion in the multi-model means in a and Extended Data. NH, Northern Hemisphere. Full size image We interpret the seasonality of circulation and precipitation change as follows. Annual high-latitude insolation was increased during the early to middle Holocene but was especially enhanced in summer (Extended Data Fig. 1 ) 14 . The PMIP3 simulations show reduced jet-stream strength and reduced mid-latitude, large-scale precipitation throughout the year, with the largest anomalies in summer (Extended Data Fig. 4 ). The largest summertime response is congruent with recent work that shows that changes in the summer LTG have a strong impact on summer cyclogenesis 21 , and congruent with the tendency of many proxies to track summer conditions more closely (for example ref. 22 ; see Methods ). We hypothesize that enhanced high-latitude summer insolation led to enhanced warming both at the surface and aloft. Warmer summers led to reduced sea-ice extent, a longer ice-free season and thinner sea ice, increasing the heat fluxes between the ocean and atmosphere that propagated into the cold season as observed during recent Arctic amplification 4 and in Holocene model simulations 23 . Arctic warming would have reduced the LTG, subsequently reducing the strength and frequency of storms for much of the year, and thereby reducing annual net precipitation at mid-latitudes. This mechanistic framework is consistent with model results showing that a strengthening LTG driven by obliquity and seasonally by precession would favour a Holocene trend towards increasing mid- to high-latitude storm activity 3 . Several alternative hypotheses have been proposed about Holocene circulation and potential links to changes in temperature and insolation gradients 9 , 24 . 
For example, changes in the insolation gradient have been linked to stronger wintertime westerlies over Europe 9 and weaker summertime westerly flow 24 . A predominantly positive mid-Holocene winter North Atlantic Oscillation has been suggested 25 , 26 , and a cooler tropical Pacific Ocean may have caused North American aridity 27 . These hypotheses are still debated, and there is no clear consensus on Holocene circulation changes 24 , 28 . Nevertheless, the reduced mid-latitude net precipitation inferred from the proxy records (Fig. 3h ), and the reduced westerlies and precipitation shown by the PMIP3 models (Fig. 4 ), are not consistent with stronger westerlies suggested by some of those studies. Recent and projected warming and drying feedbacks 12 suggest an alternative hypothesis that early to middle Holocene mid-latitude aridity was caused by higher summer insolation driving enhanced evaporation, rather than by changes in storm tracks and circulation. Precipitation and evaporation-induced drying were most likely complementary, which is supported in the PMIP3 models (Extended Data Fig. 4h ). The PMIP3 models indicate that mid-latitude drying (~40°–55° N) was dominated by reductions in precipitation (Extended Data Fig. 5c ). However, the simulated magnitude of drying is small (<0.05 mm d −1 ; Extended Data Fig. 5c ) compared with the magnitudes reconstructed using the calibrated datasets (>0.3 mm d −1 at 6 ka; Extended Data Fig. 3b ). The driest period of the Holocene coincided with cool conditions when ice sheets still persisted in the high latitudes, indicating that circulation change was likely to be a dominant driver of aridity at that time. In summary, the proxy climate data presented here show that a reduced early- to middle-Holocene LTG coincided with substantial decreases in mid-latitude net precipitation. More work and additional records are needed to fully resolve regional and sub-regional variability; however, an ensemble of PMIP3 models is consistent with the proxy evidence and shows a weaker mid-Holocene LTG corresponded with reduced jet-stream strength and reduced mid-latitude precipitation. Current and future conditions are more complex than during the Holocene because greenhouse gases are forcing a larger mean temperature change and have multiple and competing influences on circulation 1 , 12 . Nevertheless, it is reasonable to assume that the relationship between the surface LTG and circulation holds, offering a framework to help in understanding the impact of atmospheric dynamics on both past and future changes. Methods Holocene dataset Proxy records for Holocene temperature were selected that span a minimum duration of 4,000 years since 10 ka, with an average sample resolution finer than 400 years and age control points at least every ~3,000 years. Records were compiled from datasets in refs. 13 , 29 , 31 , 32 , NOAA-WDS Palaeoclimatology and PANGAEA data libraries, in addition to individual records not previously stored in public archives (Supplementary Tables 1 and 2 ). Temperature records were compiled between 10° S and 90° N, and only records previously calibrated to temperature units (degrees) were used in this analysis (that is, we did not include uncalibrated temperature records). Holocene hydroclimate patterns were characterized using proxy records that met the above age and resolution criteria. 
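The record-selection criteria above translate directly into a simple screening step. The following minimal sketch (Python; record structures hypothetical, not the authors' code) checks the stated thresholds: at least 4,000 years of coverage since 10 ka, mean resolution finer than 400 years, and age control points at least every ~3,000 years.

import numpy as np

def passes_criteria(ages_yr_bp, age_control_yr_bp):
    """ages_yr_bp: sample ages; age_control_yr_bp: dated horizons (years BP)."""
    a = np.sort(np.asarray(ages_yr_bp, dtype=float))
    a = a[a <= 10_000]                       # restrict to the last 10 ka
    if a.size < 2 or (a.max() - a.min()) < 4_000:
        return False                         # insufficient duration
    if np.mean(np.diff(a)) >= 400:
        return False                         # resolution too coarse
    c = np.sort(np.asarray(age_control_yr_bp, dtype=float))
    if c.size < 2 or np.max(np.diff(c)) > 3_000:
        return False                         # age control too sparse
    return True

# Example: 200-yr sampling with dated horizons every 2,000 yr passes
print(passes_criteria(np.arange(0, 10_001, 200), np.arange(0, 10_001, 2_000)))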
We included both calibrated records (for example, mm yr −1 of precipitation) and uncalibrated records (for example, δ 18 O of lacustrine calcite) that were interpreted by the original authors as sensitive to changes in hydroclimate such as precipitation, precipitation minus evaporation, lake level and drought severity. Mid-latitude hydroclimate patterns were calculated from sites located between 30° N and 50° N. Monsoon records were excluded from the analysis to isolate the trends in hydroclimate sourced primarily from extratropical cyclones and the associated flow of westerly winds. Data normalization and binning Methods for analysing the different datasets, including instrumental temperature 15 , 2,000-year (2k) temperature 16 , Holocene temperature and Holocene hydroclimate, are summarized in Supplementary Tables 1 and 2 . MATLAB code used to map the dataset ( Fig. 2 ) and compute composites ( Fig. 3 ) was modified from ref. 33 . The mean for 10–0 ka (or the entire record-length mean if shorter) was subtracted from each record. Hydroclimate proxies were also standardized to unit variance over the period 10–0 ka (or over the entire record length if shorter), so that relative hydroclimate changes can be compared between climatologically diverse regions (for example, ref. 34 ). Holocene temperature and hydroclimate records were binned to 200-year resolution by averaging the measurements within 200-year intervals. 2k temperature records 16 were binned to 20-year resolution, and historic CRU TS4.01 data 15 were averaged to annual (1-year) resolution. Producing composites Composite time series were used to characterize Holocene temperature changes and mid-latitude net precipitation. Temperature records were composited across five 20° zonal bands between 10° S and 90° N. Mid-latitude hydroclimate records were composited between 30° N and 50° N. Composites were generated using an equal-area grid (for example, Extended Data Fig. 6f ) to reduce the influence of clustered sites on the composites, especially at the higher latitudes. The equal-area grid was generated in MATLAB following methods developed by ref. 35 . Records inside each grid cell were averaged and then the grid cells were averaged; the median was used to reduce the influence of outliers. Composite uncertainties were estimated with a bootstrap sampling approach to develop a probability distribution from the available data and associated uncertainties 36 . Uncertainties arising from the spatial and temporal distribution of available records were characterized using sampling with replacement. Age uncertainties were estimated as a normal distribution with a 10% standard deviation for each sample measurement in every record. Age uncertainty was applied by multiplying the age of each sample within each record by a random number drawn from a normal distribution with a mean of 1 and a standard deviation of 0.1 in each of 500 bootstrapped iterations. A more formal treatment of the age uncertainty would require an analysis of each record’s age model, which is not available for many records in the database; nonetheless, our accounting for the age uncertainty is a conservative estimate of the likely effect. Additionally, we estimated temperature calibration uncertainty as a normal distribution for each measurement, with a standard deviation by proxy as follows: 1.7 °C for chironomid, 1.1 °C for alkenone, 1 °C for pollen, 0.3 °C for ice and 1.5 °C for other proxy types. 
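One iteration of the bootstrap procedure just described can be sketched as follows (simplified: the equal-area gridding is omitted and the record structures are hypothetical); the 500-fold repetition and compositing are described immediately below.

import numpy as np

CAL_SD = {"chironomid": 1.7, "alkenone": 1.1, "pollen": 1.0, "ice": 0.3}
EDGES = np.arange(0, 10_001, 200)  # 200-yr bins spanning 10-0 ka

def bin_record(ages, values):
    # Average measurements within each 200-yr bin
    idx = np.digitize(ages, EDGES) - 1
    out = np.full(EDGES.size - 1, np.nan)
    for b in range(EDGES.size - 1):
        sel = idx == b
        if sel.any():
            out[b] = values[sel].mean()
    return out

def bootstrap_iteration(records, rng):
    # records: list of (ages_yr_bp, temps_degC, proxy_type) tuples
    resampled = rng.choice(len(records), size=len(records), replace=True)
    rows = []
    for j in resampled:
        ages, temps, proxy = records[j]
        ages = ages * rng.normal(1.0, 0.1, ages.size)        # age uncertainty
        temps = temps + rng.normal(0.0, CAL_SD.get(proxy, 1.5), temps.size)
        rows.append(bin_record(ages, temps - temps.mean()))  # anomaly, binned
    return np.nanmedian(np.vstack(rows), axis=0)             # composite

rng = np.random.default_rng(42)
demo = [(np.linspace(0, 10_000, 60), rng.normal(8, 1, 60), "pollen"),
        (np.linspace(0, 9_000, 40), rng.normal(6, 1, 40), "alkenone")]
composite = bootstrap_iteration(demo, rng)  # repeat 500x for uncertainty bands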
Records were then composited and the above process repeated over 500 iterations to generate a probabilistic distribution of composites. Scaling the temperature composites Many of the Holocene temperature records were originally published as temperature anomalies rather than as absolute temperatures, which are needed to calculate hemispheric LTGs. We applied the following shingled approach, using modern CRU TS4.01 observations 15 and the PAGES 2k temperature network 16 , to scale the Holocene temperature records through their overlapping means. The largely uncalibrated 2k network data were composited by 20° zonal bands between 10° S and 90° N (Extended Data Fig. 7 ) and converted to temperature units by scaling the overlapping data to match the instrumental mean and variance of the CRU TS4.01 data 15 . The CRU TS4.01 temperatures were weighted by the cosine of latitude to account for the smaller surface area of the high-latitude bands and composited into corresponding 20° zonal bands. The means for 500–1,500 years bp (450–1450 ce ) of the individual Holocene records and of the 20° zonal composites were then scaled to the overlapping mean 2k composites. Individually scaled records and scaled composites were used in separate methods for calculating the LTG, as described below. The variance of the Holocene temperature records did not require scaling because only Holocene records previously calibrated to temperature units were used. Calculating temperature gradients Northern Hemisphere LTGs between the colder high latitudes and warmer low latitudes were calculated using three methods. The first method was applied to (1) the twentieth century using CRU TS4.01 data 15 , (2) the past 2,000 years using the PAGES 2k temperature network 16 and (3) the entire Holocene using our new compilation of Holocene-length temperature reconstructions. This method relied on weighted linear regression across temperatures composited for five 20° zonal bands between 10° S and 90° N (Fig. 3a–e and Extended Data Fig. 8a ). The 20° width of each band provided enough proxy records to generate relatively stable Holocene (Fig. 3a–e ) and 2k (Extended Data Fig. 7 ) temperature estimates, while also representing a broad meridional temperature range. Narrower bands (10° and 15° wide) were also tested but resulted in too few records to generate robust composites, especially at the highest latitudes. Simulated PMIP3 gradients (Fig. 3f ) were also calculated using the above method, but no scaling was required because the model data were already in native temperature units. The second method applied regression to the distribution of individual Holocene temperature records rather than zonal composites (Extended Data Fig. 8b ). Each record was scaled to its respective latitude, again using a shingled approach. First, CRU TS4.01 gridded observations 15 were used to scale the PAGES 2k network 16 , which was then used to scale the overlapping mean of the individual Holocene records. Only Holocene records with data in the interval 500 to 1,500 bp were used so that they could be scaled to the overlapping portion of the 2k network. Many of the low-resolution 2k records had insufficient overlap with the instrumental period to scale their variance using the instrumental data individually. Instead, the 2k records were composited by 20° latitude bands, and interpolation was used to calculate the latitudinal temperatures to scale the individual Holocene records based on their latitude.
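The two shingled scaling steps described above reduce to matching moments over an overlap window. A minimal Python sketch (the authors worked in MATLAB); the boolean-mask interface is an assumption:

```python
import numpy as np

def scale_to_reference(series, ref, overlap):
    """Rescale a composite so its mean and variance over the overlap window
    match the reference (as done for 2k composites against CRU TS4.01).
    'overlap' is a boolean mask selecting the common interval."""
    s, r = series[overlap], ref[overlap]
    return (series - s.mean()) / s.std() * r.std() + r.mean()

def shift_to_reference(series, ref, overlap):
    """Shift-only variant for the Holocene records, whose variance was
    already in temperature units: match the 500-1,500 yr BP means."""
    return series - series[overlap].mean() + ref[overlap].mean()
```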
Specifically, the Piecewise Cubic Hermite Interpolating Polynomial (PCHIP) algorithm in MATLAB was used to interpolate between the 20° zonal composites based on the PAGES 2k dataset. The 500 to 1,500 bp mean was used to scale the Holocene records. Both regression-based methods applied robust, area-weighted linear regression in MATLAB. Regression was weighted by the cosine of latitude to account for the earth surface area represented by each band, providing an area-weighted estimate of the surface thermal energy gradient 37 . For regression across composites, the LTGs were calculated on each of the 500 bootstrapped latitudinal composite ensembles, generating a distribution of possible LTG realizations that reflects the range of potential age, sample and calibration uncertainties. The third method for calculating the LTG relied on the difference between high-latitude and low-latitude temperature composites 9 rather than on a regression slope. Temperature records were composited from 50° N to 90° N to characterize the high latitudes, and from 10° S to 30° N to characterize the low latitudes (Extended Data Fig. 6g, h ). The Holocene LTG was computed by subtracting the low-latitude composite from the high-latitude composite. Calculating insolation gradients Latitudinal insolation gradients (Fig. 3g ) were calculated using insolation time-series data from ref. 14 , generated with MATLAB code from ref. 38 . Holocene annual insolation time series were averaged into 20° latitudinal bands following the latitude intervals in Fig. 3a–e . We applied robust regression weighted by the cosine of latitude to calculate the latitudinal insolation slope between northern low and high latitudes, following the same method that we used to calculate the evolution of the Holocene LTG on zonally averaged temperature composites. Calibrated hydroclimate records The small subset of hydroclimate records published in calibrated units (for example, mm yr −1 ; n = 15; Supplementary Table 2 ) was binned into 200-year intervals, and the median value was used for the composite time series (Extended Data Fig. 3 ). These records are primarily located in North America and are based on pollen and stratigraphy proxy types. A regression model using the latitudinal insolation gradient, Laurentide Ice Sheet area and an autoregressive (AR1) term (applied to the residuals to account for autocorrelation in the hydroclimate records) was used to estimate the magnitude of Holocene effective precipitation change as characterized by this subset of records. Holocene climate models To explore Holocene climate change in models, mid-Holocene (6 ka) and preindustrial (0 ka) simulations were analysed in 12 general circulation models (GCMs) from the Paleoclimate Modelling Intercomparison Project phase III (PMIP3). Compared with the preindustrial period, mid-Holocene simulations are forced by altered astronomical parameters as well as prescribed greenhouse gases. Ice sheets had already melted to their preindustrial extents, making this a good period for exploring post-glacial climate changes. Climate anomalies are explored as mid-Holocene minus preindustrial. The experimental design is described in refs. 39 , 40 . The 12 models analysed in this research are those for which the necessary outputs were readily accessible, and are listed as follows: bcc-csm1-1, CCSM4, CNRM-CM5, CSIRO-Mk3-6-0, CSIRO-Mk3L-1-2, FGOALS-g2, FGOALS-s2, GISS-E2-R, IPSL-CM5A-LR, MIROC-ESM, MPI-ESM-P and MRI-CGCM3.
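The cosine-weighted LTG regression is a standard weighted least-squares slope. A Python sketch (the paper used robust, iteratively reweighted fitting in MATLAB, which is omitted here; the band centres and temperatures below are placeholders):

```python
import numpy as np

def lat_gradient(band_center_lats, band_temps):
    """Area-weighted linear regression of zonal-mean temperature on latitude.
    Weights are cos(latitude); returns the slope in deg C per degree latitude."""
    lats = np.asarray(band_center_lats, dtype=float)
    temps = np.asarray(band_temps, dtype=float)
    w = np.cos(np.deg2rad(lats))
    X = np.column_stack([np.ones_like(lats), lats])   # intercept + slope
    W = np.diag(w)
    beta = np.linalg.solve(X.T @ W @ X, X.T @ W @ temps)
    return beta[1]

# e.g. five 20-degree bands spanning 10 S-90 N (illustrative values):
slope = lat_gradient([0, 20, 40, 60, 80], [26.0, 24.0, 12.0, -2.0, -16.0])
```

Applied to each of the 500 bootstrapped composite ensembles, this yields the distribution of LTG realizations described above.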
Three models (CSIRO-Mk3L-1-2, GISS-E2-R and FGOALS-g2) were omitted from the ensemble-mean analyses (Fig. 4a and Extended Data Figs. 4 , 5 ) because they lacked necessary output variables. Where models were examined individually (Fig. 4b ), calculations were made using the models' original grids. For the remaining analyses, model output was regridded onto a common 2° latitude by 2.66° longitude grid, with common pressure levels for non-surface variables, to aid comparison between models. Dataset limitations The extensive palaeotemperature multi-proxy dataset used in this study provides a unique view into past climate variability on a hemispheric scale. Nonetheless, there are limitations inherent to proxy records, including uncertainties related to seasonality, sample density, spatial distribution, chronology, calibration and other factors that limit our interpretations. Importantly for this analysis, most sea-surface temperature reconstructions are based on Mg/Ca and alkenones, which predominantly record growing-season temperatures (for example ref. 41 ), even though they are typically calibrated to annual temperatures. Warm-season bias can also occur in other proxies that are active during the summer growing season. We selected annual temperature reconstructions when available, but our temperature results nonetheless have a warm-season bias 42 . As reported by the original authors, 48% of the reconstructions reflect warm-season temperatures; this figure does not capture the additional warm-season biases of proxies scaled to annual temperatures. Although our temperature database is biased towards the warm season, the impact of this bias is limited at high latitudes, where the pattern of long-term summer temperature anomalies is likely to be comparable with annual anomalies. In the Arctic, summer temperature anomalies have the potential to impact annual temperatures disproportionately by controlling glacier and sea-ice extent, and the expansion of tundra over forest, which together have large impacts on long-term annual mean temperature. This phenomenon is evident in climate model simulations of the mid-Holocene, which consistently show a sustained impact of increased summer insolation on temperature anomalies into the Arctic fall and winter, despite decreases in insolation during these seasons (for example ref. 43 ). An additional effect that can cause summer and annual temperature anomalies to co-vary is that both summer and annual high-latitude insolation decrease through the late Holocene owing to changes in obliquity (Extended Data Fig. 1a ). Individual temperature (Extended Data Fig. 6a–e ) and hydroclimate (Extended Data Fig. 9h ) records are also highly variable and unevenly distributed geographically (Fig. 2 ). The temperature dataset, like its zonally averaged and gridded composites, is weighted towards the data-rich regions of Alaska-Yukon, the North Atlantic/Fennoscandia and the western tropical Pacific. Data-poor regions reflect a combination of limited dataset generation and data accessibility. Given these data-poor regions, we conducted a set of sensitivity tests to assess the representativeness of the proxy network. Representativeness of the proxy network relative to zonal averages Sensitivity analyses were conducted to test whether our proxy network accurately represents the large spatial and temporal patterns addressed in this study.
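Regridding onto the common 2° x 2.66° grid can be sketched as follows. The paper does not state which interpolation scheme was used, so bilinear interpolation here is an assumption, and longitude wraparound is not handled:

```python
import numpy as np
from scipy.interpolate import RegularGridInterpolator

def regrid(field, lat, lon, dlat=2.0, dlon=2.66):
    """Interpolate a (lat, lon) model field onto a common grid.
    lat and lon must be strictly ascending 1-D coordinate arrays."""
    new_lat = np.arange(-89.0, 90.0, dlat)
    new_lon = np.arange(0.0, 360.0, dlon)
    f = RegularGridInterpolator((lat, lon), field,
                                bounds_error=False, fill_value=None)
    glat, glon = np.meshgrid(new_lat, new_lon, indexing='ij')
    pts = np.column_stack([glat.ravel(), glon.ravel()])
    return f(pts).reshape(glat.shape), new_lat, new_lon
```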
Gridded instrumental temperatures from the CRU TS4.01 0.5° dataset were used to test how well the proxy locations represent the mean temperature over the full 20° latitudinal bands, as shown in Fig. 3a–e . Instrumental temperature data were binned to decadal resolution to better represent the long timescales integrated by the proxy records. Grid cells corresponding to the locations of proxy records were then averaged and compared with the mean of the entire latitudinal band in which they are located. The temperature proxy locations explain between 77% and 96% of the variance in the latitudinal average (Extended Data Fig. 10a–e ). The mean temperatures at the proxy locations are offset (warmer or colder) compared with the latitudinal average for most zones. The effect of this offset on the calculated LTGs is minimal, however, because our zonal mean Holocene reconstructions are adjusted to the latitudinal average over the past 2 ka, which itself is scaled to instrumental observations. The representativeness of the mid-latitude hydroclimate proxy network was also tested using CRU TS4.01 gridded precipitation observations, following the same methods as for the temperature network (Extended Data Fig. 10g ). The hydroclimate proxy locations explain 78% of the variance of the latitudinal average. In addition to the instrumental data, we also used the ensemble of PMIP3 models to assess how well our proxy network represents the latitudinal bands. The change in mid-Holocene minus preindustrial temperatures for the five latitude bands was calculated for both the proxy locations and the latitudinal averages in 12 models (Extended Data Fig. 10 ). The proxy locations explain 93% of the variance in the latitudinal averages of mid-Holocene minus preindustrial changes. We also calculated the LTGs and precipitation changes using the proxy locations in the mid-Holocene (6 ka) runs of PMIP3 models. The mean of the proxy sites is strikingly similar to the mean of the full field, with increasing precipitation associated with a stronger LTG (Fig. 4b ). Finally, a correlation analysis of the proxy records was used to assess whether the hydroclimate proxy network accurately represents the broad region of interest (Extended Data Fig. 10h ). New composites with iteratively smaller sample sizes were generated by randomly removing 1–71 records from the hydroclimate composite over 20,000 iterations. The correlation between the subsampled composites and the full composite drops quickly as the sample size decreases below about 40 hydroclimate records, indicating that our sample network ( n = 72) is sufficient to capture the broad temporal patterns addressed in this paper. Effect of standardizing the moisture records Most Holocene hydroclimate records are published in their native proxy-value units (for example, lake level) and not converted to units of precipitation or evaporation amount (for example, mm yr −1 ). To integrate and summarize the hydroclimate proxy records, we converted them into relative units by subtracting the Holocene mean (0–10 ka), or the full-record-length mean if shorter, and dividing by the standard deviation calculated over the same interval. To evaluate the effect of standardization, we applied the same standardization methodology to gridded instrumental observations of precipitation from CRU TS4.01 (Extended Data Fig. 10g ).
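The subsampling test described above is straightforward to reproduce. A Python sketch, assuming the records have already been binned into a (records x time-bins) array:

```python
import numpy as np

rng = np.random.default_rng(1)

def subsample_correlations(binned_records, n_iter=20000):
    """Correlate composites of randomly reduced networks with the full
    composite (n = 72 hydroclimate records in the paper). Returns an array
    with columns: retained sample size, Pearson correlation."""
    full = np.nanmedian(binned_records, axis=0)
    n = binned_records.shape[0]
    out = []
    for _ in range(n_iter):
        k = rng.integers(1, n)                      # remove 1..n-1 records
        keep = rng.choice(n, size=n - k, replace=False)
        sub = np.nanmedian(binned_records[keep], axis=0)
        ok = ~np.isnan(full) & ~np.isnan(sub)       # shared valid bins only
        out.append((n - k, np.corrcoef(full[ok], sub[ok])[0, 1]))
    return np.array(out)
```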
The results show that the standardized mid-latitude average explains 98% of the variance of the mid-latitude average in native units (mm month −1 ), demonstrating that the standardized hydroclimate time series closely track the temporal variability of the change measured in native units. In addition to the standardized records, our proxy dataset includes 15 moisture records that were reported in precipitation units (mm yr −1 ) (Supplementary Table 2 ). These records were used to quantify the absolute magnitude of the Holocene wetting trend in the areas that they represent (predominantly North America). Palaeodata–model comparison Simulations of mid-Holocene (6 ka) climate by the PMIP3 model ensemble are largely consistent with the proxy data for this time slice. In response to increased obliquity at 6 ka, annual-mean insolation was increased at the poles and decreased at the Equator (Extended Data Fig. 1a ). A Northern Hemisphere spring perihelion additionally modified the seasonal insolation cycle (Extended Data Fig. 1b ). The PMIP3 multi-model mean response to this forcing shows a weaker meridional temperature gradient compared with preindustrial (Extended Data Fig. 5a, b ), decreased zonal wind strength (Fig. 4a ), reduced Hadley circulation (Fig. 4a ) and reductions in mid-latitude net precipitation (Extended Data Fig. 5c ). The reduction in net precipitation is due to changes in both precipitation and evaporation. In the models, precipitation can also be separated into large-scale precipitation (that is, precipitation changes due to large-scale convergence or lifting) and convective precipitation (that is, precipitation related to smaller-scale processes, which must be parameterized in the models). The reduction in mid-latitude precipitation is primarily in the large-scale category. Changes in Hadley circulation shown in the models are consistent with the mechanistic framework that a stronger/weaker LTG would lead to increased/reduced meridional circulation. However, stronger Hadley circulation would potentially be a countervailing force to enhanced storm activity, leading to drier conditions on the subtropical edge of the mid-latitudes. Changes in Hadley circulation could account for some of the variability observed amongst the mid-latitude hydroclimate records. Below 30° N latitude, the models show a wide range of precipitation changes at 6 ka relative to preindustrial (Extended Data Fig. 5c ), predominantly related to the position of the intertropical convergence zone and the strength of monsoon systems. Although the changes are large at low latitudes, there is also considerable spread among the modelled responses. The magnitude of mid-latitude (30° N to 50° N) precipitation change simulated by the models is smaller than that in the proxy records. Only a small subset of the proxy records, primarily located in North America, is calibrated to hydroclimate units. Nonetheless, these proxy records show an increase of 93–187 mm yr −1 in precipitation since 8 ka, whereas the model ensemble-mean suggests an increase of only ~4 mm yr −1 since 6 ka when averaged over the same locations and seasons, though larger anomalies are present regionally, during different parts of the year, or in individual models. Proxy records commonly indicate greater palaeoclimate change than models do 44 , and palaeodata–model differences have been described more generally 42 , 45 , but more work is needed to resolve this data–model discrepancy.
Hydroclimate regional differences As noted by previous work, the timing and progression of Holocene hydroclimate differed between North America and Eurasia 20 . The dataset presented here suggests that the driest mid-latitude conditions occurred in North America ( n = 43) during the early to middle Holocene, with a gradual, nearly linear transition from a drier to a wetter environment (Extended Data Fig. 9f ), whereas Eurasia ( n = 29) had a more variable hydroclimate over the Holocene (Extended Data Fig. 9g ). Eurasia shows a Holocene wetting trend, but the trend is interrupted by a relatively arid interval between 4 ka and 5 ka. It is unclear to what extent these regional differences are robust in our dataset. The Eurasian composite relies on fewer records and spans a climatologically diverse region. For example, Asia has strong monsoon systems, and although records interpreted as monsoon indicators were excluded from this analysis, the monsoon boundary was probably further north during the Holocene 46 . Additional records and further analyses are needed to unravel regional and sub-regional hydroclimate variability. Hydroclimate proxies Individual hydroclimate proxy types ( n = 17) in the dataset were examined to assess whether the different proxy types agree. Holocene composites of the dominant hydroclimate proxy types, namely physical sediment properties (stratigraphy; n = 23), pollen ( n = 14), δ 18 O of lacustrine calcite ( n = 9), diatoms ( n = 5) and other proxies ( n = 21), are shown in Extended Data Fig. 9a–e . Stratigraphy records show the driest conditions between 10 ka and ~6 ka, followed by a relatively linear wetting trend. Pollen records, located in both North America and Asia, suggest that conditions were driest in the earliest Holocene, with a steep wetting trend to ~7 ka. After 7 ka, pollen records suggest variable but generally increasing net precipitation through the remainder of the Holocene. Oxygen-isotope-inferred hydroclimate records, primarily from the Middle East and North America, show a trend towards wetter conditions between 10 ka and ~3 ka, then drying to the present. The five diatom records are exceptionally variable, with no clear Holocene trends. The other proxies (that is, proxy types consisting of fewer than five individual records) suggest subtle to no Holocene trends. Of the dominant proxy types included, stratigraphy and pollen records show the strongest Holocene wetting trends. Data availability All of the proxy and instrumental climate records that were analysed in this study are from published sources. Supplementary Tables 1 and 2 include the citations to the original publications for each of the Holocene-long temperature and hydroclimate proxy records, respectively. The proxy data and basic metadata for the time series compiled for this study from these sources are available at the World Data Service for Paleoclimatology hosted by NOAA. The landing page includes links to digital versions of the primary results (time series) generated by this study, including the (1) Holocene temperature composites by latitude (Fig. 3a–e ), (2) Northern Hemisphere LTG (Fig. 3f ) and (3) mid-latitude net precipitation reconstruction (Fig. 3h ). The proxy temperature records for the past 2,000 years were compiled by the PAGES2k Consortium 16 and are publicly available, as are the CRU instrumental data and the PMIP3 model output, from their respective public archives.
Code availability The MATLAB code used to create the figures in this article was modified from code developed by Emile-Geay et al. 33 and is available under a free BSD licence.
When the Arctic warmed after the ice age 10,000 years ago, it created perfect conditions for drought. According to new research led by a University of Wyoming scientist, similar changes could be in store today because a warming Arctic weakens the temperature difference between the tropics and the poles. This, in turn, results in less precipitation, weaker cyclones and weaker mid-latitude westerly wind flow: a recipe for prolonged drought. The temperature difference between the tropics and the poles drives a lot of weather. When that difference is larger, the result is more precipitation, stronger cyclones and more robust wind flow. However, as melting Arctic ice warms the poles, the difference is narrowing. "Our analysis shows that, when the Arctic is warmer, the jet stream and other wind patterns tend to be weaker," says Bryan Shuman, a UW professor in the Department of Geology and Geophysics. "The temperature difference in the Arctic and the tropics is less steep. The change brings less precipitation to the mid-latitudes." Shuman is a co-author of a new study that is highlighted in a paper, titled "Mid-Latitude Net Precipitation Decreased With Arctic Warming During the Holocene," published today (March 27) online in Nature, an international weekly science journal. The print version of the article will be published April 4. Researchers from Northern Arizona University; Universite Catholique de Louvain in Louvain-la-Neuve, Belgium; the Florence Bascom Geoscience Center in Reston, Va.; and Cornell University also contributed to the paper. "The Nature paper takes a global approach and relates the history of severe dry periods to temperature changes. Importantly, when temperatures have changed in similar ways to today (warming of the Arctic), the mid-latitudes—particularly places like Wyoming and other parts of central North America—dried out," Shuman explains. "Climate models anticipate similar changes in the future." Currently, the northern high latitudes are warming at rates that are double the global average. This will decrease the equator-to-pole temperature gradient to values comparable with those of the early to middle Holocene, according to the paper. Shuman says his contribution to the research, using geological evidence, was to help estimate how dry conditions have been over the past 10,000 years. His research included three water bodies in Wyoming: Lake of the Woods, located above Dubois; Little Windy Hill Pond in the Snowy Range; and Rainbow Lake in the Beartooth Mountains. "Lakes are these natural recorders of wet and dry conditions," Shuman says. "When lakes rise or lower, it leaves geological evidence behind." The researchers' Holocene temperature analysis included 236 records from 219 sites. Many of the lakes studied were lower earlier in the past 10,000 years than they are today, Shuman says. "Wyoming had several thousand years where a number of lakes dried up, and sand dunes were active where they now have vegetation," Shuman says. "Expanding to the East Coast, it is a wet landscape today. But 10,000 years ago, the East Coast was nearly as dry as the Great Plains." The research group examined the evolution of the tropics-to-pole temperature difference over three time periods: the past 100 years, the past 2,000 years and the past 10,000 years. For the last 100 years, many atmospheric records facilitated the analysis but, for the past 2,000 or 10,000 years, fewer records were available.
Tree rings can help to expand studies to measure temperatures over the past 2,000 years, but lake deposits, cave deposits and glacier ice were studied to record prior temperatures and precipitation. "This information creates a test for climate models," Shuman says. "If you want to use a computer to make a forecast of the future, then it's useful to test that computer's ability to make a forecast for some other time period. The geological evidence provides an excellent test."
10.1038/s41586-019-1060-3
Medicine
Gene-editing technique could speed up study of cancer mutations
Tyler Jacks, A prime editor mouse to model a broad spectrum of somatic mutations in vivo, Nature Biotechnology (2023). DOI: 10.1038/s41587-023-01783-y. www.nature.com/articles/s41587-023-01783-y Journal information: Nature Biotechnology
https://dx.doi.org/10.1038/s41587-023-01783-y
https://medicalxpress.com/news/2023-05-gene-editing-technique-cancer-mutations.html
Abstract Genetically engineered mouse models only capture a small fraction of the genetic lesions that drive human cancer. Current CRISPR–Cas9 models can expand this fraction but are limited by their reliance on error-prone DNA repair. Here we develop a system for in vivo prime editing by encoding a Cre-inducible prime editor in the mouse germline. This model allows rapid, precise engineering of a wide range of mutations in cell lines and organoids derived from primary tissues, including a clinically relevant Kras mutation associated with drug resistance and Trp53 hotspot mutations commonly observed in pancreatic cancer. With this system, we demonstrate somatic prime editing in vivo using lipid nanoparticles, and we model lung and pancreatic cancer through viral delivery of prime editing guide RNAs or orthotopic transplantation of prime-edited organoids. We believe that this approach will accelerate functional studies of cancer-associated mutations and complex genetic combinations that are challenging to construct with traditional models. Main Cancer is driven by somatic mutations that accumulate throughout progression and often display extensive intertumoral heterogeneity, occurring in thousands of different combinations across human cancer 1 , 2 . The precise nature of driver mutations and their combinations can profoundly influence how cancers initiate, progress and respond to therapy, establishing tumor genotype as a critical determinant of disease outcome 3 , 4 . Emerging precision oncology treatment paradigms aim to match specific therapies with tumor genotypes, and this strategy has shown promise for several driver mutations 5 , 6 . To expand the promise of precision oncology to more patients, it is critical to develop tools to systematically interrogate the effects of distinct genetic lesions and combinations thereof on the overall tumor phenotype, particularly in vivo. Genetically engineered mouse models (GEMMs) have proven invaluable for elucidating the mechanisms by which cancer drivers promote tumor development and progression in vivo 7 , 8 . However, generating new GEMMs using traditional approaches is an expensive, laborious and time-consuming process. Established GEMMs can also take months for investigators to acquire and often require laborious breeding programs to combine multiple alleles of interest and to establish a colony of sufficient size for experimental cohorts. These factors impede studies aimed at developing precision oncology treatments for tumors driven by specific genetic variants, which continue to be identified on a regular basis 9 . Genome editing technologies like CRISPR–Cas9 can be used to rapidly engineer somatic mutations when delivered exogenously or when installed as germline alleles 10 , 11 , 12 , 13 , 14 . While these models have accelerated the study of putative cancer driver genes, they are most frequently used to induce DNA double-stranded breaks (DSBs), leading to inactivation of tumor suppressor genes via error-prone repair and frameshifting insertion/deletion (indel) formation. Although CRISPR-based homology-directed repair (HDR) has been used to model precise single nucleotide variants (SNVs) in Cas9-knockin mice, this method requires an exogenous DNA donor template and is limited by low efficiency and high rates of indel byproducts 15 . 
Furthermore, the requirement for DSBs to induce frameshifts or HDR-based precise edits can lead to confounding genotoxic effects, including on-target chromothripsis events and artificial fitness costs incurred through continued disruption of edited oncogenes 16 , 17 . Precision genome editing technologies like base editing 18 can be used to model cancer in mice by installing specific transition mutations with high efficiency and negligible indel byproducts 11 . Although precise and highly efficient, base editors also have limitations, including the requirement for different base editor enzymes depending on the mutation being studied (for example, cytosine base editor (CBE) or adenine base editor (ABE)), and their propensity for bystander editing, which can prohibit introducing desired amino acid substitutions. While the recent development of C:G and A:Y transversion base editors will expand the scope of cancer modeling 19 , 20 , 21 , 22 , current base editing technology is not amenable to modeling the full spectrum of small somatic mutations. In contrast to base editing and standard CRISPR–Cas9, prime editing enables engineering the full spectrum of single nucleotide substitutions and indels with high product purity 23 , 24 . Prime editors employ a Cas9 nickase coupled with a reverse transcriptase that complexes with prime editing guide RNAs (pegRNAs). pegRNAs encode mutations of interest within a reverse transcriptase template (RTT) 23 , 24 , enabling highly precise and programmable editing. Prime editing thus offers a versatile approach to study the full spectrum of cancer driver mutations, their combinations and the growing catalog of secondary mutations that confer resistance to targeted therapies 25 , 26 , 27 , 28 . Beyond editing versatility, prime editing also avoids the formation of indel byproducts associated with DSBs. This is particularly important for studying SNVs with putative neomorphic qualities in tumor suppressor genes, as HDR-directed mutations would be diluted by the higher rate of naturally selected indels. Prime editing also exhibits lower rates of unintended activity at off-target loci, reducing the risk of confounding off-target effects 24 , 29 . These advantages, combined with broad editing capacity, provide an unprecedented opportunity to generate faithful models of human cancer. With these considerations in mind, we developed both conditional and tissue-restricted prime editing GEMMs (PE GEMMs) that eliminate the need for exogenous delivery of prime editors, which can be challenging given their significant size 30 , 31 . Encoding the prime-editing machinery within the mouse germline also minimizes confounding acute or chronic anti-tumor immune responses that could be induced by exogenous delivery of a Cas9-based fusion protein 32 , 33 , 34 . In conjunction with the development of PE GEMMs, we also developed a range of DNA vectors and engineered pegRNAs (epegRNAs) that promote efficient prime editing in a variety of cell lines and organoids derived from these mice. With this toolset, we established organoid models harboring Trp53 mutations frequently found in patients with pancreatic cancer but not modeled by current GEMMs of the disease, as well as a clinically relevant Kras mutation associated with resistance to KRAS G12C inhibitors. We further showed that PE GEMMs enable efficient prime editing in vivo via viral or nonviral delivery of pegRNAs to a variety of tissues. 
Extending these studies, we harnessed PE GEMMs to model cancer in vivo through somatic initiation of autochthonous lung and pancreatic adenocarcinomas, and by orthotopic transplantation of prime-edited pancreatic organoids. We also investigated the oncogenic potential of a variety of primary Kras mutations in the lung, including the poorly understood Kras G12A mutation present in more than 10% of patients with lung adenocarcinoma. We expect PE GEMMs to both expand the landscape of achievable cancer-associated mutations and accelerate the techniques required to study their function and associated therapeutic vulnerabilities. Results Quantification of cancer mutations amenable to prime editing Recent studies have shown that base editing can be used to elucidate the function of specific cancer-associated genetic variants 35 and to systematically probe a large fraction of all possible alleles for genes and proteins of interest 36 . Base editors are primarily capable of engineering transition SNVs 23 (A·T > G·C or G·C > A·T), although the base editor architecture has recently been adapted to produce C·G > G·C transversions with variable efficiency 19 , 20 , 37 , 38 , 39 . In contrast, prime editors can engineer all transition and transversion SNVs 24 , as well as defined indel alleles 40 , 41 , expanding the potential for rapid modeling of genetic variants even further. To define the expanded editing capacity afforded by prime editing, we quantified the abilities of both base and prime editing to install specific somatic mutations identified from a cohort of 43,035 genetically profiled patients with cancer from the Memorial Sloan Kettering-Integrated Mutation Profiling of Actionable Cancer Targets (MSK-IMPACT) dataset (Fig. 1a,b and Supplementary Fig. 1 ) 9 , 35 . Of 422,822 mutations identified from the targeted exon sequencing of 594 cancer-associated genes, 82.3% are SNVs, while the remaining 17.7% are deletions (DEL), insertions (INS) and di-nucleotide variants (DNV)/oligo-nucleotide variants (ONV), in descending order of frequency (Fig. 1a ). Fig. 1: Quantification of cancer-associated mutations amenable to modeling by base editing or prime editing. a , Distribution of somatic variant types in a cohort of 43,035 patients with 422,822 mutations observed in 594 cancer-associated genes. b , Schematic of the modeling capabilities of base editing (top) and prime editing (bottom). c , Quantification of somatic SNVs by type. SNVs amenable to modeling by CBEs are shown in purple, while SNVs amenable to ABEs are shown in blue. Transversions are shown in gray. d , Quantification of mutations amenable to modeling with CBEs or ABEs that use an NG (light green) or NGG PAM (dark green). All percentages are given as a percentage of all mutations in the dataset. e , Quantification of mutations amenable to modeling by a prime editor using an NGG PAM (dark green) coupled with a pegRNA with an RT template length of 30 nucleotides. f , Percentage of mutations with at least one suitable pegRNA as a function of the RT template length of the pegRNA, excluding the additional length of a homologous region in the RT template. Calculations assume an NGG PAM. g , Quantification of orthologous coding mutations potentially amenable to modeling by base editing in mice. Mutations are defined as orthologous if they derive from a wild-type amino acid conserved in the mouse ortholog, as determined by pairwise protein alignment between human and mouse protein sequences.
The rightmost bar indicates the fraction of orthologous coding mutations that can be modeled by base editors that recognize NG or NGG PAMs. ‘Excluded mutations’ refers to mutations that fall in a gene lacking an ortholog. h , Quantification of orthologous coding mutations potentially amenable to modeling by prime editing. The rightmost bar indicates the ability of an NG or NGG prime editor to model these mutations, assuming an RT template greater than 30 nt. i , Summary of the mutation modeling capabilities of base and prime editing assuming an NGG PAM. SNV, single nucleotide variants; DEL, deletions; INS, insertions; DNV, di-nucleotide variants; ONV, oligo-nucleotide variants. Full size image To estimate what fraction of common cancer-associated mutations are captured in currently available transgenic mouse models, we analyzed a dataset curated from the Mouse Genome Informatics database ( Methods ) 42 , 43 . We found that 65 of the 100 most frequent SNVs in MSK-IMPACT, including 50 of 84 missense SNVs, are not represented by published mouse cancer models (Supplementary Tables 1 and 2 ). Notably, the majority of these SNVs are transitions, which comprise 61.8% of all SNVs in the overall MSK-IMPACT dataset and are theoretically compatible with engineering using base editors (Fig. 1c ). In general, 38.4% of all mutations in the dataset are amenable to base editing using a canonical NGG PAM sequence 23 , 35 (Fig. 1d ). The total mutation coverage with base editing increases to 51% when accounting for base editors that use more abundant NG PAM sequences. With base editors, adjacent identical nucleotides can be collaterally edited and result in undesired editing outcomes. When considering only mutations without identical bases present within one adjacent nucleotide, the total mutation coverage drops to 29.6% (Fig. 1d ). This analysis does not account for the location of a desired edit within the protospacer, which can influence base editing efficiency and the total fraction of amenable mutations (Extended Data Fig. 1b ). We used a similar approach to quantify the modeling capabilities of prime editors that use an NGG or NG PAM coupled with variable RTT lengths encoded within pegRNAs (Supplementary Fig. 1 ). Using an NGG PAM and RTT length of 30 base pairs (bp), excluding the additional length of a homologous region in the RTT, prime editing theoretically reaches 95.8% coverage of all mutations in this dataset (Fig. 1e ). This value increases to 99.9% for prime editors that could theoretically use an NG PAM (Fig. 1e ). Moreover, analysis of the relationship between RTT length and modeling capabilities reveals that ~85% of mutations in this dataset can be modeled by placing the mutation within the first 15 bp of the RTT (Fig. 1f ). These parameters are well within the recommended guidelines for pegRNA RTT length, even with the additional size required for a region of homology 23 . Collectively, this analysis suggests that both base editing and prime editing can serve as versatile technologies for modeling cancer-associated mutations. We also sought to determine the fraction of cancer-associated mutations that derive from protein sequences conserved in mouse orthologs. We reasoned that only this subset of conserved sequences, when mutated in mouse systems, could be expected to mimic effects seen in human cancer. 
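The PAM-and-window logic behind these coverage estimates can be made concrete. Below is a simplified Python sketch of the two amenability checks, assuming canonical SpCas9 geometry (20-nt protospacer, NGG PAM, nick between protospacer positions 17 and 18); it ignores bystander-edit filtering, homology-arm length and NG-PAM variants, and the function and variable names are ours, not the authors':

```python
TRANSITIONS = {('A', 'G'), ('G', 'A'), ('C', 'T'), ('T', 'C')}

def be_amenable(ref, alt):
    """Transition SNVs are the canonical substrates for CBEs and ABEs."""
    return (ref, alt) in TRANSITIONS

def pe_amenable(seq, pos, rtt_len=30):
    """Is there an NGG PAM on either strand whose nick (3 nt 5' of the PAM)
    places seq[pos] within the first rtt_len nt copied from the RT template?
    Bounds checks are simplified; seq is the plus strand, pos is 0-based."""
    n = len(seq)
    # Plus strand: protospacer at p-20..p-1, PAM at p..p+2, nick before p-3
    for p in range(20, n - 2):
        if seq[p + 1:p + 3] == 'GG' and 0 <= pos - (p - 3) < rtt_len:
            return True
    # Minus strand: 'CC' at p,p+1 on the plus strand marks a reverse PAM;
    # the nick then falls between plus-strand coordinates p+5 and p+6
    for p in range(0, n - 22):
        if seq[p:p + 2] == 'CC' and 0 <= (p + 5) - pos < rtt_len:
            return True
    return False
```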
To quantify the ability of base and prime editors to model cancer-associated mutations in mice, we performed pairwise alignment on orthologous mouse and human proteins to define whether mutations derive from a conserved wild-type amino acid and reside in a region of homology (Supplementary Fig. 1 ). Of the SNVs that occur in coding sequences, 90.9% derive from codons that encode conserved amino acids between mouse and human. Of these conserved, cancer-associated SNVs, 61.8% are amenable to base editing (NG or NGG PAM), which translates to 43.1% of all mutations in the dataset (Fig. 1g ). In contrast, NG or NGG prime editors are capable of modeling 100% of coding mutations that occur at conserved amino acid residues in mice (84.2% of all mutations in the dataset) (Fig. 1h ). In total, 80.8% of human cancer-associated mutations observed in this dataset could be modeled in mice with prime editors using a traditional NGG PAM (Fig. 1f,i ). This same pattern holds when filtering the dataset to only mutations that occur in multiple patients, and when considering various stringencies of homology in the regions flanking the mutations of interest (Extended Data Figs. 1f and 2c ). In total, these results demonstrate that prime editing could substantially broaden both the diversity and number of human cancer-associated mutations that can be rapidly modeled in mouse orthologs. Development of a Cre-inducible prime editor allele We sought to develop a transgenic system capable of precisely engineering the majority of cancer-associated mutations without requiring exogenous delivery of a prime editor enzyme. To accomplish this, we targeted a transgene expression cassette encoding the PE2 enzyme and the mNeonGreen (mNG) 44 fluorescent reporter, separated by the P2A ribosome skipping sequence, into the Rosa26 locus 10 , 45 (Fig. 2a ). Like the previous Cre-inducible Rosa26 alleles 10 , 46 , 47 , transgene expression is driven by the CAG promoter and is induced by Cre-mediated excision of a loxP-stop-loxP (LSL) cassette. A neomycin resistance gene was included to enable the selection of cells containing the targeted allele. We also incorporated FRT/FRT3 sequences flanking the central construct to enable Flp recombinase-mediated replacement of the Rosa26 PE2 allele with future generations of prime editor enzymes or other desirable editors 29 , 48 . This vector was targeted to Trp53 flox/flox C57BL/6J ES cells, where Trp53 can be deleted upon expression of Cre recombinase (Supplementary Fig. 2 ). Chimeric mice were then crossed to wild-type C57BL/6J mice to generate pure strain heterozygous Trp53 flox/+ ; Rosa26 PE2 /+ mice. These mice were subsequently crossed with Trp53 +/+ and Trp53 flox/flox mice to generate Rosa26 PE2 /+ mice on wild-type and Trp53 flox/flox backgrounds. Fig. 2: Design and functional validation of the Rosa26 PE2 prime editor allele. a , Schematic depicting the design of the Cre-inducible Rosa26 PE2 allele. b , Schematic depicting the formation of UPEC and UPEmS vectors from templates encoding an RFP by Golden Gate assembly. c , Bright-field images of pancreatic organoids derived from chimeric prime editor mice and wild-type mice with and without treatment with neomycin. This experiment was completed once. d , Bright-field and fluorescent images showing PE2-P2A-mNG expression only after exposure to Cre encoded by a UPEC vector. This experiment was completed more than five times with consistent results. 
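The ortholog-conservation test reduces to a pairwise protein alignment followed by a per-residue identity check. A hedged sketch using Biopython's pairwise2 module; the paper does not state which aligner or scoring scheme was used, so the default identity scoring here is an assumption:

```python
from Bio import pairwise2  # Biopython; pairwise2 is simple, if deprecated

def conserved_at(human_seq, mouse_seq, human_pos):
    """Is the human residue at 1-based human_pos aligned to the identical
    amino acid in the mouse ortholog?"""
    aln = pairwise2.align.globalxx(human_seq, mouse_seq,
                                   one_alignment_only=True)[0]
    h_idx = 0
    for a, b in zip(aln.seqA, aln.seqB):  # walk aligned columns
        if a != '-':
            h_idx += 1
            if h_idx == human_pos:
                return a == b             # identity at the aligned column
    return False

# e.g. KRAS codon 12: conserved_at(human_kras_protein, mouse_kras_protein, 12)
```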
e , Schematic depicting the derivation of multiple organoids and a fibroblast cell line from Rosa26 PE2/+ prime editor mice. f , Editing efficiency of a trinucleotide (+GGG) insertion located 8 bp downstream of the start codon in Dnmt1 in pancreatic organoids, lung organoids and TTFs. Unintended indel byproducts in all conditions were present in <1% of sequencing reads. Data and error bars indicate the mean and standard deviation of three independent transductions. g , Editing efficiency and indel byproduct frequency of Dnmt1 +GGG in liver tissue 1 week after tail vein injection with LNPs harboring either Cre mRNA and pegRNA ( n = 5 mice) (left) or pegRNA alone ( n = 3 mice) (right). Data and error bars indicate the mean and standard deviation of independent animals. h , Bright-field and fluorescent images of pancreases derived from Rosa26 PE2/+ (left) or Pdx-1 cre;Rosa26 PE2/+ mice (right). This experiment was completed twice with consistent results. i , Immunofluorescence imaging of intestinal tissue derived from Villin-creER T2 ; Rosa26 PE2/+ mice that were either untreated (left) or exposed to tamoxifen (right; 4-OHT). Tissue slides were stained with the DNA stain DAPI (4′,6-diamidino-2-phenylindole; top) or with an antibody specific to Cas9 (bottom). Scale bar indicates 100 µm. This experiment was completed once. Full size image Functional validation of the prime editor allele in organoids To confirm the functionality of the Rosa26 PE2 allele, we developed two lentiviral vectors that coexpress a pegRNA and either Cre recombinase (hU6-pegRNA-EF-1α-Cre (UPEC)) or the red fluorescent protein (RFP), mScarlet 49 (hU6-pegRNA-EFS-mScarlet (UPEmS)) (Fig. 2b ). We derived pancreatic organoids from chimeric Trp53 flox/flox ;Rosa26 PE2/+ mice and developed a pure culture of transgene-containing cells via selection with neomycin (Fig. 2c and Supplementary Fig. 3 ). As expected, these pancreatic organoids displayed Cre-dependent mNG expression upon transduction with UPEC vectors (Fig. 2d and Supplementary Fig. 3 ). To test the prime-editing functionality of this allele, we designed a Dnmt1 -targeting pegRNA encoding a + 1 CCC INS, which templates a trinucleotide insertion of a GGG codon encoding glycine at residue 4 of Dnmt1 . UPEC-transduced organoids were selected using nutlin-3a, a mouse double minute 2 homolog (MDM2) inhibitor that induces cell cycle arrest in Trp53 -proficient (but not Trp53 -deficient) cells 50 , enriching for those Trp53 flox/flox cells that underwent Cre-mediated recombination following UPEC transduction. After selection, we detected up to 33.8% editing efficiency and minimal indel byproducts at Dnmt1 (Supplementary Fig. 3 ). These results validate the functionality of the Rosa26 PE2 allele, including its ability to mediate prime editing of endogenous loci when using optimized pegRNAs. Prime editing in organoids derived from the Rosa26 PE2 model We next sought to evaluate prime editing across multiple tissues. To accomplish this, we derived lung organoids, pancreatic organoids and tail-tip-derived fibroblasts (TTFs) from multiple Rosa26 PE2/+ mice (Fig. 2e ). Consistent with results using chimera-derived organoids, we observed highly efficient Dnmt1 editing across all investigated tissues (Fig. 2f ). Corroborating the well-established on-target fidelity of prime editing 24 , 29 , 51 , we did not detect off-target prime editing across multiple loci prioritized based on protospacer homology 52 (Extended Data Fig. 3c,d ). 
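Editing efficiencies and indel byproducts such as those in Fig. 2f,g are typically quantified from amplicon sequencing with dedicated pipelines; the core bookkeeping can be sketched as a naive exact-match tally. This is a toy stand-in under our own assumptions, not the analysis used in the paper:

```python
from collections import Counter

def tally_outcomes(reads, ref_window, edited_window):
    """Classify amplicon reads by an exact local window around the edit.
    ref_window / edited_window are the unedited and intended sequences;
    everything else is lumped into 'other' (a crude proxy for indel calls)."""
    counts = Counter()
    for r in reads:
        if edited_window in r:
            counts['intended'] += 1
        elif ref_window in r:
            counts['reference'] += 1
        else:
            counts['other'] += 1
    total = sum(counts.values()) or 1
    return {k: 100.0 * v / total for k, v in counts.items()}
```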
A previous study established a subset of DNA damage repair (DDR) genes as key factors influencing prime-editing efficiency 29 . Given p53's fundamental role in DDR, we examined whether Dnmt1 +GGG editing levels differed substantially across Trp53 +/+ and Trp53 flox/flox conditions. In both TTFs and pancreatic organoids, we noticed a consistent twofold to threefold decrease in Dnmt1 +GGG editing in Trp53 +/+ relative to Trp53 flox/flox tissues (Extended Data Fig. 3a,b ). This result suggests that Trp53 status may affect prime-editing efficiency, although we still observe highly efficient editing across loci in Trp53 -proficient tissues. Prime editing in vivo with lipid nanoparticles To determine whether PE GEMMs enable prime editing in vivo, we co-formulated Cre mRNA and a synthetic pegRNA encoding the Dnmt1 +GGG insertion within lipid nanoparticles (LNPs). We then treated Rosa26 PE2/+ and Rosa26 PE2/PE2 mice with one of two LNP formulations 53 ( Methods ) via tail vein injection. After 1 week, we observed Cre-induced fluorescence in the livers of mice that received pegRNA-bearing LNPs, but not in a control mouse that received PBS (Supplementary Fig. 4 ). We also detected moderately efficient prime editing (up to 3.4%) at Dnmt1 as assessed by bulk liver analysis, and we did not detect significant editing in mice that received LNPs harboring only the Dnmt1 pegRNA (that is, lacking Cre mRNA) (Fig. 2g ). These results confirm that PE GEMMs are amenable to precision edits in vivo. Generation of constitutive and inducible PE GEMMs Prime editing in vivo could be more convenient if the need for Cre co-delivery were eliminated. To demonstrate the compatibility of the conditional PE2 allele with tissue-restricted Cre drivers, we generated additional PE GEMMs through genetic crosses with mice harboring alleles that express Cre recombinase from endogenous loci. First, we crossed Rosa26 PE2/PE2 mice to Pdx-1 cre 54 , a pancreas-specific Cre driver allele, and Villin-creER T2 , an inducible, intestinal epithelial Cre driver allele 55 . As expected, Pdx-1 cre ; Rosa26 PE2/+ mice showed bright and robust evidence of mNG expression in the pancreas (Fig. 2h ), and Villin-creER T2 ; Rosa26 PE2/+ mice demonstrated PE2 expression in intestinal epithelial cells upon treatment with tamoxifen (Fig. 2i ). Notably, histologic analysis of the pancreas and intestinal epithelia, respectively, revealed no gross or pathologic abnormalities, suggesting that constitutive or inducible expression of the PE2 enzyme does not lead to toxicity in these tissues (Supplementary Fig. 4 ). Optimization of Kras -targeted pegRNAs We next sought to empirically identify highly efficient pegRNAs that introduce the Kras G12D transition as an SNV (GGT > GAT). Based on a previous study 56 , we hypothesized that spacer sequences capable of producing the highest Cas9 indel efficiency in mouse N2A cells would serve as ideal scaffolds for high-efficiency pegRNA designs (Supplementary Fig. 5 and Supplementary Table 3 ). Using TTFs, we observed up to ~5% editing efficiency of Kras G12D with spacer-optimized pegRNAs (Fig. 3a and Supplementary Fig. 5 ). To further increase editing efficiency, we modified our best-performing pegRNA with a structured RNA pseudoknot motif, the prequeuosine 1 -1 riboswitch aptamer (tevopreQ1), recently shown to enhance prime-editing efficiency by more than threefold in cell lines 51 . This resulted in up to ~18.4% editing efficiency of Kras G12D in pancreatic organoids and TTFs (Fig. 3b ).
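The PBS/RTT-length tuning in Fig. 3a follows the standard pegRNA design rule: the 3' extension, read 5' to 3', is the RT template (reverse complement of the intended edited sequence downstream of the nick) followed by the primer-binding site (reverse complement of the protospacer sequence immediately 5' of the nick). A minimal sketch with illustrative default lengths; the optimal lengths in the paper were determined empirically, and this is our own helper, not the authors' code:

```python
COMP = str.maketrans('ACGT', 'TGCA')

def rc(s):
    """Reverse complement of a DNA string."""
    return s.translate(COMP)[::-1]

def peg_extension(nicked_strand, nick, edit_at, alt, pbs_len=13, rtt_len=16):
    """Assemble a pegRNA 3' extension (RTT then PBS, written 5'->3') as DNA;
    substitute U for T to obtain the RNA sequence. nicked_strand is the
    PAM-containing strand (5'->3'); nick is the index where flap synthesis
    begins; edit_at/alt give the substitution on that strand."""
    assert nick <= edit_at < nick + rtt_len, "edit must fall inside the RTT"
    edited = nicked_strand[:edit_at] + alt + nicked_strand[edit_at + 1:]
    pbs = rc(nicked_strand[nick - pbs_len:nick])  # anneals to the nicked 3' end
    rtt = rc(edited[nick:nick + rtt_len])         # templates the edited flap
    return rtt + pbs
```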
We then modified this epegRNA to template the Kras G12C transversion and observed ~0.5% editing efficiency in pancreatic organoids and ~5% in TTFs. We also generated Kras G12A and Kras G12R epegRNAs and observed up to ~30% editing efficiency with both epegRNAs in TTFs (Fig. 3b ). Fig. 3: Ex vivo prime editing and functional testing of Kras and Trp53 mutations. a , Editing efficiency and indel byproduct frequency of the Kras G12D transition mutation templated by pegRNAs based on a single Cas9 spacer ( n = 3 for each pegRNA). pegRNAs are delineated by differences in the lengths of the primer binding site (PBS) and RTT. Data and error bars indicate the mean and standard deviation of three independent transductions. b , Editing activity of four epegRNAs templating Kras G12 mutations in TTFs or pancreatic organoids. Data and error bars indicate the mean and standard deviation of three independent transductions. Indel byproduct calculations were pooled from all conditions within each tissue. c , Allele frequencies of Kras G12D or Kras G12C mutations in pancreatic organoids before and after two passages of treatment with gefitinib (1 µM; n = 1). d , Bright-field images of prime-edited Kras G12C or Kras G12D organoids treated for 4 d with either control DMSO, sotorasib (2 µM) and gefitinib (1 µM), MRTX1133 (5 µM) or MRTX1133 and gefitinib. This experiment was repeated three times with consistent results. e , Viability of Kras G12D pancreatic organoids under various treatment conditions. f , Allele frequency of Kras Y96C in Kras G12C organoids during and after treatment with sotorasib (2 µM) and gefitinib ( n = 1). After two passages, organoids were split into two groups, which included continued treatment (continuous treatment) in one group and removal of treatment in a second group (transient treatment). g , Allele and indel byproduct frequencies of Trp53 R245Q ( n = 5), Trp53 R245W ( n = 4), Trp53 R250FS ( n = 2) and Trp53 M240FS-14nt ( n = 3) in Trp53 flox/+ pancreatic organoids treated with nutlin-3a for three to five passages after transduction with UPEC vectors. Note that the highest indel frequency depicted for Trp53 R245W derives primarily from a scaffold insertion in a single replicate. Trp53 R250FS denotes a dinucleotide deletion. Trp53 M240FS-14nt denotes a fourteen-nucleotide deletion. Data and error bars indicate the mean and standard deviation across three or more independent transductions. h , Immunoblot indicating detectable levels of p53 protein in prime-edited Trp53 flox/R245Q and Trp53 flox/R245W organoids and an absence of detectable protein in Trp53 flox/R250FS organoids. Source data Full size image Both Kras G12A and Kras G12R epegRNAs template G·C-to-C·G substitutions, which proceed from C·C mismatch intermediates. These mismatches are not efficiently repaired by mismatch repair (MMR) and are thought to have higher basal prime-editing rates as a consequence 29 . A study by Chen et al. has indicated that co-installation of silent or benign MMR-evasive edits can promote higher prime-editing efficiency, consistent with the increased editing efficiency in producing Kras G12A and Kras G12R over Kras G12C (ref. 29 ). To further probe this phenomenon, we compared a variety of epegRNAs templating cancer-associated mutations across Kras , Trp53 and Egfr to counterparts modified with silent or inconsequential edits. 
In nearly every case, we found that installing MMR-evasive edits amplified prime-editing efficiencies by more than threefold, often resulting in efficiencies greater than 20% (Extended Data Fig. 4b–d ). Collectively, these data demonstrate that the Rosa26 PE2 allele enables efficient installation of SNVs, multinucleotide alterations and insertions and deletions across a diverse array of cell lines and organoids. To confirm the functional effects of these mutations, we installed either Kras G12D or Kras G12C mutations in Trp53 flox/flox ;Rosa26 PE2/+ pancreatic organoids and selected transduced cells with nutlin-3a. We then treated prime-edited organoids with the epidermal growth factor receptor (EGFR) inhibitor, gefitinib, to select for the oncogenic Kras mutation 57 and evaluated the fraction of cells containing the intended edits before and after treatment. Consistent with receptor-independent signaling downstream of EGFR, only Kras G12D and Kras G12C prime-edited cells survived treatment with gefitinib, while control cells infected with the template UPEC lacking a pegRNA did not (Fig. 3c and Extended Data Fig. 5a,b ). We then tested whether cells transduced with Kras G12C epegRNAs were sensitive to sotorasib, a KRAS G12C -specific inhibitor, alone or in combination with gefitinib. Consistent with a previous study 58 , we found that Kras G12C pancreatic organoids were uniquely sensitive to the combination of sotorasib and gefitinib, while Kras G12D organoids were unaffected by these treatments (Fig. 3d and Extended Data Fig. 5b ). While KRAS G12C inhibition has shown promising signs of clinical efficacy in pancreatic cancer 5 , 59 , current preclinical efforts focused on KRAS G12D inhibition have the potential to benefit a broader fraction of patients with this disease (>38%) 60 , 61 . Therefore, we treated prime-edited Kras G12D pancreatic organoids with MRTX1133 (ref. 62 ), a KRAS G12D inhibitor, alone or in combination with gefitinib. Consistent with the results using sotorasib, we found that Kras G12D organoids were substantially more sensitive to the combination treatment compared with MRTX1133 alone (Fig. 3d,e ), suggesting that concomitant EGFR inhibition may be a broadly effective strategy to augment the overall efficacy of KRAS mutant inhibitors in pancreatic cancer cells. Rapid interrogation of resistance mutations While targeted therapies have revolutionized modern cancer treatment, therapy resistance is common and frequently arises through the acquisition of secondary missense mutations affecting the drugged driver 28 , 63 , 64 . A recent study 63 revealed a class of secondary KRAS mutations occurring in over 10% of patients with non-small cell lung cancer and colorectal cancer with acquired resistance to adagrasib, a KRAS G12C inhibitor. Intriguingly, several of these mutations occur in codons 95–96, which occupy the switch II pocket targeted by adagrasib and sotorasib. To test the utility of the Rosa26 PE2 model for functionally interrogating mutations associated with resistance, we developed an epegRNA designed to introduce the Kras Y96C transversion and tested its capacity to promote resistance in prime-edited Kras G12C pancreatic organoids treated with gefitinib and sotorasib (Extended Data Fig. 5d ). All organoids were initially treated with both inhibitors for two passages, followed by continued treatment for three additional passages in one group (continuous treatment) and treatment removal in the second group (transient treatment).
Consistent with patient data 63 , organoids transduced with the Kras Y96C epegRNA were resistant to combined treatment with gefitinib and sotorasib and exhibited increased allele frequency of the Kras Y96C mutation over time (Fig. 3f ). Positive selection for composite Kras G12C;Y96C mutant organoids was not observed following the removal of gefitinib and sotorasib, confirming the requirement for the selective pressure exerted by the treatment. Although these secondary KRAS mutations were initially discovered in patients with lung cancer treated with sotorasib monotherapy, our data indicate that they can also confer therapy resistance in other tissues and combination-treatment contexts. The above results demonstrate that the Rosa26 PE2 allele can be harnessed for rapid preclinical evaluation of emerging mechanisms of resistance to targeted therapies in tissues of interest and, ultimately, for testing second-generation therapies designed to overcome resistance. Engineering of common p53 mutations with prime editing A key advantage of PE GEMMs is the ability to mediate nearly any codon substitution in accessible tissues, enabling tissue-specific functional studies of genetic variants with putative effects on tumor progression. TP53 is the most frequently mutated gene in human cancer and is often altered via missense mutations that can confer gain-of-function properties in certain contexts 65 . In an analysis of data from cBioPortal 66 , 67 , we found that some of the most frequent p53 amino acid substitutions observed in lung ( TP53 R158L and TP53 R270L ) and pancreatic adenocarcinomas ( TP53 R248W and TP53 R248Q ) have not been targeted to the endogenous Trp53 locus in mouse models (Supplementary Fig. 6 ), despite having putative gain-of-function effects 68 , 69 , 70 . Notably, three of these mutations are transversions that cannot be modeled using base editing, and the human amino acid (p53 R248 ), but not the human codon (CGG versus CGC), is conserved in mouse Trp53 . Therefore, engineering the Trp53 R245W mutation in mice requires a dinucleotide substitution uniquely suited to prime editing (Supplementary Fig. 6 ). We developed a suite of epegRNAs designed to introduce both Trp53 R245W and Trp53 R245Q , as well as two truncating deletions, Trp53 R250FS and Trp53 M240FS-14nt , using a Trp53 +/+ cell line derived from mouse 3TZ cells (Supplementary Fig. 7 ). After selection with nutlin-3a, most Trp53 flox/+ ; Rosa26 PE2/+ pancreatic organoids transduced with each of these epegRNAs exhibited a prime-edited allele frequency near 100% (Fig. 3g ). We also observed an average of >90% editing purity in these organoids (Fig. 3g and Extended Data Fig. 6c ). Western blots confirmed that Trp53 R245Q and Trp53 R245W cells retained p53 protein expression, while Trp53 R250FS cells did not (Fig. 3h ). While the ratio of prime-edited reads to random indel-bearing reads was typically high, we did observe a variable unintended single nucleotide substitution (0.24%–11.34% of reads) attributable to partial RT of the scaffold sequence when prime editing Trp53 R245Q (Supplementary Fig. 6 ). In one instance, we also observed an insertion of the scaffold sequence when prime editing Trp53 R245W (~7% of reads). Notably, we did not observe any of these unintended events with an epegRNA templating Trp53 M240FS-14nt , which was designed to evade MMR and exhibited a high basal editing efficiency (Extended Data Fig. 6a,c ).
In all cases, we observed negligible off-target activity at computationally predicted loci, even after more than 4 weeks of culturing organoids with sustained pegRNA expression (Extended Data Fig. 6d–f ). This result is most striking for Trp53 R245W , which is templated by a pegRNA bearing a protospacer that shares 100% sequence homology with an off-target locus on chromosome 17 (Supplementary Table 4 ). We detected an average of 0.002% editing at this locus, which was not substantially greater than the sequencing error rate in control samples (Extended Data Fig. 6f ). No other loci displayed editing levels higher than those observed in controls. Collectively, these results establish the utility of our approach for high-fidelity installation of mutations using systems that can be rationally engineered and easily translated to an in vivo setting. Modeling lung and pancreatic adenocarcinomas in vivo To benchmark the utility of PE GEMMs to model cancer in vivo, we initiated lung and pancreatic adenocarcinomas using autochthonous and orthotopic transplantation strategies (Fig. 4a ). To model lung cancer, we intratracheally transduced the lungs of Trp53 flox/flox ; Rosa26 PE2/+ and Trp53 flox/flox ; Rosa26 PE2/PE2 mice with UPEC lentiviruses encoding the template vector ( n = 4) or pegRNAs for Kras G12D ( n = 20), Kras G12R ( n = 9), Kras G12A ( n = 10), Kras G12C ( n = 13) or the neutral Dnmt1 +GGG ( n = 6). We also infected Trp53 +/+ ; Rosa26 PE2/+ mice with UPEC- Kras G12D to model low-grade lesions and assess in vivo prime editing in a Trp53 -proficient setting. Fig. 4: PE GEMMs enable autochthonous and orthotopic modeling of lung and pancreatic cancer. a , Schematic depicting the design of in vivo experiments. Lung tumors were initiated with lentivirus-encoding UPEC vectors. Pancreatic tumors were initiated by orthotopic transplantation of prime-edited pancreatic organoids. ‘Template’ refers to the template UPEC vector lacking a pegRNA. b , Representative bright-field and fluorescence images of lungs derived from mice infected with the UPEC vector encoding the neutral Dnmt1 +GGG pegRNA, Kras G12D , Kras G12A or Kras G12R epegRNAs described in Fig. 3b . Kras G12D modeling was performed twice with consistent results. Kras G12A and Kras G12R modeling was performed once, and replicates were consistent with representative images shown. c , H&E staining of representative tissue from a control mouse infected with UPEC- Dnmt1 +GGG (bottom), and tumor-bearing mice infected with UPEC- Kras G12D , UPEC- Kras G12A and UPEC- Kras G12R (top). Scale bars from left to right indicate 2 mm, 100 µm and 20 µm, respectively. d , Bar charts indicate the distribution of grades across 16-week lesions from UPEC- Kras G12D ( n = 14 mice), UPEC- Kras G12A ( n = 10 mice) or UPEC- Kras G12R ( n = 9 mice). Data and error bars indicate the mean and standard deviation of all biological replicates in each condition. Statistical significance was calculated using unpaired, two-tailed t -tests comparing the fraction of grade 1 lesions in Kras G12A -driven tumor tissue to Kras G12D -driven tumor tissue ( P < 0.0001) or Kras G12R -driven tumor tissue ( P < 0.0001). **** P < 0.0001. e , Allele frequencies of Kras G12D ( n = 4 mice), Kras G12A ( n = 6 mice), Kras G12R ( n = 4 mice) (16 weeks) and Kras G12C + silent edits ( n = 5 mice) (12 weeks) and indel byproducts in bulk lung tumors. Data and error bars indicate the mean and standard deviation across tumors from independent mice.
f , H&E staining of representative pancreatic adenocarcinomas from a mouse transplanted with Kras G12D organoids (top) and a mouse transplanted with Kras G12C organoids (bottom). Scale bars from left to right indicate 2 mm and 25 µm, respectively. g , Mass of pancreata of Kras G12D ( n = 6 mice), Kras G12C ( n = 9 mice) or UPEC-template ( n = 6 mice) organoid transplant recipients measured in milligrams. Data and error bars indicate the mean and standard deviation across tumors from independent mice. Statistical significance was calculated using a two-tailed Mann–Whitney U test ( P = 0.036). * P < 0.05. In Trp53 flox/flox recipients, tumors initiated by UPEC- Kras G12D were readily visible by µCT at 14 weeks postinjection (Supplementary Fig. 8 ). At 16 weeks, we observed multifocal fluorescent lesions in 16 of 20 (80%) UPEC- Kras G12D recipients and in none of the controls (Fig. 4b ). Histopathological analysis confirmed that lesions induced by prime editing recapitulated the full spectrum of lung cancer progression, from grade 1 atypical adenomatous hyperplasia through grade 4 adenocarcinoma. By immunohistochemistry, prime-edited tumors recapitulated the cellular and molecular evolution seen in the classical Kras LSL-G12D/+ ; Trp53 flox/flox (KP) GEMM, demonstrating the downregulation of lung lineage transcription factor Nkx2-1 and the expression of chromatin regulator Hmga2 in poorly differentiated, advanced lesions 71 , 72 , 73 (Fig. 4c,d and Extended Data Fig. 7b,c ). We confirmed that tumors were initiated through on-target prime editing by sequencing genomic DNA derived from several bulk tumors (Fig. 4e ). Prime editing in vivo did not require a loss of p53, as 2 of 3 Trp53 +/+ ;Rosa26 PE2/+ mice developed fluorescent tumors upon infection with UPEC- Kras G12D , consistent with previous studies demonstrating that oncogenic Kras is sufficient to drive lung adenoma formation in vivo 74 (Extended Data Fig. 8e,g ). These adenomas also harbored the intended Kras G12D mutation. Similar to UPEC- Kras G12D recipients, UPEC- Kras G12A and UPEC- Kras G12R recipients consistently presented with multifocal fluorescent lesions driven by on-target prime editing throughout the lung (Fig. 4c–e ). However, both UPEC- Kras G12A and UPEC- Kras G12R recipients presented with greater tumor numbers than UPEC- Kras G12D recipients (Fig. 4b and Extended Data Fig. 8b ). While this is likely attributable in part to more efficient editing with the Kras G12A and Kras G12R epegRNAs (Fig. 3b ), there were also discernible differences in the apparent oncogenic capacity of these mutations. In 8 of 9 UPEC- Kras G12R recipients, the overall tumor burden was substantially higher than in the Kras G12A setting (Extended Data Fig. 8a ). Furthermore, histopathological analysis revealed that Kras G12R and Kras G12D tumors were of consistently higher grades relative to Kras G12A lesions (Fig. 4d ). This is particularly striking given the relative rarity of KRAS G12R in patients with lung cancer (<1% of KRAS mutations; Discussion), although, of note, our data are consistent with a previous study demonstrating that Kras G12R is highly oncogenic in mouse models 15 . Taken together, these results highlight significant allele-specific differences in the oncogenic capacity of different Kras mutations and showcase the utility of PE GEMMs for rapidly discovering such phenotypes.
In contrast to other Kras mutations, only 4 of 13 (31%) UPEC- Kras G12C recipients presented with tumors when analyzed at 19 weeks, likely a consequence of the lower prime-editing efficiency of the Kras G12C epegRNA. Furthermore, deep amplicon sequencing of these tumors occasionally revealed unintended edits, including an additional silent substitution in codon 11 in one case (Extended Data Fig. 8d ). To address this shortcoming, we designed an improved Kras G12C epegRNA encoding MMR-evasive substitutions, which edits at a 3.2-fold higher efficiency (Extended Data Fig. 4b ). At 12 weeks, 8 of 9 Trp53 flox/flox ; Rosa26 PE2/PE2 mice infected with this epegRNA developed multifocal tumors (Extended Data Fig. 8b,c ). Targeted sequencing confirmed the presence of the multinucleotide substitution encoding Kras G12C , without any unintended byproducts (Fig. 4e ). To further test the potential of PE GEMMs for cancer modeling in vivo, we transplanted prime-edited Kras G12D/+ ; Trp53 flox/flox ; Rosa26 PE2/+ and Kras G12C/+ ; Trp53 flox/flox ; Rosa26 PE2/+ pancreatic organoids into immunocompetent mice harboring the Rosa26 PE2 allele (to ensure immunological tolerance 75 to the prime editor enzyme). As controls, we transplanted Trp53 flox/flox ; Rosa26 PE2/+ organoids infected with the template UPEC vector. Tumors were visible via ultrasound by 5 weeks (Supplementary Fig. 8 ), and fluorescent tumors that reflected the spectrum of pancreatic neoplasia were observed in 8 of 9 Kras G12D/+ recipients by 9 weeks post-transplantation (Fig. 4f and Extended Data Fig. 9a ). Notably, only 4 of 9 mice (44%) from the cohort of animals transplanted with Kras G12C/+ pancreatic organoids developed tumors. Of the remaining five mice, one developed a high-grade PanIN, while the rest did not develop any lesions. Tumor burden in Kras G12C mice was substantially lower than in Kras G12D mice, as reflected in pancreatic weight measurements (Fig. 4g ). These results are consistent with previous observations suggesting that Kras G12C may be less tumorigenic in the pancreas 58 . Metastases were observed only in Kras G12D recipients (Extended Data Fig. 9a ), indicative of a more aggressive phenotype of these tumors. We did not observe tumor formation in control recipients by ultrasound, microscopy or histology, consistent with previous studies showing that Trp53 knockout alone is insufficient for pancreatic tumorigenesis 47 , 76 . To model autochthonous pancreatic adenocarcinoma, we adapted a strategy of retrograde pancreatic duct viral delivery 47 , 77 . We infected Trp53 flox/flox ; Rosa26 PE2/+ mice with UPEC vectors encoding either Kras G12D or Kras Y96C as a control. Notably, 3 of 4 Kras G12D -infected animals developed pancreatic adenocarcinoma, while no tumors were detected in Kras Y96C -infected animals (Extended Data Fig. 9e ). Discussion Advances in genome editing technologies have accelerated functional genetic studies, yet most approaches to model cancer mutations have relied on Cas9-mediated gene disruption via non-homologous end joining, failing to recapitulate many genetic lesions observed in human cancer. Emerging precision genome editing technologies like base editing and prime editing are poised to fill this gap by allowing the engineering of specific cancer-associated mutations. Nevertheless, the considerable size of base editors and prime editors makes delivery to most tissues and cell types challenging, posing significant limitations for in vivo studies.
Previous studies have addressed this using split-prime editor systems that enable prime editing in vivo when delivered by dual adeno-associated virus (AAV) vectors. However, dual-AAV approaches remain hampered by delivery challenges to many tissues and, notably, they can elicit an immune response against the prime editor enzyme 34 , 78 . The immunogenicity of genome editing reagents delivered exogenously substantially complicates cancer modeling experiments. With these challenges in mind, we developed a PE GEMM capable of rapidly installing a variety of genetic lesions with single nucleotide precision across in vitro, ex vivo or in vivo contexts, as well as in an autochthonous, immunocompetent setting. By expressing the PE2 enzyme endogenously, we bypass the risk of a confounding immune response and substantially expand the capacity to deliver other functional cargo, such as Cre. We used this model to install a variety of cancer-associated mutations, including transversions, transitions, multinucleotide substitutions and deletions across Trp53 , Egfr and Kras . In the context of our pancreatic orthotopic transplant experiments, we observed that different Kras mutations exhibit variable in vivo tumor-initiating potential, consistent with a previous study comparing Kras G12C and Kras G12D autochthonous models in the pancreas 58 . In the lung, we found that Kras G12A , Kras G12D and Kras G12R promote efficient but variable tumor formation. Tumor burden differences across genotypes are likely driven in part by variable pegRNA efficiencies, yet we also observed significant differences in the phenotype and grade of tumors when using rationally optimized pegRNAs. For example, Kras G12A -driven tumors exhibited a less advanced, more differentiated histopathology than Kras G12R - and Kras G12D -driven tumors. The significant tumor-initiating potential of Kras G12R is notable, given the rarity of KRAS G12R in patients with non-small cell lung cancer 61 , but is consistent with previous results from ref. 15 . Intriguingly, KRAS G12R is known to have substantially impaired GTP hydrolysis relative to other KRAS G12 mutants 79 . This property could enhance oncogenicity, yet KRAS G12R is found at low frequency in most solid tumor types, except pancreatic cancer 61 . In pancreatic models, Zafra et al. previously found that Kras G12R exhibits little to no PanIN formation potential when constitutively expressed in the pancreas of Trp53 +/+ mice, while Kras G12D promoted significant PanIN formation throughout most of the organ 58 . In contrast, transplanted Kras G12R ; Trp53 flox/flox organoids generated tumors with efficiency similar to Kras G12D ; Trp53 flox/flox organoids. These findings and our study suggest that mutation-specific properties may subject KRAS G12R to especially potent tumor-suppressive mechanisms that are lost upon concomitant Trp53 knockout, as in the mouse experiments described here. This warrants further investigation in the context of other genotypes (for example, Trp53 +/+ ) and experiments in which the sequence of mutations is temporally controlled. We also observed Kras allele-specific responses to mutant-specific targeted therapies. For example, similar to previous studies of KRAS G12C inhibitors 58 , 80 , we found that a KRAS G12D inhibitor, MRTX1133, elicits a more potent effect on prime-edited Kras G12D pancreatic organoids when combined with the EGFR inhibitor, gefitinib.
Several other clinical agents targeting a broader spectrum of oncogene mutations are undergoing clinical evaluation, and sotorasib and adagrasib, two KRAS G12C inhibitors, have now been approved by the Food and Drug Administration 60 , 62 . PE GEMMs represent ideal systems for rapid interrogation of the effects of targeted therapies in the context of virtually any oncogenic mutation, including secondary resistance mutations like KRAS Y96C that are now being identified in patients. PE GEMMs also enable in vivo interrogation of these mutations in the context of syngeneic and immunocompetent mice. This broad utility for modeling Kras mutations in vivo is critical, as mutant KRAS inhibition has been shown to impact the tumor-immune microenvironment in models of colon cancer 81 , 82 and may synergize with immune checkpoint blockade in other tissues not yet examined. Beyond KRAS , we demonstrate in pancreatic organoids the precise installation and selection of two Trp53 dinucleotide substitutions encoding two mutant amino acid residues frequently observed at the same codon in human pancreatic cancer, as well as out-of-frame multinucleotide deletions at a nearby codon. We observed over 90% editing purity after the selection of all these mutations in vitro. Despite a high intended edit-to-unintended indel ratio, we also observed an unintended single nucleotide substitution at variable frequency when prime editing Trp53 R245Q (Supplementary Fig. 6 ). We attribute this event to partial homology between the genomic region immediately following the RTT and the few nucleotides in the pegRNA scaffold that are commonly reverse-transcribed and excised during DNA repair, a prime-editing intermediate noted by ref. 51 . Such unintended edits could be avoided by using an alternative pegRNA with an RTT ending a few nucleotides upstream or downstream to eliminate the homology, or could be reduced by introducing silent edits that prevent repeated editing of the same target site, as we demonstrated with the epegRNA encoding Trp53 M240FS-14nt . This pegRNA is based on the same protospacer as the Trp53 R245Q pegRNA, yet has a longer RTT and encodes a deletion that eliminates both the seed and PAM sequences. However, this phenomenon merits additional caution during pegRNA design and may be exacerbated in long-term prime-editing experiments, such as when selecting cell lines over several passages with continuous expression of the prime editor and pegRNA. The overall editing purity highlights the utility of prime editing for precise engineering of mutations with negligible indel byproducts. This is a key advantage over Cas9 HDR-based approaches, in which the high rate of indel byproducts could dilute intended point mutations in vitro and in vivo. Low editing purity could especially limit the study of specific point mutations in tumor suppressor genes, as unintended indels in these genes can produce frameshift mutations subject to positive selection. This limitation is especially important when considering that many genes, including TP53 , often harbor point mutations that confer different properties relative to loss-of-function truncations, including gain-of-function effects 68 , 83 , 84 , 85 . For instance, Schulz-Heddergott et al. demonstrated that TP53 R248Q exhibits a gain-of-function effect by hyperactivating the JAK2/STAT3 pathway, leading to more aggressive tumor progression in models of colon cancer 68 .
These observations remain largely untested in models of pancreatic cancer in vivo due to a lack of suitable transgenic mouse models and human cell lines 84 . PE GEMMs are poised to fill critical gaps like this by allowing rapid and fine-tuned mutation control in a variety of tissue settings. Although we did not explore them here, a variety of techniques are available to optimize prime-editing efficiency, such as PE3 and PE3b editing strategies that employ additional nicking guides to bias DNA repair toward the incorporation of prime-edited nucleotides. Nevertheless, strategies based on single pegRNAs are more straightforward, have better multiplexing capacity because they rarely cause indels and are better suited for high-throughput studies like genetic screens. In general, we found that spacer optimization and testing of up to 15 guides was sufficient to identify epegRNAs suitable for our experiments. We also found that silent or benign MMR-evasive edits close to the intended mutation reliably amplify prime-editing efficiency severalfold, even for epegRNAs with optimized spacer sequences and PBS and RTT lengths. These techniques enabled us to identify epegRNAs that edit with greater than 20% efficiency across several cancer-associated genes. Future users should consider these and other strategies, including the co-delivery of a dominant-negative MLH1 gene (PE4/5) 29 or sensor-based pegRNA library approaches 35 , to maximize overall prime-editing efficiencies, which may be especially helpful for in vivo applications. We generally observed negligible off-target activity at computationally predicted loci, including one example with a protospacer identical to the intended target. This result corroborates the high on-target fidelity of prime editing. As established in previous studies 24 , 29 , 51 , the additional homology required for repair using the RT product limits activity at off-target loci. While our results are consistent with previous literature, future studies could employ whole-genome sequencing to fully characterize off-target prime editing beyond a limited number of prioritized loci. While we focused on installing somatic cancer driver mutations, we anticipate that PE GEMMs could be employed for broader applications. In principle, germline Rosa26 PE2 alleles could be used to create heritable mutations by modifying zygotes with pegRNAs encoding known drivers of inherited disease. We also envision sophisticated tumor modeling with the insertion of custom neoepitopes and other functional genetic sequences. These applications would enable investigators to address key questions in cancer genetics, immunology and diverse genetic diseases while reducing the need to generate, genotype and otherwise maintain traditional GEMMs. Finally, the combination of multiple epegRNAs in the context of a modified UPEC vector or LNP formulation should enable autochthonous generation of tumors defined by custom sets of multiple driver mutations in wild-type prime editor mice. This would enable increasingly complex studies of cooperating driver mutations. With these capabilities, PE GEMMs can provide a rapid preclinical avenue to complement both fundamental and clinical investigations aimed at treating cancer with precision treatment paradigms. Methods Analysis of prime and base editor capabilities for modeling cancer-associated mutations We constructed a Python-based computational pipeline to compare the abilities of prime and base editors to model cancer-associated mutations.
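The published pipeline is not reproduced here, but its central decision reduces to classifying each variant by the editor class that can install it. Below is a minimal sketch of that logic under simplifying assumptions of our own (no PAM-availability or editing-window checks, and hypothetical function names); it is an illustration, not the authors' code.

```python
# Illustrative sketch (not the published pipeline): base editors install
# transitions (CBE: C>T/G>A; ABE: A>G/T>C), whereas transversions,
# multinucleotide variants and indels require prime editing. A real analysis
# must also check PAM availability and editing-window position.

TRANSITIONS = {("A", "G"), ("G", "A"), ("C", "T"), ("T", "C")}

def required_editor(ref: str, alt: str) -> str:
    """Classify a variant (ref allele > alt allele) by compatible editor class."""
    if len(ref) == 1 and len(alt) == 1:
        if (ref.upper(), alt.upper()) in TRANSITIONS:
            return "base editing or prime editing"
        return "prime editing only"   # single-nucleotide transversion
    return "prime editing only"       # indel or multinucleotide variant

# Examples: Kras G12D (GGT>GAT, a G>A transition) versus G12C (GGT>TGT, G>T)
assert required_editor("G", "A") == "base editing or prime editing"
assert required_editor("G", "T") == "prime editing only"
```

In practice, PAM and editing-window constraints exclude many nominally base-editable transitions, which is one reason the full pipeline operates on genomic context rather than on allele pairs alone.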
Data were retrieved from MSK-IMPACT datasets 35 . Analysis of cancer mutations incorporated in transgenic mouse models We used the MouseMine tool from the Mouse Genome Informatics database 42 , 43 to obtain a comprehensive list of published transgenic alleles. We initiated our search using the mammalian phenotype code MP:0002006 (‘neoplasm’) to retrieve all mouse models related to the study of cancer. We then modified the search with the following parameters: ‘allele type’, ‘mutations (name)’, ‘alleles (name and molecular note and attribute string)’ and ‘subjects (synonyms → names)’. We then filtered the results to retain only allele types annotated as ‘targeted’, ‘transgenic’ or ‘endonuclease mediated’. After exporting these data (Supplementary Table 2 ), we identified the 100 most frequent SNVs present in the MSK-IMPACT dataset. We then manually cross-referenced these two lists to identify available models representing specific mutations. In cases where models were absent in the MouseMine list, we performed a manual literature search to confirm the absence of models in the published literature. Using this approach, we designated for each mutation (1) whether any transgenic allele exists that can be used to model cancer in mice and (2) whether any existing models enable selective expression in a tissue of interest (for example, through Cre recombinase-induced removal of an LSL cassette). Design and cloning of the Cre-inducible prime editor allele The PE2-P2A-mNG Rosa26 targeting vector was generated with a backbone formed via BstBI and AscI restriction enzyme digestion of the Sp Cas9-NLS-P2A-EGFP Rosa26 targeting vector 10 , 45 . A fragment encoding the PE2 enzyme was generated by PCR amplification from the pCMV-PE2 plasmid obtained from Addgene 24 (132775), and a fragment containing the P2A-mNG sequence was amplified from a plasmid encoding Cre-P2A-mNG. Two additional fragments containing WPRE-pA-PGK (Woodchuck hepatitis virus posttranscriptional regulatory element-poly(A)-PGK promoter) and a neomycin resistance gene (NeoR-pA) were PCR-amplified from the Sp Cas9-NLS-P2A-EGFP vector. An FRT3 site was installed by incorporating overlapping portions of this motif into the PCR primers. All primers used are listed and described in Supplementary Table 5 . A five-part Gibson assembly reaction generated the final targeting vector using these components 86 . Embryonic stem cell targeting, validation and chimera generation P4*, a C57BL/6J Kras +/+ ;Trp53 flox/flox (P) mouse embryonic stem (ES) cell line, was generated by crossing a hormone-primed C57BL/6J Trp53 flox/flox female with a C57BL/6J Kras LSL-G12D ;Trp53 flox/flox male. At 3.5 d post coitum, blastocysts were flushed from the uterus, isolated and cultured individually on a mouse embryonic fibroblast (MEF) feeder layer. After 5–7 d in culture, the outgrown inner cell mass was isolated, trypsinized and replated on a fresh MEF layer. ES cell lines were genotyped for Kras LSL-G12D , Trp53 flox/flox and Zfy (Y-chromosome specific). Notably, 36 µg of the prime editor targeting vector (R26–CAGG-LoxStopLox-Cas9 (H840A) -MMLVRT-P2A-mNeonGreen-WPRE-bHGpA; PGK-Neo-PGKpA) was linearized with PvuI, phenol/CHCl 3 extracted, and then ethanol precipitated. After resuspension in 150 µl of PBS, the DNA was mixed with 3 × 10 6 P4* ES cells in 650 µl of PBS in a 4-mm electroporation cuvette. The cell–DNA mixture was pulsed once in a Bio-Rad Gene Pulser 2 (600 V and 25 µF) followed by replating of the cells on irradiated MEFs.
After 48 h, the ES cell cultures were placed under selection with Geneticin (GIBCO) at 350 µg ml −1 . A total of 45 colonies were manually picked using a stereomicroscope. Each clone was expanded and evaluated for correct integration by PCR with primers spanning the 5′ homology arm. Eleven PCR-positive clones were further evaluated using Southern blot analysis. Briefly, genomic DNA was digested with EcoRV-HF (NEB) overnight. Digestions were electrophoresed on 0.7% agarose gels and blotted to Amersham Hybond XL nylon membranes (GE Healthcare). Samples were probed with 32 P-labeled Rosa26 3′ ‘external’ and Cas9 ‘internal’ probes applied in Church buffer (probe sequences available on request). Correctly targeted clones verified by both PCR and Southern blot analysis were injected into albino C57BL/6J blastocysts. High-degree chimeras (visually assessed by coat color percentage) from the 100C7 and 100C8 ES cell clones successfully transmitted the prime editor allele through the germline. Nucleofection of Neuro-2a cells and genomic DNA preparation To evaluate spacers near the genetic locus encoding G12 in Kras , Neuro-2a cells were nucleofected using the SF Cell Line 4D-Nucleofector X Kit (Lonza) with 2 × 10 5 cells per sample (program DS-137). Notably, 800 ng of SpCas9-expressing plasmid and 200 ng of single guide RNA (sgRNA)-expressing plasmid were used according to the manufacturer’s protocol. Three days following nucleofection, the cells were washed with PBS after removing the media and then lysed by the addition of 150 µl of freshly prepared lysis buffer (10 mM Tris–HCl, pH 8 at 23 °C; 0.05% SDS; 25 μg ml −1 of proteinase K (Qiagen)). The Kras amplicon was amplified from the genomic DNA samples, sequenced on an Illumina MiSeq and analyzed with CRISPResso2 (ref. 87 ) for indel quantification 37 . pegRNA design and cloning pegRNAs were designed in part using the pegRNA design tool PrimeDesign 88 . In some cases (for example, editing at Kras G12D ), CRISPR sgRNAs were tested before pegRNA design to select spacers that exhibited the highest level of Cas9 activity. For some designs, the trimmed evopreQ 1 motif was included to form epegRNAs and optimize editing efficiency within a limited cohort of initial candidates 51 . pegRNAs and their sequences are provided in Supplementary Table 3 . All pegRNAs were tested within the context of UPEC or hU6-RFP/UPEmS vectors. All pegRNA-expressing vectors were assembled via Golden Gate Assembly 89 using the uncut template plasmid and three annealed oligo pairs consisting of the spacer sequence, the scaffold and the 3′ extension, all with compatible overhangs. Assembly was facilitated using the Golden Gate Assembly Kit (BsmBI-v2) from New England BioLabs. The UPEmS template vector was generated via Gibson assembly of three insert fragments and a linearized backbone. Two fragments were formed by PCR amplification from the ‘pU6 pegRNA GG acceptor’ plasmid (Addgene plasmid, 132777) 24 . Specifically, the hU6 promoter was amplified using primers modified to install a BsmBI recognition site and the pAF Gibson adapter sequence on either side of the promoter (pAF-hU6-BsmBI), and the RFP component was also amplified in part using a primer that installed another BsmBI recognition site (forming BsmBI-RFP-BsmBI-pAR/gBF). A third fragment, gAR/pBF-EFS-mScarlet-gBR, was amplified from a separate lentiviral plasmid containing U6-sgRNA-EFS-mScarlet. All fragments were designed to contain compatible overhangs for Gibson assembly.
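As a companion to the pegRNA design logic described above, the following is a minimal sketch, under stated assumptions, of how a pegRNA 3′ extension (RTT followed by PBS) relates to its target site; the helper function and example sequence are hypothetical and do not reproduce PrimeDesign.

```python
# Hypothetical helper (not PrimeDesign) relating a pegRNA 3' extension to its
# target. Assumes an SpCas9-based PE2 nick 3 nt 5' of the PAM on the
# protospacer-matching strand.

COMP = str.maketrans("ACGT", "TGCA")

def revcomp(seq: str) -> str:
    return seq.upper().translate(COMP)[::-1]

def pegrna_3prime_extension(strand: str, nick_idx: int,
                            edited_flap: str, pbs_len: int = 13) -> str:
    """strand: 5'->3' sequence of the nicked (protospacer-matching) strand.
    nick_idx: index of the first base 3' of the nick.
    edited_flap: desired sequence immediately 3' of the nick, edit included;
    its length sets the RTT length. Returns the extension read 5'->3' (RTT+PBS)."""
    pbs = revcomp(strand[nick_idx - pbs_len:nick_idx])  # anneals to the freed 3' end
    rtt = revcomp(edited_flap)                          # templates the edited flap
    return rtt + pbs

# Toy example: a made-up site with a G>T change at the second base 3' of the nick
site = "GTCATGCAGTACCGGTTAGCAGGAGGCTTA"
print(pegrna_3prime_extension(site, nick_idx=17, edited_flap="ATCAGGAGGC"))
```

One design note reported in the original prime-editing study is that extensions beginning with C can pair with the scaffold and reduce activity, one of several constraints a real design tool screens for alongside PBS and RTT length optimization.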
All vectors with detailed maps and sequences will be deposited into Addgene. The UPEC template plasmid (hU6-RFP-EF-1α-Cre) was developed by Gibson assembly of two insert fragments and the same backbone used to clone pUPEmS. The pBF-EF-1α-Cre-gBR fragment was generated using pBF and gBR PCR primers targeting the pUSEC (U6-sgRNA-EF-1α-Cre) vector 86 , 90 . The pAF-U6-RFP-gAR fragment was amplified from the UPEmS vector. Generation of tail-tip-derived Rosa26 PE2/+ fibroblasts To generate Rosa26 PE2 cell lines for convenient testing of pegRNAs, a 2-cm piece was excised from the tail tip of an anesthetized, 3.5-week-old male mouse. The sample was sprayed with ethanol and then dipped in PBS several times. A lengthwise incision was made, and the outside skin and hair were removed. The sample was then incubated at 37 °C in digestion buffer composed of 5 ml DMEM, 25 µl penicillin-streptomycin, 5 µl Amphotericin B, 10 µl DNase (40 U ml −1 ; −20 °C; 1:500), 50 µl collagenase (100 mg ml −1 ; 1:100) and 50 µl CaCl 2 (36 mM; 1:100). Samples were then washed twice with PBS, and dissociated chunks were added to a 6-cm dish. Additional media containing Amphotericin B was added the following day. HEK293 and fibroblast cell culture conditions HEK293, split-PE2 3TZ and tail-tip-derived Rosa26 PE2/+ fibroblast cells were cultured in standard media consisting of Dulbecco’s Modified Eagle’s Medium (DMEM) (Corning), penicillin-streptomycin and 10% (vol/vol) FBS. All cultured cells were incubated at 37 °C and 5% CO 2 . Pancreatic ductal organoid culture Pancreata from mice of the desired genotype were dissected manually and minced with a razor blade. Pancreas tissue was then dissociated by 20 min of gentle agitation in pancreas digestion buffer (1× PBS (Corning), 125 U ml −1 collagenase IV (Worthington)) at 37 °C. Tissue suspensions were then strained through 70 µm filters, washed with 1× PBS, and pelleted with slow deceleration by centrifugation. Cells were resuspended in 100% Matrigel (Corning) and plated as 50 µl domes into 24-well plates (GenClone). Upon solidification of domes, cells were cultured in organoid complete media 47 , or alternatively, in a complete medium as follows: AdDMEM/F-12 medium supplemented with HEPES (1×, Invitrogen), GlutaMAX (1×, Invitrogen), penicillin/streptomycin (1×, VWR), B27 (1×, Invitrogen), R-Spondin1-Conditioned Medium (10% vol/vol), A83-01 (0.5 µM, Tocris), mouse epidermal growth factor (mEGF; 0.05 µg ml −1 , PeproTech), Fibroblast Growth Factor 10 (FGF-10; 0.1 µg ml −1 , PeproTech), Gastrin I (0.01 µM, Tocris), recombinant mouse Noggin (0.1 µg ml −1 , PeproTech), N-acetyl- l -cysteine (1.25 mM, Sigma-Aldrich), nicotinamide (10 mM, Sigma-Aldrich) and Y-27632 (10.5 µM, Cayman Chemical Company). Organoids were passaged using TrypLE Express (Life Technologies) for Matrigel digestion for 15–30 min at 37 °C. Organoids were infected at a high multiplicity of infection to ensure 100% recombination. Briefly, concentrated lentivirus (either diluted 1:9 or undiluted) was introduced to cell suspensions at the time of passage. For Trp53 flox/flox lines, nutlin-3a (10 µM; Sigma-Aldrich) was added to organoid media to ensure the purification of recombined organoids. For prime-edited organoids harboring Kras G12D or Kras G12C mutations, organoids were cultured in the presence of 1 µM gefitinib (Cayman) in full organoid media to select for the intended edit. Sotorasib (Selleck) was added to media at 1, 2 and 5 µM. MRTX1133 (MedChem) was added to the media at 2 µM or 5 µM.
Prime-edited mutations were confirmed by deep amplicon sequencing of organoids several days after the initial infection with lentivirus, and then again after several passages under treatment with the drug. For the selection of transgene-containing cells from chimera-derived pancreatic organoids, organoids were treated with 800 µg ml −1 of Geneticin (GIBCO). Organoid viability and proliferation were quantified using the alamarBlue HS Cell Viability Reagent (Thermo Fisher Scientific). Viability reagent was directly added to organoid culture at 1/10 media volume. After 24 h, 200 µl of reagent-containing media was removed and assayed in replicate on a Tecan Infinite M200 Pro using the manufacturer’s parameters. Lung organoid culture Lung organoids were derived from 8–20-week-old mice 91 . Fresh lung tissue was transferred into 500 µl dispase and minced. Next, 3–5 ml of digestion buffer containing Advanced DMEM/F-12, penicillin–streptomycin, Amphotericin B, 1 mg ml −1 collagenase (Sigma, C9407-500MG), 40 U ml −1 DNase I (Roche, 10104159001), 5 µM HEPES and 0.36 mM CaCl 2 was added for a 20–60-min incubation at 37 °C in a rotating oven. The resulting suspension was incubated in 1 ml ACK Lysis Buffer (Thermo Fisher Scientific, A1049201) for 3–5 min at room temperature to lyse red blood cells. Samples were then washed twice with fluorescence-activated cell sorting (FACS) buffer (1× PBS with 1 mM EDTA and 0.1% BSA) and filtered through 40 µm mesh. Samples were resuspended in 150 µl FACS buffer, and CD45 + cells were depleted using the EasySep Mouse CD45 Positive Selection kit (STEMCELL Technologies, 18945). Cells were stained with anti-mouse CD31-APC (1:500; Biolegend, 102507), CD45-APC (1:500; BD Biosciences, 559864), EpCAM-PE (1:500; Biolegend, 118206) and MHCII-APC-eFluor-780 (1:500; Thermo Fisher Scientific, 47-5321-82). The suspensions were then sorted for DAPI − , CD31 − , CD45 − , EpCAM + and MHCII + cells, with data visualized using BD FACSDiva v8. Approximately 20,000 sorted AT2 cells were mixed with Growth Factor Reduced Matrigel (Corning) at a ratio of 1:9 and seeded onto multiwell plates as 20 µl drops. The drops were incubated at 37 °C for 15 min to solidify and then overlaid with F 7 NHCS medium supplemented with Y-27632 (Cayman). For passaging, Matrigel drops were dissolved in TrypLE Express (Sigma, 12604-013) and incubated at 37 °C for 7–15 min. The organoid suspensions were then dissociated into single cells by vigorous pipetting, washed twice, resuspended in 1× PBS and plated as described above. Generation of a split-PE2 fibroblast cell line A cell line based on mouse 3TZ cells was developed to test Trp53 -targeted pegRNAs on a Trp53 +/+ background. Two plasmids containing halves of the PE2 enzyme and distinct antibiotic resistance genes were generated via Gibson assembly. The split intein-based constructs described in refs. 24 , 92 were used to enable post-translational splicing of the intein motifs and subsequent joining of the halves to form the full PE2 enzyme. Specifically, the N-terminal half of PE2 (the first 573 amino acids of the Cas9 nickase joined to the Npu N-intein) was PCR-amplified from the U6-DNMT1-hSynapsin-PE2-N-terminal-P2A-EGFP-KASH-lenti plasmid (Addgene, 135955) and then cloned into a puromycin resistance gene-containing backbone.
A blasticidin resistance gene-containing backbone was assembled into a second vector with a PCR-amplified DNA fragment encoding the C-terminal half of PE2 (Npu C-intein joined to the remaining C-terminal half of PE2), amplified from the hSynapsin-PE2-C-terminal-lenti plasmid (Addgene, 135956). The two constructs were incorporated into lentiviruses, which were used to transduce mouse 3TZ fibroblast cells, followed by selection with up to 10 µg ml −1 of puromycin and 20 µg ml −1 of blasticidin. Production of lentivirus and transduction Lentivirus was produced by transfection of the expression vector into 293FS* cells along with psPAX2 (a gift from Didier Trono; Addgene plasmid 12260; RRID: Addgene_12260) and pMD2.G (a gift from Didier Trono; Addgene plasmid 12259; RRID: Addgene_12259) packaging plasmids at a 4:3:1 ratio using polyethylenimine or Mirus transfection reagent. A volume of 1 ml of small-scale viral supernatant was added directly to 1 × 10 5 cells at seeding in a six-well plate (Corning) for transduction. Small-scale transductions were supplemented with polybrene (10 mg ml −1 , 1:1,000; Sigma). Concentrated large-scale lentivirus and small-scale viruses were stored at −80 °C if not used immediately. Generally, cell lines were infected with small-scale virus, while organoids were infected with large-scale virus. Quantification of lentiviral titer was performed using a GFP-based Cre reporter 3TZ cell line 14 . Intratracheal delivery of lentivirus into the lung Mice were anesthetized in an isoflurane chamber. A total of 6 × 10 4 transducing units (TU) or 1 × 10 5 TU of lentivirus containing UPEC vectors encoding pegRNAs and Cre recombinase were injected intratracheally into Rosa26 PE2 mice 93 . Mice were sex-matched and age-matched within 4 weeks across experimental arms. Orthotopic transplantation of pancreatic organoids Animals were anesthetized with isoflurane, the left abdominal side was depilated with Nair and the surgical region was disinfected with Chloraprep swabstick (BD). A small incision (~1.5 cm) was made in the left subcostal area, and the spleen and pancreas were exteriorized with ring forceps. The organoid suspension (containing 1 × 10 5 organoid cells in 100 µl of 50% PBS + 50% Matrigel) was injected using a 30-gauge needle into the pancreatic parenchyma parallel to the main pancreatic artery. The pancreas and spleen were gently internalized, and the peritoneal and skin layers were sutured independently using a 4/0 PGA suture and a 4/0 silk suture, respectively (AD Surgical). All mice received preoperative analgesia with sustained-release buprenorphine (Bup-SR; 0.5 mg kg −1 ) and were followed postoperatively for any signs of distress. Organoid/Matrigel mixtures were kept on ice throughout the procedure to avoid solidification. For orthotopic transplantation, syngeneic C57BL/6J Rosa26 PE2 mice (aged 6–17 weeks) were used as recipients. Male pancreatic organoids were transplanted only into male recipients. Autochthonous pancreatic tumor modeling Retrograde pancreatic duct infection with lentivirus was modified from previously reported techniques 77 . The ventral abdomen was depilated (using Nair) 1 d before surgery. Animals were anesthetized with isoflurane. The surgical area was disinfected with betadine/isopropanol and a 2- to 3-cm incision was made in the anterior abdomen. A subsequent vertical incision was made through the abdominal wall, securing the incision edges with a Colibri retractor.
A Nikon stereomicroscope was used to visualize the pancreas, common bile duct and sphincter of Oddi. The common bile duct and cystic duct were gently separated from the portal vein and hepatic artery using blunt dissection with Moria forceps. A microclip was placed over the common bile duct to prevent the influx of the viral particles into the liver or gallbladder. A 30-gauge needle was used to cannulate the common bile duct at the level of the sphincter of Oddi, and 150 µl of virus was injected over 30 s. After injection and removal of instruments, the peritoneum was closed using running 5-0 Vicryl sutures. The abdominal wall and fascia were closed using simple interrupted 5-0 Vicryl sutures. Animals were administered postoperative sustained-release buprenorphine (Bup-SR) and were monitored postoperatively for signs of discomfort or distress. For retrograde pancreatic ductal installation, male and female mice (aged 8–20 weeks) were transduced with 500,000 TU in serum-free media (Opti-MEM; Gibco). Lipid nanoparticle (LNP) formulation and injection LNPs were formulated with modifications from an existing protocol 53 . Synthetic Dnmt1 +GGG pegRNA (1 mg) was ordered from Agilent Technologies. The first three and last three nucleotides were modified with 2′O-methyl groups. The first three and last three internucleotide bonds were phosphorothioate-modified. Both modifications were made to increase the stability of the guide. Cre mRNA was obtained from TriLink. A weight ratio of 1:7.5 total RNA:ionizable lipid was used for LNP formulation, with a 1:2 weight ratio of Cre mRNA:pegRNA. The aqueous phase was prepared with 25 mM sodium acetate (pH 5.2), Cre mRNA and pegRNA solution. Two organic phase preparations were made by adding an ionizable lipid (Lipid 10 or 306-O12B) to cholesterol (Sigma-Aldrich), DOPC (Avanti) and DMG-PEG (Sunbright) stock solutions in 100% ethanol, at a 50:38.5:10:1.5 molar ratio. Nanoparticles were prepared by combining the organic and aqueous phases at a 1:3 ratio and assembled using a NanoAssemblr (Precision Nanosystems). LNPs were dialyzed for 4 h against PBS in Slide-A-Lyzer dialysis cassettes (3.5K MWCO; Thermo Fisher Scientific). LNPs were kept on ice before animal dosing. Mice were administered a maximal dose of 60 µg total RNA via tail vein injection, corresponding to roughly 200 µl per mouse. Animal studies All mouse experiments described in this study were approved by the Massachusetts Institute of Technology Institutional Animal Care and Use Committee (IACUC) (institutional animal welfare assurance, A-3125-01). For Villin-Cre ERT2 ;Rosa26 PE2/+ animals, tamoxifen was administered in the diet (Envigo, TD.130860) for 2 weeks before tissue collection. Mice aged 7–20 weeks were used for in vivo experiments. Mice of both sexes were used for autochthonous lung tumor initiation, and male mice were chosen for orthotopic pancreatic organoid experiments as the transplanted organoid line was male-derived. Mice were assessed for morbidity according to guidelines set by the MIT Division of Comparative Medicine and were humanely killed before natural expiration. Ultrasound imaging Animals were anesthetized with isoflurane and the left subcostal region was depilated with Nair. Animals were imaged with a Vevo3100/LAZRX ultrasound and photoacoustic imaging system (Fujifilm Visualsonics). Anesthetized animals were positioned supine and angled on an imaging platform for visualization of peritoneal organs.
Landmark organs including the kidney and spleen were first identified before imaging. A thin layer of ultrasound gel was applied over the depilated region of the abdomen. The transducer (VisualSonics 550S) was positioned above the abdomen and set at the scanning midpoint of the healthy pancreas or tumor. Approximately 1 cm of scanning area was used to capture the entirety of pancreatic tumors, using a z-slice thickness of 0.04 mm. Ultrasound scans were uploaded to Vevo Lab Software, from which representative images were exported. Rodent µCT Mice were anesthetized with isoflurane (3%, then maintained at 2.0–2.5% in oxygen; VetEquip) and scanned in a prone position using a Skyscan 1276 (Bruker) with the following parameters: 100 kVp source voltage, 200 μA current, 0.5 mm aluminum X-ray filter, 108 ms exposure time and 0.65-degree rotational step size over 360 degrees in a continuous rotation. With 4 × 4 detector binning, the nominal pixel size after reconstruction (Bruker NRecon software) was 40.16 µm. Data were visualized using ImageJ. Histology, immunohistochemistry and immunofluorescence Pancreata from control and tumor-bearing animals were manually dissected from the peritoneal cavity after the animals were killed. Tumor-bearing lung was flushed with 1× PBS and separated into individual lobes. Tissue was fixed in Zinc Formalin overnight, transferred to 70% ethanol and then embedded in paraffin. Hematoxylin and eosin (H&E) staining was performed, and digitally scanned images of H&E slides were obtained with an Aperio ScanScope at ×20 magnification. Histologic quantification of tumor grade was performed by an automated deep neural network available through Aiforia image analysis software with the nsclc_v25 algorithm. For IHC, slides were incubated at 4 °C overnight with the following antibodies: anti-NKX2-1 (1:1,000; Abcam, ab76013; RRID:AB_1310784), anti-SFTPC (1:5,000; Millipore Sigma, AB3786; RRID: AB_91588 ) and anti-HMGA2 (1:1,000; Cell Signaling Technologies, 8179S; RRID:AB_11178942). ImmPRESS Anti-Rabbit Horseradish Peroxidase and DAB Peroxidase Substrate Kits (Vector) were used to develop slides. Tissues were counterstained with hematoxylin. Slides were digitally scanned and analyzed using QuPath 94 . For IF, slides were incubated at 4 °C with anti-Cas9 (E7M1H, 1:100, CST 19526). Horse anti-rabbit secondary (AF488, 1:400) was used. All slides were counterstained with DAPI (1:20,000) and imaged using a Nikon Eclipse 80i fluorescence microscope with ×10 and ×20 objectives and an attached Andor camera. Immunoblotting Pancreatic organoids were dissociated with TrypLE for 30 min at 37 °C, washed with 6× PBS and then lysed in cell lysis buffer (RIPA with 100× HALT protease and phosphatase inhibitors). Blots were incubated with primary antibodies (p53, clone 1C12, Cell Signaling Technology (CST); β-actin, clone 13E5, CST; 1:5,000) overnight at 4 °C and imaged on a ChemiDoc Gel Imaging System (Bio-Rad). DNA sequencing and analysis of genomic DNA samples Target loci were amplified from genomic DNA using PCR primers listed and described in Supplementary Table 5 . Amplicons were then purified using either agarose gel extraction or a QIAquick PCR purification kit (Qiagen). Purified amplicons were typically then submitted to the Massachusetts General Hospital Center for Computational and Integrative Biology’s DNA Core for next-generation sequencing (samples prepared according to guidelines provided for the CRISPR Sequencing service).
Amplicons prepared for evaluating prime-editing efficiency of the initial Trp53 245- and Kras 96-targeted pegRNAs were given unique Illumina TruSeq barcodes for pooled sequencing. Barcoded PCR products were pooled and purified by electrophoresis on a 2% agarose gel followed by extraction with a Gel Extraction Kit (QIAGEN), eluting with 30 μl H 2 O. DNA concentration was quantified using a Qubit dsDNA High Sensitivity Assay Kit (Thermo Fisher Scientific) and sequenced on an Illumina MiSeq instrument (single-end read, 250–300 cycles) according to the manufacturer’s protocols. Sequencing reads were aligned to reference amplicons and analyzed using the deep-sequencing analysis program CRISPResso2 (ref. 87 ), v2.2.6. CRISPResso2 parameters employed for each target are described in the Supplementary Note . Prime-editing efficiency was calculated as the percentage of reads aligning to the prime-edited amplicon (excluding indels) relative to all reads aligning to both the prime-edited and reference amplicons (including indels). Only reads with an average Phred score of ≥30 were considered. Indel percentages were calculated in a similar fashion using the total number of indel-bearing reads designated as ‘discarded’ by CRISPResso2. For experiments involving pegRNAs that alter multiple nucleotides, the allele frequency tables output by CRISPResso2 were consulted to confirm that the majority of prime-edited reads contained all of the intended nucleotide alterations. Sequencing and analysis of off-target loci Off-target loci were identified using Cas-OFFinder 52 . For each of the four indicated protospacers (Supplementary Table 4 ), all off-target sites with three or fewer mismatches relative to the target protospacer were identified. Bulges were not permitted. These results were then narrowed to at most four off-target loci per protospacer. First, loci were prioritized by selecting candidates with the lowest number of protospacer mismatches. When loci contained the same number of protospacer mismatches, off-target sites were ranked by the lowest number of primer binding site mismatches. Finally, if loci contained the same number of both protospacer and primer binding site mismatches, the difference in DNA melting temperature between the mismatched target and the original primer binding site sequence was computed. This calculation was performed with the OligoAnalyzer Tool, version 3.1, from Integrated DNA Technologies 95 with default parameters. Loci with the smallest difference in melting temperature compared to the non-mismatched strand were then prioritized. The off-target loci identified by this analysis are described in Supplementary Table 4 . Next-generation sequencing was performed for amplicons generated from each off-target locus. Off-target prime editing was assessed by aligning sequencing reads to off-target amplicons using CRISPResso2 in batch mode 87 . The parameters ‘-w 20’ and ‘-q 30’ were used in all cases, along with the corresponding off-target protospacer as the guide RNA input sequence. Prime editing was then assessed using an approach described in ref. 29 . For each sample, the ‘Nucleotide_percentage_summary’ file output by CRISPResso2 was used to compare the DNA sequence immediately downstream of the nick site to the sequence encoded by the pegRNA. The first mismatched nucleotide site was then examined to quantify the percentage of reads bearing the allele encoded by the pegRNA.
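Both quantifications described in this section reduce to straightforward read accounting. The sketch below illustrates them with made-up inputs; real scripts would parse CRISPResso2 output files, whose exact formats vary by version, so the data structures here are assumptions rather than the authors' code.

```python
# Hedged sketch of the two read-level calculations described above; inputs are
# illustrative stand-ins for values parsed from CRISPResso2 outputs.

def prime_editing_efficiency(pe_reads: int, ref_reads: int, indel_reads: int) -> float:
    """Percentage of reads aligning to the prime-edited amplicon (excluding
    indels), relative to all reads aligning to either amplicon, including the
    indel-bearing reads that CRISPResso2 designates as 'discarded'."""
    return 100.0 * pe_reads / (pe_reads + ref_reads + indel_reads)

def offtarget_edit_percentage(ref_seq: str, peg_seq: str, base_pct: list) -> float:
    """Walk the sequence immediately 3' of the nick; at the first position where
    the pegRNA-encoded sequence differs from the reference, return the
    percentage of reads carrying the pegRNA-encoded base. base_pct[i] maps each
    base to its read percentage at position i, as in a parsed
    'Nucleotide_percentage_summary' table."""
    for i, (r, p) in enumerate(zip(ref_seq, peg_seq)):
        if r != p:
            return base_pct[i][p]
    raise ValueError("pegRNA encodes the reference sequence over this window")

print(prime_editing_efficiency(2_450, 7_300, 250))  # -> 24.5 (% prime-edited reads)

pct = [{"A": 99.90, "C": 0.030, "G": 0.030, "T": 0.040},
       {"C": 99.80, "A": 0.070, "G": 0.060, "T": 0.070},
       {"T": 99.99, "A": 0.004, "C": 0.004, "G": 0.002}]
print(offtarget_edit_percentage("ACT", "ACG", pct))  # -> 0.002 (% edited reads)
```

Thresholding such per-locus percentages against matched control samples is what distinguishes genuine off-target editing from sequencing error, as described next.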
For each off-target locus, this percentage of prime-edited reads was compared between samples cultured with the examined pegRNA and samples cultured with an irrelevant pegRNA. All off-target samples were sequenced within the same MiSeq run to ensure a similar sequencing error rate between tested and control groups. Mutation frequency estimates from cBioPortal Somatic mutation frequencies of TP53 in human pancreatic cancer were estimated using cBioPortal 66 , 67 for the following four patient cohorts: CPTAC, TCGA (Firehose Legacy), QCMG and ICGC. The Non-Small Cell Lung Cancer 96 (TRACERx, NEJM and Nature 2017) and Pan-Lung Cancer 97 (TCGA, Nat Genet 2016) cohorts were used for lung adenocarcinoma estimates. Reporting summary Further information on research design is available in the Nature Portfolio Reporting Summary linked to this article. Data availability Rosa26 PE2 mice on wild-type (JAX stock JR037953) and Trp53 flox/flox (JAX stock JR037954; ref. 98 ) backgrounds are available from the Jackson Laboratory. Plasmids will be made available through Addgene upon publication. Amplicon sequencing data have been deposited in the SRA repository under accession PRJNA951647 (ref. 99 ). All other materials and data, including Rosa26 PE2 cell and organoid lines, are available from the corresponding author upon reasonable request. Source data are provided with this paper. Code availability The pipeline, all related scripts, and intermediate data needed to reproduce our results are available in the online repository cited in ref. 100 (2022).
Genomic studies of cancer patients have revealed thousands of mutations linked to tumor development. However, for the vast majority of those mutations, researchers are unsure of how they contribute to cancer because there's no easy way to study them in animal models. In an advance that could help scientists make a dent in that long list of unexplored mutations, MIT researchers have developed a way to easily engineer specific cancer-linked mutations into mouse models. Using this technique, which is based on CRISPR genome-editing technology, the researchers have created models of several different mutations of the cancer-causing gene Kras, in different organs. They believe this technique could also be used for nearly any other type of cancer mutation that has been identified. Such models could help researchers identify and test new drugs that target these mutations. "This is a remarkably powerful tool for examining the effects of essentially any mutation of interest in an intact animal, and in a fraction of the time required for earlier methods," says Tyler Jacks, the David H. Koch Professor of Biology, a member of the Koch Institute for Integrative Cancer Research at MIT, and one of the senior authors of the new study. Francisco Sánchez-Rivera, an assistant professor of biology at MIT and member of the Koch Institute, and David Liu, a professor in the Harvard University Department of Chemistry and Chemical Biology and a core institute member of the Broad Institute, are also senior authors of the study, which appears today in Nature Biotechnology. Zack Ely Ph.D. '22, a former MIT graduate student who is now a visiting scientist at MIT, and MIT graduate student Nicolas Mathey-Andrews are the lead authors of the paper. Faster editing Testing cancer drugs in mouse models is an important step in determining whether they are safe and effective enough to go into human clinical trials. Over the past 20 years, researchers have used genetic engineering to create mouse models by deleting tumor suppressor genes or activating cancer-promoting genes. However, this approach is labor-intensive and requires several months or even years to produce and analyze mice with a single cancer-linked mutation. "A graduate student can build a whole Ph.D. around building a model for one mutation," Ely says. "With traditional models, it would take the field decades to catch up to all of the mutations we've discovered with the Cancer Genome Atlas." In the mid-2010s, researchers began exploring the possibility of using the CRISPR genome-editing system to make cancerous mutations more easily. Some of this work occurred in Jacks' lab, where Sánchez-Rivera (then an MIT graduate student) and his colleagues showed that they could use CRISPR to quickly and easily knock out genes that are often lost in tumors. However, while this approach makes it easy to knock out genes, it doesn't lend itself to inserting new mutations into a gene because it relies on the cell's DNA repair mechanisms, which tend to introduce errors. Inspired by research from Liu's lab at the Broad Institute, the MIT team wanted to come up with a way to perform more precise gene-editing that would allow them to make very targeted mutations to either oncogenes (genes that drive cancer) or tumor suppressors. In 2019, Liu and colleagues reported a new version of CRISPR genome-editing called prime editing. 
Unlike the original version of CRISPR, which uses an enzyme called Cas9 to create double-stranded breaks in DNA, prime editing uses a modified enzyme called Cas9 nickase, which is fused to another enzyme called reverse transcriptase. This fusion enzyme cuts only one strand of the DNA helix, which avoids introducing double-stranded DNA breaks that can lead to errors when the cell repairs the DNA. The MIT researchers designed their new mouse models by engineering the gene for the prime editor enzyme into the germline cells of the mice, which means that it will be present in every cell of the organism. The encoded prime editor enzyme allows cells to copy an RNA sequence into DNA that is incorporated into the genome. However, the prime editor gene remains silent until activated by the delivery of a specific protein called Cre recombinase. Since the prime editing system is installed in the mouse genome, researchers can initiate tumor growth by injecting Cre recombinase into the tissue where they want a cancer mutation to be expressed, along with a guide RNA that directs Cas9 nickase to make a specific edit in the cells' genome. The RNA guide can be designed to induce single DNA base substitutions, deletions, or additions in a specified gene, allowing the researchers to create any cancer mutation they wish. Modeling mutations To demonstrate the potential of this technique, the researchers engineered several different mutations into the Kras gene, which drives about 30% of all human cancers, including nearly all pancreatic adenocarcinomas. However, not all Kras mutations are identical. Many Kras mutations occur at a location known as G12, where the amino acid glycine is found, and depending on the mutation, this glycine can be converted into one of several different amino acids. The researchers developed models of four different types of Kras mutations found in lung cancer: G12C, G12D, G12R, and G12A. To their surprise, they found that the tumors generated in each of these models had very different traits. For example, G12R mutations produced large, aggressive lung tumors, while G12A tumors were smaller and progressed more slowly. Learning more about how these mutations affect tumor development differently could help researchers develop drugs that target each of the different mutations. Currently, there are only two FDA-approved drugs that target Kras mutations, and they are both specific to the G12C mutation, which accounts for about 30% of the Kras mutations seen in lung cancer. The researchers also used their technique to create pancreatic organoids with several different types of mutations in the tumor suppressor gene p53, and they are now developing mouse models of these mutations. They are also working on generating models of additional Kras mutations, along with other mutations that help to confer resistance to Kras inhibitors. "One thing that we're excited about is looking at combinations of mutations including Kras mutations that drives tumorigenesis, along with resistance associated mutations," Mathey-Andrews says. "We hope that will give us a handle on not just whether the mutation causes resistance, but what does a resistant tumor look like?" The researchers have made mice with the prime editing system engineered into their genome available through a repository at the Jackson Laboratory, and they hope that other labs will begin to use this technique for their own studies of cancer mutations.
10.1038/s41587-023-01783-y
Biology
When did genetic variations that make us human emerge?
Alejandro Andirkó et al, Temporal mapping of derived high-frequency gene variants supports the mosaic nature of the evolution of Homo sapiens, Scientific Reports (2022). DOI: 10.1038/s41598-022-13589-0 Journal information: Scientific Reports
https://dx.doi.org/10.1038/s41598-022-13589-0
https://phys.org/news/2022-07-genetic-variations-human-emerge.html
Abstract Large-scale estimations of the time of emergence of variants are essential to examine hypotheses concerning human evolution with precision. Using an open repository of genetic variant age estimations, we offer here a temporal evaluation of various evolutionarily relevant datasets, such as Homo sapiens-specific variants, high-frequency variants found in genetic windows under positive selection, introgressed variants from extinct human species, as well as putative regulatory variants specific to various brain regions. We find a recurrent bimodal distribution of high-frequency variants, but also evidence for specific enrichments of gene categories in distinct time windows, pointing to different periods of phenotypic changes, resulting in a mosaic. With a temporal classification of genetic mutations in hand, we then applied a machine learning tool to predict which genes have changed more in certain time windows, and which tissues these genes may have impacted more. Overall, we provide a fine-grained temporal mapping of derived variants in Homo sapiens that helps to illuminate the intricate evolutionary history of our species. Introduction The past decade has seen a significant shift in our understanding of the evolution of our lineage. We now recognize that anatomical features used as diagnostic for our species (globular neurocranium; small, retracted face; presence of a chin; narrow trunk, to cite only a few of the most salient traits associated with "anatomical modernity") did not emerge as a package, from a single geographical location, but rather emerged gradually, in a mosaic-like fashion, across the entire African continent and quite possibly beyond 1, 2, 3. Likewise, behavioral characteristics once thought to be exclusive to Homo sapiens (funerary rituals, parietal art, 'symbolic' artefacts, etc.) have recently been attested in some form in closely related (extinct) clades, casting doubt on a simple definition of 'cognitive/behavioral' modernity 4. We have also come to appreciate the extent of repeated (multidirectional) gene flow between Homo sapiens and Neanderthals and Denisovans, raising interesting questions about speciation 5, 6, 7, 8. Last, but not least, it is now well established that our species has a long history. Robust genetic analyses 9 indicate a divergence time between us and the other hominins for whom genomes are available of roughly 700 kya, leaving perhaps as many as 500 ky between then and the earliest fossils displaying a near-complete suite of modern traits (Omo Kibish 1, Herto 1 and 2) 10. Such a long period of time is likely to contain enough opportunities for multiple rounds of evolutionary modifications. Taken together, these findings render completely implausible simplistic narratives about the 'modern human condition' that seek to identify a specific geographical location or genetic mutation that would 'define' us 11. Genomic analysis of ancient human remains in Africa reveals deep population splits and complex admixture patterns among populations 12, 13, 14. At the same time, reanalysis of fossils in Africa 15 points to the extended presence of multiple hominins on this continent, together with real possibilities of admixture 16, 17. Lastly, our deeper understanding of other hominins points to derived characteristics in those lineages that make some of our species' traits more ancestral (less 'modern') than previously believed 18. 
In the context of this significant rewriting of our deep history, we decided to explore the temporal structure of an extended catalog of single nucleotide changes found at high frequency (HF ≥ 90%) across major modern populations, which we previously generated on the basis of three high-coverage "archaic" genomes 19, that is, Neanderthal/Denisovan individuals used as outgroups. This catalog aims to offer a richer picture of the molecular events setting us apart from our closest extinct relatives. In order to probe the temporal nature of these data, we took advantage of the Genealogical Estimation of Variant Age (GEVA) tool 20. GEVA is a coalescent-based method that provides age estimates for over 45 million human variants. GEVA is non-parametric, making no assumptions about demographic history, tree shapes, or selection (for additional details on GEVA, see "Methods"). Our overall objective here is to use the temporal resolution afforded by GEVA to estimate the age of emergence of polymorphic sites and gain further insights into the complex evolutionary trajectory of our species. Our analysis reveals a bimodal temporal distribution of modern human derived high-frequency variants and provides insights into milestones of Homo sapiens evolution through the investigation of the molecular correlates and the predicted impact of variants across evolutionarily relevant periods. Our chronological atlas allows us to provide a time window estimate of introgression events and to evaluate the age of variants associated with signals of positive selection, tissue-specific changes, and, specifically, an estimate of the age of emergence of (enhancer) regulatory variants associated with different brain regions. Our enrichment analysis uncovers GO terms unique to specific temporal windows, such as facial- and behavior-related terms for a period (between 300 and 500 kya) preceding the dating of human fossils like those from Jebel Irhoud. Our machine learning-based analyses predicting differential gene expression regulation of mapped variants (through 21) reveal a trend towards downregulation in brain-related tissues and allow us to identify variant-associated genes whose differential regulation may specifically affect brain structures such as the cerebellum. Results Figure 1 (a) Density of distribution of derived Homo sapiens alleles over time in an aggregated control set (n = 1000) of random variants across the genome and two sets of derived ones: all derived variants, and those found at high frequency. Horizontal lines mark distribution quantiles 0.25, 0.5 and 0.75. (b) Line plot showing the bimodal distribution of high-frequency variants using different generation times (in the text, we used 29 years, following 62). The distribution of derived alleles over time follows a bimodal distribution (Fig. 1a,b; see also Fig. S2 for a more elaborate version), with a global maximum around 40 kya (for complete allele counts, see "Methods"). The two modes of the distribution of HF variants likely correspond to two periods of significance in the evolutionary history of Homo sapiens. The more recent peak of HF variants arguably corresponds to the period of population dispersal and replacement following the last major out-of-Africa event 22, 23, while the older distribution contains the period associated with the divergence between Homo sapiens and other Homo species 9, 24. 
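To make the density estimate behind Fig. 1 concrete, here is a minimal sketch of how such a bimodal age distribution can be recovered with a kernel density estimate. It is illustrative only: the ages are synthetic stand-ins (the real input would be the GEVA-dated HF variants), and the two modes are placed by assumption.

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(0)
# Synthetic stand-in ages (in years): a recent mode near 40 kya and an
# older, broader one, mimicking the dated HF variants of Fig. 1.
years = np.concatenate([
    rng.normal(40_000, 15_000, 6_000),
    rng.normal(400_000, 120_000, 4_000),
])
years = years[years > 0]

grid = np.linspace(0, 1_000_000, 2_000)
density = gaussian_kde(years)(grid)

# Local maxima of the estimated density approximate the two modes.
is_peak = np.r_[True, density[1:] > density[:-1]] & \
          np.r_[density[:-1] > density[1:], True]
for t in grid[is_peak]:
    print(f"density peak near {t / 1000:.0f} kya")
```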
Figure 2 (a) Selected temporal windows used in our study to further interrogate the nature and distribution of HF variants. (b) Distribution of introgressed alleles over time, as identified by 27, 30. (c) Plots of HF variants in datasets relevant to human evolution, including regions under positive selection 29, regions depleted of archaic introgression 27, 28 and genes showing an excess of HF variants ('excess' and 'length') 19. Variant counts in (a, c, d) are squared to aid visualization. (d) Kernel density difference between the highest point in the distributions of (c) (leftmost peak) and the second, older highest density peak, normalized, in percentage units. In order to divide the data into smaller temporal clusters for downstream analysis, we considered a k-means clustering analysis (at k = 3 and k = 4, Fig. S1). This clustering method yields a division clear enough to distinguish between "early" and "late" Homo sapiens "specimens" 10, with a protracted period overlapping with the split with other Homo species. (The availability of ancient DNA from other hominins would yield a better resolution of that period.) However, we reasoned that such a k-means division is not precise enough to represent the key milestones used to test specific time-sensitive hypotheses. For this reason, we adopted a literature-based approach, establishing different cutoffs adapted to the needs of each analysis below. Our basic division consisted of three periods (see Fig. 2a): a recent period from the present to 300 thousand years ago (kya), the local minimum, roughly corresponding to the period considered until recently to mark the emergence of Homo sapiens 12; an older period from 300 to 500 kya, right before the dating of fossils associated with earlier members of our species such as the Jebel Irhoud fossil 25 and, incidentally, the critical juncture between the first and second temporal windows when comparing the two k-means clustering analyses we performed (Fig. S1); and a third, still older period, from 500 kya to 1 million years ago, corresponding to the time of the most recent common ancestor with the Neanderthal and Denisovan lineages 24, 26. We note that the distribution goes as far back as 2.5 million years ago (see Fig. 1a) in the case of HF variants, and even further back in the case of the derived variants with no HF cutoff. This could be due to our choice of temporal prediction model (the GEVA joint clock model, one of three options GEVA offers, as detailed in "Methods"), as changes over time in human recombination rates might affect the timing of older variants 20, or to the fact that we do not have genomes for older Homo species: some of these very old variants may have been inherited from them and lost further down the Neanderthal/Denisovan lineages. Variant subset distributions In an attempt to see whether specific subsets of variants clustered in different ways over the inferred time axis, we selected a series of evolutionarily relevant, publicly available datasets, such as genome regions depleted of "archaic" introgression (so-called 'deserts of introgression') 27, 28 and regions under putative positive selection 29, and mapped the HF variants from 19 falling within those regions. We also examined genes that accumulate more HF variants than expected given their length and in comparison to the number of mutations these genes accumulate on the Neanderthal/Denisovan lineages (the 'length' and 'excess' lists from 19—see "Methods"). 
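As a companion to the k-means step discussed above, the following sketch clusters variant ages at k = 3 with scikit-learn. The ages are synthetic placeholders; with the real dated variants, the per-cluster spans would be what gets compared against the literature-based windows (0-300 kya, 300-500 kya, 500 kya-1 mya).

```python
import numpy as np
import pandas as pd
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
# Synthetic variant ages (years) standing in for the GEVA-dated set.
ages = np.concatenate([rng.normal(40_000, 15_000, 600),
                       rng.normal(400_000, 120_000, 400)])
ages = ages[(ages > 0) & (ages < 1_000_000)]

km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(ages.reshape(-1, 1))
out = pd.DataFrame({"age_years": ages, "cluster": km.labels_})

# Per-cluster spans, to compare against the literature-based windows.
print(out.groupby("cluster")["age_years"].agg(["min", "max", "count"]))
```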
Finally, we also examined the temporal distribution of introgressed alleles 27, 30. A bimodal distribution is clearly visible in all the subsets except the introgression datasets (Fig. 2b). Introgressed variants peak locally in the more recent period (0–100 kya). The distribution roughly fades after 250 kya, in consonance with the possible timing of introgression events 6, 16, 28, 31. As a case study, we focused on those introgressed variants associated with phenotypes highlighted in Table 1 of 32. As shown in Fig. S3, half of the variants cluster around the highest peak, but other variants may have been introduced in earlier instances of gene flow. We caution, though, that multiple factors, such as gene flow from Eurasians into Africa or effects of positive selection on frequency, likely influence the distribution of age estimates and make it hard to draw any firm conclusions. We also note that the two introgressed variant counts, derived from the data of 27, 30, follow significantly different distributions over time (p < 2.2e−16, Kolmogorov–Smirnov test) (Fig. 2c). Finally, we examined the distribution of putatively introgressed variants across populations, focusing on low-frequency variants whose distributions vary when we look at African vs. non-African populations (Fig. S4). As expected, those variants that are more common in non-African populations are found in higher proportions in both of the Neanderthal genomes studied here, with a slightly higher proportion for the Vindija genome, which is in fact assumed to be closer to the main source population of introgression 33. We detect a smaller contribution of Denisovan variants overall, which is expected on several grounds: given the likely more frequent interactions between modern humans and Neanderthals, the Denisovan individual whose genome we relied on is likely part of a more pronounced "outgroup". Gene flow from modern humans into Neanderthals also likely contributed to this pattern. In the case of the regions under putative positive selection, we find that the distribution of variant counts has a local peak in the most recent period (0–100 kya) that is absent from the deserts-of-introgression datasets, pointing to an earlier origin of the alleles found in these latter regions. Also, as shown in Fig. 2d, the distribution of variant counts in these regions under selection shows the greatest difference between the two peaks of the bimodal distribution. Still, we should stress that our focus here is on HF variants, and that, of course, not all HF variants falling in selective-sweep regions were actual targets of selection. Figure S5 illustrates this point for two genes that have figured prominently in early discussions of selective sweeps since 5: RUNX2 and GLI3. While recent HF variants are associated with positive-selection signals (indicated in purple), older variants exhibit such associations as well. Indeed, some of these targets may fall below the 90% cutoff chosen in 19. In addition, we are aware that variants enter the genome at one stage and are likely selected for at a (much) later stage 34, 35. As such, our study differs from the chronological atlas of natural selection in our species presented in 36 (as well as from other studies focusing on more recent periods of our evolutionary history, such as 37). This may explain some important discrepancies between the overall temporal profile of genes highlighted in 36 and the distribution of HF variants for these genes in our data (Fig. S6). 
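The Kolmogorov–Smirnov comparison reported above (p < 2.2e−16 between the two introgressed sets) is a standard two-sample test; a minimal sketch with synthetic stand-in ages follows.

```python
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(0)
# Synthetic stand-ins for the two introgressed-variant age sets
# (refs. 27 and 30 in the text).
ages_a = rng.gamma(shape=2.0, scale=30_000, size=5_000)
ages_b = rng.gamma(shape=2.5, scale=28_000, size=5_000)

stat, p = ks_2samp(ages_a, ages_b)
print(f"KS statistic = {stat:.3f}, p = {p:.3g}")
```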
Having said this, our analysis recaptures earlier observations about prominent selected variants located around the most recent peak, concerning genes such as CADPS2 38 (Fig. S7). This study also identifies a set of old variants, well before 300 kya, associated with genes belonging to putative positively selected regions predating the deepest divergence of Homo sapiens populations 39, such as LPHN3, FBXW7, and COG5 (Fig. S8). Finally, focusing on the brain as the organ that may help explain key features of the rich behavioral repertoire associated with Homo sapiens, we estimated the age of putative regulatory variants linked to the prefrontal (PFC), temporal (TC), and cerebellar cortices (CBC), using the large-scale characterization of regulatory elements of the human brain provided by the PsychENCODE Consortium 40. We did the same for the modern human HF missense mutations 19. A comparative plot reveals a similar pattern across the three structures, with no obvious differences in variant distribution (see Fig. S9). The cerebellum contains a slightly higher number of variants assigned to the more recent peak when the proportion relative to total mapped variants is computed. This may relate to the more recent modifications reported for this brain region 41, which contributed to the globularized shape of our brain(case). We also note that the difference in dated variants between the two local maxima is more pronounced in the case of the cerebellum than in the case of the two cortical tissues, whereas this difference is smaller in the case of missense variants (Fig. S9). We caution, though, that the overall number of missense variants is considerably lower in comparison to the other three datasets. Gene Ontology analysis across temporal windows Figure 3 (a) Venn diagram of GO terms associated with genes shared across time windows. (b) Top GO terms per time window. To functionally interpret the distribution of HF variants in time, we performed enrichment analyses accessing curated databases via the gProfiler2 R package 42. For the three time windows analyzed (corresponding to the recent peak: 0–300 kya; the divergence time and earlier peak: 500 kya–1 mya; and the time slot between them: 300–500 kya), we identified unique and shared gene ontology terms (see Fig. 3a,b; "Methods"). Notably, when we compared the most recent period against the two earlier windows together (from 300 kya to 1 mya), we found bone-, cartilage-, and visual system-related terms only in the earlier periods (hypergeometric test; adjusted p < 0.01; Table S1). Further differences are observed when thresholding at an adjusted p < 0.05. In particular, terms related to behavior (startle response), facial shape (narrow mouth) and hormone systems appear only in the middle (300–500 kya) period (Table S2; Fig. S10). Unique gene ontology terms may point to specific environmental conditions causing the organism to react in specific ways. A summary of terms shared across the three time windows can be seen in Fig. S11. Gene expression predictions To evaluate the expression profiles associated with our HF variant dataset (from 19), we made use of ExPecto 21, a sequence-based tool to predict gene expression in silico (see description in "Methods"). We found a skew towards more extreme negative values (downregulation) in brain-related tissues, which is not observed when analyzing all tissues jointly (as shown in the quantile-quantile plots in Fig. S12). 
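Returning to the GO analysis: gProfiler2's over-representation test is hypergeometric at its core. The toy computation below shows the raw tail probability that would then be corrected for multiple testing; all counts are made-up placeholders, not values from the paper.

```python
from scipy.stats import hypergeom

# Made-up placeholder counts, not values from the paper:
M = 20_000  # annotated background genes (e.g., Ensembl H. sapiens)
n = 150     # background genes annotated to the GO term of interest
N = 400     # variant-associated genes dated to one time window
k = 12      # genes in the window that carry the GO term

# Over-representation p-value: P(X >= k) under the hypergeometric null.
p = hypergeom.sf(k - 1, M, n, N)
print(f"enrichment p = {p:.3g}")  # would then be multiple-test corrected
```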
A series of Kruskal–Wallis tests shows that, whether all tissues or just brain-related tissues are considered, statistically significant differences in predicted gene expression values are found across the three time periods studied here (p = 2.2e−16 and p = 4.95e−12, respectively). Overall, the oldest period (500 kya–1 mya) shows the strongest predicted effect toward downregulation (see Fig. 4A). Among brain and brain-related tissues, some structures show the highest sums of predicted variant expression effects (top downregulation), such as the adrenal gland, the pituitary, astrocytes, and neural progenitor cells (see Fig. S13). Among these structures, the presence of the cerebellum in a period preceding the last major out-of-Africa event is noteworthy (consistent with 41). Figure 4 (A) Sum of all directional mutation effects within 1 kb of the TSS per time window in 22 brain and brain-related tissues (red) and the rest of the tissues included in the trained ExPecto model as a control group (blue). Significant differences exist across time periods when non-brain and brain-related tissues are compared (Kruskal–Wallis test; p = 2.2e−16). (B) Genes with a high sum of all directional mutation effects, and cumulative directionality of expression values in brain tissues per time window. The authors of the article describing the ExPecto tool 21 suggest that genes with a high sum of absolute variant effects in specific time windows tend to be tissue- or condition-specific. We explored our data to see whether the genes with the highest absolute variant effects were also phenotypically relevant (Fig. 4B). Among these we find genes such as DLL4, a Notch ligand implicated in arterial formation 43; FGF14, which regulates the intrinsic excitability of cerebellar Purkinje neurons 44; SLC6A15, a gene that modulates stress vulnerability through the glutamate system 45; and OPRM1, a modulator of the dopamine system that harbors an HF derived loss-of-stop-codon variant in the genetic pool of modern humans but not in that of extinct human species 19. We also cross-checked whether any of the variants in our high-frequency dataset with a high predicted expression value (variant-specific RPKM values at log > 0.01) were found in GWASs related to brain volume. The Big40 UK Biobank GWAS meta-analysis 46 shows that some of these variants are indeed GWAS top hits and can be assigned a date (see Table 1). Of note are phenotypes associated with the posterior corpus callosum (splenium), the precuneus, and cerebellar volume. In addition, in a large genome-wide association meta-analysis of brain magnetic resonance imaging data from 51,665 individuals seeking to identify specific genetic loci that influence human cortical structure 47, one variant (rs75255901) in Table 1, linked to DAAM1, has been identified as a putative causal variant affecting the precuneus. All these brain structures have been independently argued to have undergone recent evolution in our lineage 41, 48, 49, 50, and their associated variants are dated among the most recent ones in the table. Table 1 Big40 brain volume GWAS 46 top hits with high predicted gene expression in ExPecto (log > 0.01, RPKM), along with dating as provided by GEVA. 
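A minimal sketch of the Kruskal–Wallis comparison across the three time windows, using synthetic predicted-expression effects in place of the actual ExPecto scores:

```python
import numpy as np
from scipy.stats import kruskal

rng = np.random.default_rng(0)
# Synthetic predicted-expression effects for the three time windows.
recent = rng.normal(-0.01, 0.05, 2_000)  # 0-300 kya
middle = rng.normal(-0.02, 0.05, 1_000)  # 300-500 kya
oldest = rng.normal(-0.04, 0.05, 1_500)  # 500 kya-1 mya

stat, p = kruskal(recent, middle, oldest)
print(f"H = {stat:.2f}, p = {p:.3g}")
```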
Discussion Deploying GEVA to probe the temporal structure of the extended catalog of HF variants distinguishing modern humans from their closest extinct relatives ultimately aims to contribute to the goals of the emerging attempts to construct a molecular archaeology 52 and as detailed a map as possible of the evolutionary history of our species 53. Like any other archaeological dataset, ours is necessarily fragmentary. In particular, fully fixed mutations, which have featured prominently in early attempts to identify candidates with important functional consequences 52, fell outside the scope of this study, as GEVA can only determine the age of mutations that remain polymorphic in the present-day human population. By contrast, the mapping of HF variants was reasonably good and allowed us to provide complementary evidence for claims regarding important stages in the evolution of our lineage. This in and of itself reinforces the rationale for paying close attention to an extended catalog of HF variants, as argued in 19. While we wait for more genomes from more diverse regions of the planet and from a wider range of time points, we find our results encouraging: even in the absence of genomes from the deep past of our species in Africa, we were able to provide evidence for different epochs and the classes of variants that define them. But whereas different clusters can be identified, the emerging picture is very much mosaic-like in character, in consonance with recent work 1, 3. In no way do we find evidence for earlier evolutionary narratives that relied on one or a handful of key mutations. Our analysis shows a bimodal distribution of the age of modern human-derived high-frequency variants (in consonance with the findings of 54 on a more limited set of variants). The two peaks likely reflect, on the one hand, the point of divergence between Homo sapiens and other Homo species and, on the other, the period of population dispersal and replacement following the last major out-of-Africa event. Our work also highlights the importance of a temporal window right before 300 kya that may well correspond to a significant behavioral shift in our lineage, coinciding with increased ecological resource variability 55 and evidence of long-distance stone transport and pigment use 56. Other aspects of our cognitive and anatomical makeup emerged much more recently, in the last 150 ky, and for these our analysis points to the relevance of gene expression regulation differences in recent human evolution, in line with 57, 58, 59. Lastly, our attempt to date the emergence of mutations in our genomes points to multiple episodes of introgression, whose history is likely to turn out to be quite complex. Methods Homo sapiens variant catalog We made use of a publicly available dataset 19 that takes advantage of the Neanderthal and Denisovan genomes to compile a genome-wide catalog of Homo sapiens-specific variation. The original complete dataset is available at . As described in the original article, this catalog includes "archaic"-specific variants and all loci showing variation within modern populations. The 1000 Genomes Project and ExAC data were used to derive frequencies, and the human genome version hg19 was used as reference. As indicated in the original publication 19, quality filters were applied to the "archaic" genomes (specifically, sites with less than 5-fold coverage, or with more than 105-fold coverage for the Altai individual or 75-fold coverage for the rest of the "archaic" individuals, were filtered out). 
In ambiguous cases, variant ancestrality was determined using multiple genome alignments 60 and the macaque reference sequence (rheMac3) 61. In addition to the full data, the authors offered a subset of the data that includes derived variants at a ≥90% global frequency cutoff. Since such a cutoff allows some variants to fall below 90% in certain populations as long as the global total is ≥90%, we also added to this study a stricter dataset with a ≥90% frequency cutoff in each metapopulation (Fig. S2). All files (including the original full and high-frequency sets and the modified, stricter high-frequency one) are provided with the accompanying code. Controls in Fig. 1 were obtained through a probabilistic permutation approach with sets of random variants (100 sets, 50,000 variants each). GEVA The Genealogical Estimation of Variant Age (GEVA) tool 20 uses a hidden Markov model approach to infer the location of ancestral haplotypes relative to a given variant. It then infers the time to the most recent common ancestor in multiple pairwise comparisons using coalescent-based clock models. The resulting pairwise information is combined into a posterior probability measure of variant age. We extracted dating information for the alleles in our dataset from the bulk summary of GEVA age predictions. The GEVA tool provides several clock models and measures of variant age. We chose the mean age measure from the joint clock model, which combines recombination and mutation estimates. While the GEVA dataset provides data for both the 1000 Genomes Project and the Simons Genome Diversity Project, we extracted only those variants present in both datasets. Ensuring a variant is present in both databases implicitly increases the quality of the genealogical estimates (as detailed in Supplementary document 3 of 20), although it decreases the number of sites that can be examined. We give estimated dates assuming 29 years per generation, as suggested in 62. While other values can be chosen, this choice should not affect the nature of the variant age distribution nor our conclusions. Of the 4,437,804 variants in our total set, 2,294,023 were mapped in the GEVA dataset (51% of the original total). For the HF subsets, the mapping improves: 101,417 (74% of total) and 48,424 (69%) variants were mapped for the original high-frequency subset and the stricter, metapopulation-cutoff version, respectively. ExPecto In order to predict gene expression we made use of the ExPecto tool 21. ExPecto is a deep convolutional network framework that predicts tissue-specific gene expression directly from genetic sequences. ExPecto is trained on histone mark, transcription factor and DNA accessibility profiles, allowing ab initio predictions that do not rely on variant information for training. Sequence-based approaches, such as the one used by ExPecto, allow the prediction of the expression of high-frequency and rare alleles without the biases that other frameworks based on variant information might introduce. We used the high-frequency dated variants as input for ExPecto expression prediction, using the default tissue models trained on the GTEx, Roadmap Epigenomics and ENCODE tissue expression profiles. gProfiler2 Enrichment analysis was performed using the gProfiler2 package 42 (hypergeometric test; multiple-comparison correction with the 'gSCS' method; p value thresholds 0.01 and 0.05). 
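The GEVA post-processing described above can be sketched as follows: keep variants dated in both the 1000 Genomes and SGDP panels and convert generations to years at 29 years per generation. The rows are invented, the column names are hypothetical, and averaging the two panel estimates is a simplification (the paper takes GEVA's own joint-clock estimate).

```python
import pandas as pd

GEN_TIME = 29  # years per generation, following the text

# Invented rows with hypothetical column names: one row per
# (variant, panel) with the joint-clock mean age in generations.
geva = pd.DataFrame({
    "variant_id": ["rs1", "rs1", "rs2", "rs3", "rs3"],
    "panel": ["TGP", "SGDP", "TGP", "TGP", "SGDP"],
    "age_mean_joint": [1400.0, 1450.0, 900.0, 15000.0, 14800.0],
})

# Keep only variants dated in both panels, then convert to years.
in_both = geva.groupby("variant_id")["panel"].transform("nunique") == 2
ages = (geva[in_both]
        .groupby("variant_id", as_index=False)["age_mean_joint"].mean()
        .assign(age_years=lambda d: d["age_mean_joint"] * GEN_TIME))
print(ages)  # rs1 and rs3 survive; rs2 (one panel only) is dropped
```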
Dated variants were subdivided into three time windows (0–300 kya, 300–500 kya and 500 kya–1 mya) and variant-associated genes (retrieved from 19) were used as input (all annotated genes for H. sapiens in the Ensembl database were used as background). Following 21, variation potential directionality scores were calculated as the sum of all variant effects within a range of 1 kb from the TSS. The summary GO figures presented in Fig. S11 were prepared with GO Figure 63. For the enrichment analysis, the Hallmark curated annotated gene sets 64 were also consulted, but the dated set of HF variants as a whole did not return any specific enrichment. Code availability All the analyses presented here can be reproduced following the scripts in the following GitHub repository: .
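A small sketch of the directionality score defined above (sum of predicted variant effects within 1 kb of the TSS, per gene and time window); the input rows and column names are invented for illustration.

```python
import pandas as pd

# Invented per-variant predicted effects with signed distance to the TSS.
eff = pd.DataFrame({
    "gene": ["DLL4", "DLL4", "DLL4", "FGF14", "FGF14"],
    "window": ["0-300", "0-300", "300-500", "0-300", "500-1000"],
    "tss_dist": [-250, 600, 3_000, 120, -900],
    "effect": [-0.20, -0.05, 0.30, 0.10, -0.40],
})

near_tss = eff[eff["tss_dist"].abs() <= 1_000]  # within 1 kb of the TSS
scores = near_tss.groupby(["window", "gene"])["effect"].sum()
print(scores.sort_values())  # most negative = strongest downregulation
```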
The study of the genomes of our closest relatives, the Neanderthals and Denisovans, has opened up new research paths that can broaden our understanding of the evolutionary history of Homo sapiens. A study led by the UB has made an estimation of the time when some of the genetic variants that characterize our species emerged. It does so by analyzing mutations that are very frequent in modern human populations, but not in these other species of archaic humans. The results, published in the journal Scientific Reports, show two moments in which mutations accumulated: one around 40,000 years ago, associated with the growth of the Homo sapiens population and its departure from Africa, and an older one, more than 100,000 years ago, related to the time of the greatest diversity of types of Homo sapiens in Africa. "The understanding of the deep history of our species is expanding rapidly. However, it is difficult to determine when the genetic variants that distinguish us from other human species emerged. In this study, we have placed species-specific variants on a timeline. We have discovered how these variants accumulate over time, reflecting events such as the point of divergence between Homo sapiens and other human species around 100,000 years ago," says Alejandro Andirkó, first author of this article, which was part of his doctoral thesis at the UB. The study, led by Cedric Boeckx, ICREA research professor in the section of General Linguistics and member of the Institute of Complex Systems of the UB (UBICS), included the participation of Juan Moriano, UB researcher, Alessandro Vitriolo and Giuseppe Testa, experts from the University of Milan and the European Institute of Oncology, and Martin Kuhlwilm, researcher at the University of Vienna. Predominance of behavioral and facial-related variations The results of the research study also show differences between evolutionary periods. Specifically, they highlight the predominance of genetic variants related to behavior and facial structure—key characteristics in the differentiation of our species from other human species—more than 300,000 years ago, a date that coincides with the available fossil and archaeological evidence. "We have discovered sets of genetic variants which affect the evolution of the face and which we have dated between 300,000 and 500,000 years ago, the period just prior to the dating of the earliest fossils of our species, such as the ones discovered at the Jebel Irhoud archaeological site in Morocco," notes Andirkó. The researchers also analyzed variants related to the brain, the organ that can best help explain key features of the rich repertoire of behaviors associated with Homo sapiens. Specifically, they dated variants which medical studies conducted in present-day humans have linked to the volume of the cerebellum, corpus callosum and other structures. "We found that brain tissues have a particular genomic expression profile at different times in our history; that is, certain genes related to neural development were more highly expressed at certain times," says the researcher. Supporting the mosaic nature of the evolution of Homo sapiens These results complement an idea that is dominant in evolutionary anthropology: that there is no linear history of human species, but that different branches of our evolutionary tree coexisted and often intersected. "The breadth of the range of human diversity in the past has surprised anthropologists. 
Even within Homo sapiens there are fossils, such as the ones I mentioned earlier from Jebel Irhoud, which, because of their features, were thought to belong to another species. That's why we say that human beings have lived a mosaic evolution," he notes. "Our results," the researcher continues, "offer a picture of how our genetics changed, which fits this idea, as we found no evidence of evolutionary changes that depended on one or a few key mutations." Application of machine learning techniques The methodology used in the study was based on the Genealogical Estimation of Variant Age method, developed by researchers at the University of Oxford. Once they had this estimation, they applied a machine learning tool to predict which genes have changed the most in certain time windows and which tissues these genes may have impacted. Specifically, they used ExPecto, a deep learning tool that uses a convolutional network—a type of computational model—to predict gene expression levels and function from a DNA sequence. "Since there are no data on the genomic expression of variants in the past, this tool is an approach to a problem that has not been addressed until now. Although the use of machine learning prediction is increasingly common in the clinical world, as far as we know, nobody has tried to predict the consequences of genomic changes over time," notes Andirkó. The importance of the perinatal phase in the brain development of our species In a previous study, the same UB team, together with the researcher Raül Gómez Buisán, used genomic information from archaic humans. In that study they analyzed genomic deserts, regions of the genome of our species where there are no genetic fragments of Neanderthals or Denisovans, and which, moreover, have been subjected to positive pressure in our species: that is, they have accumulated more mutations than would have been expected under neutral evolution. The researchers studied the expression of the genes found in desert regions—i.e., which proteins they code for—throughout brain development, from prenatal to adult stages, covering sixteen brain structures. The results showed differences in gene expression in the cerebellum, striatum and thalamus. "These results bring into focus the relevance of brain structures beyond the neocortex, which has traditionally dominated research on the evolution of the human brain," says Juan Moriano. Moreover, the most striking differences between brain structures were found at prenatal stages. "These findings add new evidence to the hypothesis of a species-specific trajectory of brain development taking place at perinatal stages—the period from 22 weeks of gestation to the end of the first four weeks of neonatal life—that would result in a more globular head shape in modern humans, in contrast to the more elongated shape seen in Neanderthals," concludes Moriano.
10.1038/s41598-022-13589-0
Medicine
Loneliness predicts development of type 2 diabetes
Ruth A. Hackett et al, Loneliness and type 2 diabetes incidence: findings from the English Longitudinal Study of Ageing, Diabetologia (2020). DOI: 10.1007/s00125-020-05258-6 Journal information: Diabetologia
http://dx.doi.org/10.1007/s00125-020-05258-6
https://medicalxpress.com/news/2020-09-loneliness-diabetes.html
Abstract Aims/hypothesis Loneliness is associated with all-cause mortality and coronary heart disease. However, the prospective relationship between loneliness and type 2 diabetes onset is unclear. Methods We conducted a longitudinal observational population study with data on 4112 diabetes-free participants (mean age 65.02 ± 9.05) from the English Longitudinal Study of Ageing. Loneliness was assessed in 2004–2005 using the revised University of California, Los Angeles (UCLA) Loneliness Scale. Incident type 2 diabetes cases were assessed from 2006 to 2017. Associations were modelled using Cox proportional hazards regression, adjusting for potential confounders, which included cardiometabolic comorbidities. Results A total of 264 (6.42%) participants developed type 2 diabetes over the follow-up period. Loneliness was a significant predictor of incident type 2 diabetes (HR 1.46; 95% CI 1.15, 1.84; p = 0.002) independent of age, sex, ethnicity, wealth, smoking status, physical activity, alcohol consumption, BMI, HbA1c, hypertension and cardiovascular disease. Further analyses detected an association between loneliness and type 2 diabetes onset (HR 1.41; 95% CI 1.04, 1.90; p = 0.027), independent of depressive symptoms, living alone and social isolation. Living alone and social isolation were not significantly associated with type 2 diabetes onset. Conclusions/interpretation Loneliness is a risk factor for type 2 diabetes. The mechanisms underlying this relationship remain to be elucidated. Graphical abstract Introduction Loneliness is a negative emotion that occurs when an individual perceives that their social needs are not being met. It reflects an imbalance between desired and actual social relationships [ 1 ]. Survey data suggest that loneliness is a common experience, with a fifth of adults in the UK [ 2 ] and a third of adults in the USA [ 3 ] reporting feeling lonely sometimes. There has been increasing research focused on loneliness as a determinant of health. Meta-analytic evidence suggests that loneliness is a predictor of all-cause mortality, indicating that lonely individuals have a 22% greater risk of death when compared with non-lonely individuals [ 4 ]. Loneliness has a negative effect on cardiovascular health and has been associated with incident CHD [ 5 ]. This is of relevance in type 2 diabetes, as CHD is a frequent complication of the condition and a leading cause of death in this population [ 6 ]. It is plausible that deleterious cardiometabolic factors associated with loneliness could contribute to type 2 diabetes [ 7 ]. Loneliness is associated with ageing [ 2 , 3 ] and obesity [ 8 ], both of which are major risk factors for type 2 diabetes [ 9 ]. Further, evidence from large observational cohort studies indicates that loneliness is associated both cross-sectionally [ 8 ] and prospectively [ 10 ] with the metabolic syndrome. However, studies associating loneliness with HbA1c have been less consistent [ 11 , 12 ]. To date, no study has prospectively associated loneliness with incident type 2 diabetes, although there is evidence of a cross-sectional association [ 13 , 14 ]. Some studies have investigated social isolation [ 15 , 16 , 17 , 18 , 19 ] or living alone [ 17 , 20 , 21 , 22 ] as risk factors for type 2 diabetes. However, it is important to note that loneliness is not synonymous with social isolation as it relates to the perceived quality rather than quantity of social connections [ 1 ]. 
Further, there is evidence that loneliness and isolation are differentially associated with health outcomes [ 23 , 24 ]. The majority of studies assessing social isolation [ 15 , 16 , 18 , 19 ] and living alone [ 17 , 21 , 22 ] as risk factors for diabetes have failed to observe an association when taking potential confounding factors (such as health behaviours) into account. The German MONICA/KORA (MONitoring of Trends and Determinants in CArdiovascular Disease/Kooperative Gesundheitsforschung in der Region Augsburg [Cooperative Health Research in the Region of Augsburg]) cohort of over 8000 participants found prospective associations of social isolation [ 18 ] and living alone [ 20 ] with incident diabetes, but only in male participants. A more recent analysis of this cohort found that poor social network satisfaction, a measure of relationship quality, increased the risk of type 2 diabetes in men only [ 25 ]. Interestingly, this association was independent of both social isolation and living alone. The current study set out to address whether loneliness was a predictor of incident type 2 diabetes in a representative cohort of adults aged over 50 years living in England. We also aimed to assess whether social isolation and living alone were risk factors for type 2 diabetes. As the relationship between loneliness and social isolation is suggested to be weak to moderate for older people [ 24 ], we hypothesised that loneliness, social isolation and living alone would exert independent effects on type 2 diabetes risk. Further, it is important to consider the impact of depression as a possible confounding variable in the relationship between loneliness and type 2 diabetes. Previous research indicates that loneliness has a reciprocal relationship with depression [ 26 ]. Depression is also a possible pathway through which loneliness impacts cardiometabolic health [ 10 ], with a large body of evidence suggesting that depressed individuals are more likely to develop type 2 diabetes than those without depression [ 27 ]. Given this, we considered depressive symptoms in our analyses. Methods Participants The study used data from the English Longitudinal Study of Ageing (ELSA), a representative panel study of adults aged 50 and older living in England. Data collection began in 2002–2003 (wave 1), with follow-up waves biennially [ 28 ]. Self-reported questionnaire and interview data are collected at each wave, and biological and anthropometric data are collected at alternate waves. Ethical approval for ELSA was obtained from the National Research Ethics Service. All participants provided informed consent. In the current study, we investigated the association between loneliness measured at wave 2 (2004–2005; the first wave in which loneliness was assessed) and incident type 2 diabetes from wave 3 (2006–2007) to wave 8 (2016–2017). Participants included in the analysis self-reported that they were free of diabetes/high blood sugar at baseline (2004–2005). The median follow-up time was 10 years. A total of 8780 participants took part in wave 2. Participants were included in our study if they had complete data on loneliness and covariates at baseline (2004–2005) and if they provided follow-up data on self-reported type 2 diabetes. Those with HbA1c values in the diabetes range [ 9 ] (≥6.5%; 48 mmol/mol) at baseline were excluded. A flowchart of those included and excluded from the study can be found in Fig. 1. Our analytical sample was 4112 participants. 
Fig. 1 Flow diagram of participants included and excluded from the analyses. In comparison with those excluded from the analysis ( n = 4668), those included were significantly less lonely, and were more likely to be younger, wealthier and of white ethnicity ( p < 0.001). They were less likely to smoke, were more physically active and were less likely to have hypertension or CVD at baseline ( p < 0.001). They had a lower BMI on average ( p < 0.001) and were more likely to consume alcohol regularly than those excluded from the analysis ( p = 0.002). No sex differences were evident ( p = 0.098). Measures Predictor variable: loneliness We assessed loneliness with the three-item revised University of California, Los Angeles (UCLA) Loneliness Scale [ 29 ]. Participants rated items such as ' How often do you feel you lack companionship? ' with response options of 1, 'hardly ever/never'; 2, 'some of the time'; and 3, 'often'. Ratings were averaged to produce a score ranging from 1 to 3, with higher values indicating greater loneliness [ 23 ]. We also assessed loneliness as a continuous summed score (range 3–9) in supplementary analyses [ 8 , 30 ]. The Cronbach's α of the scale was 0.82 in our sample. Outcome variable: type 2 diabetes incidence Time to self-reported type 2 diabetes was assessed between wave 3 (2006–2007) and wave 8 (2016–2017). At each wave, participants were asked whether a physician had given them a diagnosis of diabetes or high blood sugar since their last interview. Time of diagnosis was indexed as the wave at which diabetes/high blood sugar was first reported. Time to event was measured in months from wave 2 (2004–2005) to the follow-up wave when diabetes/high blood sugar was first mentioned. For those not diagnosed with diabetes by wave 8, time to censoring was the time from wave 2 to drop-out. Covariates The covariates included in our analyses were measured at baseline (2004–2005). Participants self-reported their age, sex (man/woman) and ethnicity (white/non-white). We controlled for household non-pension wealth, which has been found to be the most relevant indicator of socioeconomic position for this cohort [ 28 ]. Wealth was divided into quintiles across the entire wave 2 sample. Participants self-reported whether they smoked (non-smoker/smoker), their frequency of physical activity (light or none weekly/moderate or vigorous once a week/moderate or vigorous more than once a week) and their alcohol consumption (≥5 times a week/<5 times a week). Height (cm) and weight (kg) were objectively measured during the nurse visit at wave 2 and used to calculate BMI (kg/m²). Participants self-reported whether they had received a doctor diagnosis of hypertension and this was combined with the objective nurse measure of blood pressure to create a binary variable (no/yes). We defined hypertension as systolic blood pressure ≥140 mmHg and diastolic blood pressure ≥90 mmHg. Participants self-reported whether they had angina, myocardial infarction or stroke, and we used this information to generate a measure of prevalent CVD (no/yes). HbA1c was objectively measured during the nurse visit and samples were analysed at the Royal Victoria Infirmary laboratory, Newcastle upon Tyne, UK. HbA1c values are reported in Diabetes Control and Complications Trial units (%) and International Federation of Clinical Chemistry units (mmol/mol). 
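A scoring sketch for the loneliness measure as described: the three UCLA items, coded 1–3, are averaged for the primary 1–3 score and summed for the supplementary 3–9 score. Item and column names are hypothetical.

```python
import pandas as pd

# Hypothetical item names; responses coded 1 ('hardly ever/never')
# to 3 ('often') as in the text.
df = pd.DataFrame({
    "ucla_companionship": [1, 3, 2],
    "ucla_left_out": [1, 2, 2],
    "ucla_isolated": [1, 3, 1],
})
items = df.filter(like="ucla_")
df["loneliness_mean"] = items.mean(axis=1)  # range 1-3, primary predictor
df["loneliness_sum"] = items.sum(axis=1)    # range 3-9, supplementary
print(df)
```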
Secondary predictor variables Depression Depressive symptoms were measured using the eight-item Centre for Epidemiological Studies Depression Scale (CES-D) [ 31 ], where higher scores indicate greater symptoms. Items included statements such as ' I felt depressed ' and ' My sleep was restless '. We excluded the CES-D item on loneliness to avoid direct overlap with the loneliness scale. A dichotomous response to each item (0 = 'no'; 1 = 'yes') resulted in a total score ranging from 0 to 7. In line with previous work [ 23 ], a score ≥6 was used to define severe depressive symptoms. We also assessed depressive symptoms as a total score in supplementary analyses [ 30 ]. The internal consistency of the measure was acceptable (α = 0.76). Living alone and social isolation Participants self-reported whether they lived alone (no/yes). Social isolation was measured using an index based on the extent of contact within a person's social network and their involvement with social organisations [ 23 , 30 ]. Participants were asked about their frequency of contact with their children, other family and friends, with response options of 'less than once a year/never', 'once or twice a year', 'every few months', 'once or twice a month', 'once or twice a week' and 'three or more times a week'. Participants received a point if they had less than monthly face-to-face or telephone contact with each of the three categories of social tie. Participants received another point if they did not participate in any social organisation (e.g. social or sports clubs, churches or residents' groups). Total scores ranged from 0 to 4, with higher scores indicating greater isolation. Few participants received a score of 4, so we combined categories 3 and 4. Statistical analysis Descriptive characteristics of the sample are presented as either mean (SD) or number (percentage). The characteristics of those who did and did not develop type 2 diabetes were compared using t tests for continuous variables and χ² tests for categorical variables. Associations between loneliness and sample characteristics were assessed using Pearson's correlations for continuous variables and univariate ANOVAs for categorical variables. We established that the proportional hazards assumption was not violated using log (−log [survival]) vs log (time) graphs. Following this, we used Cox proportional hazards regression to investigate the association between loneliness and type 2 diabetes incidence, controlling for age, sex, wealth, ethnicity, smoking, physical activity, alcohol consumption, BMI, hypertension, CVD and HbA1c (Model 1). Loneliness was entered as a continuous variable, so the HR and 95% CIs represent a 1-unit increase. In secondary analyses, additional covariates were added to the model to test the independent effect of loneliness on diabetes incidence. In Model 2, depression was added. In Model 3, living alone was included. In Model 4, social isolation was added. Model 5 was the final model and included loneliness, all covariates, depression, living alone and social isolation together as predictors of diabetes incidence. We conducted collinearity diagnostic tests to check for collinearity. Variance inflation factors were <1.26, suggesting collinearity was not present. For graphical purposes, the total loneliness score (range 3–9) was dichotomised using a median split into low loneliness (scores of 3) and high loneliness (scores 4–9). Incident cases are plotted on a graph to reflect the time to diagnosis for these groups. 
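The Cox model described above (the authors used SPSS) can be sketched with the lifelines Python package; the data here are simulated and the covariate set is abbreviated, so this is a shape-of-the-analysis illustration rather than a reproduction of Model 1.

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(0)
n = 500
# Simulated analytic sample with hypothetical variable names and an
# abbreviated covariate set.
df = pd.DataFrame({
    "months": rng.uniform(6, 144, n),       # follow-up time in months
    "t2d": rng.binomial(1, 0.07, n),        # incident type 2 diabetes
    "loneliness": rng.uniform(1, 3, n),     # averaged UCLA score (1-3)
    "age": rng.normal(65, 9, n),
    "bmi": rng.normal(27, 4, n),
    "hba1c": rng.normal(5.5, 0.4, n),
})

cph = CoxPHFitter()
cph.fit(df, duration_col="months", event_col="t2d")
cph.print_summary()  # exp(coef) of loneliness = HR per 1-unit increase
```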
We conducted a sensitivity analysis to address the possibility of reverse causality by excluding participants who developed diabetes within 2 years of baseline (wave 3; 2006–2007). In supplementary analyses, we examined whether there was a moderating effect of age, sex or ethnicity on the association between loneliness and type 2 diabetes by adding interaction terms to Model 1. Age was entered as a mean-centred interaction term. We also checked whether the pattern of results changed when entering loneliness and depression as continuous scores. Analyses were conducted using IBM SPSS Statistics for Macintosh, version 24 (IBM, Armonk, New York, USA). Results Participant characteristics A total of 4112 participants took part in the study and, of these, 264 (6.42%) developed type 2 diabetes over the follow-up period. An overview of participant characteristics at baseline, along with a comparison of those who did and did not develop type 2 diabetes, can be found in Table 1. Those who developed diabetes were significantly lonelier (1.42 ± 0.53) on average than those who did not develop diabetes (1.33 ± 0.47; p = 0.013). They were more likely to be male ( p = 0.001) and of non-white ethnicity ( p = 0.018), and to be less well off financially ( p = 0.001), than those who did not develop diabetes. Those in the diabetes group were significantly less likely to consume alcohol regularly ( p = 0.025) and were more likely to have hypertension ( p < 0.001) and CVD ( p = 0.005) at baseline than those in the non-diabetes group. They also had a higher BMI ( p < 0.001) and greater HbA1c levels ( p < 0.001) on average. Those who developed diabetes reported significantly higher depressive symptoms at baseline (1.57 ± 1.97) than those who did not develop diabetes (1.20 ± 1.61; p = 0.003). The groups did not differ in age, smoking, physical activity, social isolation or living alone at baseline ( p > 0.073). Table 1 Participant characteristics (2004–2005) according to diabetes status (2006–2017) We investigated associations between loneliness and demographic and clinical characteristics. Loneliness was significantly positively associated with age ( r = 0.05, p < 0.001), HbA1c ( r = 0.03, p = 0.027) and depressive symptoms ( r = 0.45, p < 0.001). Lonelier participants were more likely to be female ( F [1,1440] = 6.58; p = 0.010) and non-white ( F [1,1440] = 46.01; p < 0.001) than less lonely participants. Loneliness was associated with a greater likelihood of smoking ( F [1,1440] = 15.28; p < 0.001) and physical inactivity ( F [2,1409] = 22.51; p < 0.001), as well as a reduced likelihood of regular alcohol consumption ( F [1,1440] = 16.28; p < 0.001). Lonelier participants were also more likely to have CVD ( F [1,1440] = 22.91; p < 0.001) and to live alone ( F [1,1440] = 364.93; p < 0.001) than less lonely participants. No significant associations between loneliness and BMI, hypertension or social isolation were observed. Loneliness (2004–2005) and type 2 diabetes incidence (2006–2017) The findings from the Cox regression models can be found in Table 2. Loneliness was a significant predictor of incident type 2 diabetes over the follow-up period (HR 1.46; 95% CI 1.15, 1.84; p = 0.002) independent of age, sex, ethnicity, wealth, smoking, physical activity, alcohol consumption, BMI, HbA1c, hypertension and CVD (Model 1). As can be seen in Model 2, the association between loneliness and later type 2 diabetes was robust to adjustment for depressive symptoms (HR 1.42; 95% CI 1.10, 1.84; p = 0.008). 
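And a sketch of the moderation check described above: a mean-centred age by loneliness interaction term added to the Cox model, again with simulated data and hypothetical names; a non-significant interaction coefficient corresponds to the reported absence of moderation.

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(1)
n = 500
df = pd.DataFrame({
    "months": rng.uniform(6, 144, n),
    "t2d": rng.binomial(1, 0.07, n),
    "loneliness": rng.uniform(1, 3, n),
    "age": rng.normal(65, 9, n),
})
df["age_c"] = df["age"] - df["age"].mean()           # mean-centred age
df["age_x_lonely"] = df["age_c"] * df["loneliness"]  # interaction term

cph = CoxPHFitter()
cph.fit(df.drop(columns=["age"]), duration_col="months", event_col="t2d")
cph.print_summary()  # non-significant interaction -> no moderation by age
```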
Living alone (Model 3) and social isolation (Model 4) were not significant predictors of type 2 diabetes. Our final model (Model 5) shows the independent association between loneliness and type 2 diabetes onset (HR 1.41; 95% CI 1.04, 1.90; p = 0.027), controlling for a range of covariates, as well as depressive symptoms, living alone and social isolation. A one-point increase in the averaged loneliness score was associated with a 41% increase in the hazard of type 2 diabetes onset (95% CI estimate between 4% and 90%). A graphical representation of the Model 5 findings can be found in Fig. 2. The associations did not vary by age, sex or ethnicity (see electronic supplementary material [ESM] Table 1). Table 2 Cox proportional hazards regression of loneliness, living alone and social isolation (2004–2005) on diabetes incidence (2006–2017) Fig. 2 Survival curve of loneliness on type 2 diabetes incidence Sensitivity analysis: loneliness (2004–2005) and type 2 diabetes incidence (2006–2017) We conducted a sensitivity analysis excluding participants who reported a type 2 diabetes diagnosis within 24 months of the baseline assessment. As can be seen in Table 3, loneliness remained a significant predictor of incident diabetes (HR 1.54; 95% CI 1.11, 2.13; p = 0.009) independent of covariates, depressive symptoms, living alone and social isolation. We also assessed whether entering loneliness and depressive symptoms as continuous scores altered the results. The findings remained consistent when treating the measures in this way (see ESM Table 2). Table 3 Sensitivity analysis showing Cox proportional hazards regression of loneliness, living alone and social isolation (2004–2005) on diabetes incidence (2008–2017) Discussion To our knowledge, this study is the first to examine the association of loneliness with later type 2 diabetes incidence. Our findings show that loneliness is a robust predictor of type 2 diabetes incidence over 12 years of follow-up, independent of a range of covariates, including sociodemographic factors, health behaviours and cardiometabolic comorbidities. This association was upheld when depressive symptoms were taken into account. We also assessed loneliness, social isolation and living alone simultaneously as predictors of type 2 diabetes incidence. In this analysis, loneliness remained an independent predictor of later type 2 diabetes. No significant associations for social isolation or living alone were observed. No previous study has prospectively associated loneliness with incident type 2 diabetes, although this relationship has been assessed cross-sectionally [ 13 , 14 ]. One analysis of 8593 older people living in Denmark found that loneliness was associated with diabetes in women only [ 13 ]. However, a larger cohort of over 20,000 Swiss nationals observed an association between loneliness and diabetes in both male and female participants [ 14 ]. Cross-sectional analyses cannot determine whether loneliness stimulates type 2 diabetes onset, or whether loneliness is an emotional manifestation of the strain of diabetes diagnosis on close social relationships. Our prospective results therefore add to the literature in establishing that loneliness is a predictor of type 2 diabetes incidence, independent of baseline HbA1c. Given the observational nature of this study, causality cannot be inferred. 
Our sensitivity analysis excluding cases of type 2 diabetes reported within 2 years of baseline aimed to address the risk of reverse causality. The observation that the association between loneliness and incident type 2 diabetes remained after these more immediate cases were excluded adds weight to the temporal sequence. Social isolation and living alone were not independent risk factors for type 2 diabetes onset in this study. This result is in keeping with the majority of previous studies, which have also failed to observe an association between social isolation [ 15 , 16 , 18 , 19 ] or living alone [ 17 , 21 , 22 ] and type 2 diabetes incidence when taking sociodemographic factors, health behaviours and clinical characteristics into account. Our findings are in contrast with analyses from the MONICA/KORA Augsburg cohort, where prospective associations of social isolation [ 18 ] and living alone [ 20 ] with incident diabetes were observed in male participants. We did not observe a moderating effect of sex on the relationship between social isolation or living alone and diabetes incidence (data not shown). There were some differences in the measures of social isolation employed in the studies, which may have contributed to the diverging results. Both measures included frequency of contact with social ties and organisation membership. However, our index was unweighted and did not include living alone, as we preferred to assess the predictive value of this factor independently. There is better concordance between the findings of the present study and a more recent analysis of the MONICA/KORA Augsburg cohort [ 25 ]. This study assessed perceived relationship quality by asking 6839 participants to rate their satisfaction with friends and relatives on a one-item scale. Over 14 years of follow-up, men with lower social network satisfaction had a greater risk of type 2 diabetes than those with higher satisfaction ratings. This association was robust to adjustment for social isolation and living alone. Similarly, in the current study we found that loneliness was a predictor of incident type 2 diabetes, independent of social isolation or living alone. This finding highlights the need to examine loneliness, social isolation and living alone as distinct risk factors for poor health outcomes [ 23 , 24 ]. It also supports previous work suggesting that these factors may be only weakly related for older adults [ 24 ]. Depression is the most widely studied psychosocial risk factor for diabetes [ 7 ], and loneliness and depression are suggested to have a reciprocal relationship [ 26 ]. Therefore, we considered depressive symptoms as a potential confounder of the relationship between loneliness and type 2 diabetes risk. Our findings suggest that loneliness increases the risk of type 2 diabetes independently of depressive symptomatology. This is in keeping with the idea that loneliness and depression are distinct constructs [ 26 ]. The mechanisms through which loneliness serves to increase the risk of type 2 diabetes remain to be elucidated. Theoretical work in this area suggests that loneliness is characterised by maladaptive hypervigilance for social threats [ 1 ]. This cognitive bias leads lonely individuals to perceive the social world as threatening, leading to patterns of inappropriate social behaviour that may evoke negative responses from peers, which reinforce the bias. 
Poor health behaviours are suggested to be one pathway through which the maladaptive hypervigilance of loneliness can impact health [1]. In a previous analysis of the ELSA cohort, loneliness was associated with an increased likelihood of smoking and physical inactivity [30], as well as obesity [8]. However, most studies that have associated loneliness with ill health have taken these factors into account in their analyses [4, 5]. Our findings were independent of smoking, physical inactivity, alcohol consumption and BMI. Another possibility is that direct biological mechanisms link loneliness with ill health [1]. Frequent activation of stress-related biological systems as a result of chronic loneliness could lead to 'wear and tear' on the body, resulting in dysregulation across multiple biological systems. For example, loneliness has been associated with disturbances in cortisol in naturalistic [32] and experimental settings [33] in healthy samples. Cortisol plays an important mechanistic function related to type 2 diabetes [7], and dysregulation in daily cortisol output is predictive of new-onset pre-diabetes and type 2 diabetes [34]. In samples with overt type 2 diabetes, loneliness has been associated with dysregulation in cortisol responses to acute laboratory stress [35]. Loneliness is also associated with inflammation [36], which is of relevance to type 2 diabetes as pooled evidence suggests that heightened inflammation is a risk factor for the condition [37]. Indeed, loneliness has been associated with heightened inflammation in laboratory settings in people with diagnosed type 2 diabetes [35]. Our findings must be considered in terms of strengths and weaknesses. Our sample was drawn from a longitudinal, nationally representative cohort, which allowed the examination of type 2 diabetes incidence over a relatively long follow-up period. The analyses took a variety of potential confounding variables into account, and we included several measures of social integration to assess the impact of the quality and the quantity of social connections on type 2 diabetes risk. However, our study was not without limitations. Our data were observational and therefore we cannot infer causality. The strength of the association was small. Our measures were brief and likely do not capture the full complexity of experiences of loneliness, social isolation or living alone. Further, loneliness was only assessed at one timepoint, meaning our measure could reflect transient rather than persistent loneliness. However, theoretical work in the field suggests loneliness is relatively stable over time and may reflect a dispositional tendency [1]. Depression was not associated with diabetes incidence in our study, as previously reported in this cohort [38]. It is possible that our loneliness measure better reflects the English cultural expression of low mood than the depression measure used in this cohort. Our measure of type 2 diabetes was based on self-report rather than objective records; however, previous work suggests there is high concordance between self-reported and clinically derived diabetes diagnoses [39]. The precise timing of diabetes onset was unknown, and the assumption that interval survival times are exact can lead to biased estimates. Missing data are unavoidable in general population cohorts such as ELSA, and we excluded participants with missing data.
Those who were included were healthier, wealthier and less lonely than those who had dropped out, meaning selection bias due to non-random exclusion is possible. This may limit the generalisability of our findings. Finally, as there are few ethnic minority participants in ELSA, our findings may not generalise to non-white populations. The current study highlights loneliness as a risk factor for type 2 diabetes for the first time. Further work is required to understand the potential causal nature of this relationship, as well as underlying mechanisms. There has been increasing interest in designing interventions to alleviate loneliness, with the most promising results detected for studies addressing maladaptive social cognitions, particularly through the use of cognitive behavioural therapy [ 40 ]. In line with our results, prevention strategies should focus on the quality rather than the quantity of social relationships, as increasing social contact is unlikely to alleviate feelings of loneliness [ 40 ]. It remains to be discovered whether these types of interventions or policies to address loneliness in older people could help prevent the onset of type 2 diabetes. Data availability Data from the English Longitudinal Study of Ageing are freely available to download from the UK Data Service at . Abbreviations CES-D: Centre for Epidemiological Studies Depression Scale ELSA: English Longitudinal Study of Ageing MONICA/KORA: MONitoring of Trends and Determinants in CArdiovascular Disease/Kooperative Gesundheitsforschung in der Region Augsburg (Cooperative Health Research in the Region of Augsburg)
Published in the journal Diabetologia (the journal of the European Association for the Study of Diabetes [EASD]), the study shows that it is the absence of quality connections with people, and not the lack of contact, that predicts the onset of type 2 diabetes, suggesting that helping people form and experience positive relationships could be a useful tool in prevention strategies for type 2 diabetes. The results have implications in light of recent findings that people with diabetes are at greater risk of dying from COVID-19. The study indicates that prolonged loneliness may influence the development of diabetes, suggesting the experience of lockdown could potentially compound people's vulnerability in this pandemic if the loneliness continues for some time. Loneliness occurs when an individual perceives that their social needs are not being met, and reflects an imbalance between desired and actual social relationships. A fifth of adults in the UK and a third of adults in the USA report feeling lonely sometimes. There is a growing interest in the role of loneliness in health, and previous research has associated loneliness with increased risk of death and heart disease. This is the first study to investigate the association between the experience of loneliness and the later onset of type 2 diabetes. The study analyzed data from the English Longitudinal Study of Ageing on 4112 adults aged 50 years and over, collected at several time points from 2002 to 2017. At the start of data collection all participants were free of diabetes and had normal levels of blood glucose. The study showed that over a period of 12 years, 264 people developed type 2 diabetes, and that the level of loneliness measured at the start of data collection was a significant predictor of the onset of type 2 diabetes later in life. This relationship remained intact when accounting for smoking, alcohol, weight, level of blood glucose, high blood pressure and cardiovascular disease. The association was also independent of depression, living alone and social isolation. Lead author Dr. Ruth Hackett from the Institute of Psychiatry, Psychology & Neuroscience (IoPPN), King's College London, said: "The study shows a strong relationship between loneliness and the later onset of type 2 diabetes. What is particularly striking is that this relationship is robust even when factors that are important in diabetes development are taken into account, such as smoking, alcohol intake and blood glucose, as well as mental health factors such as depression. The study also demonstrates a clear distinction between loneliness and social isolation, in that isolation or living alone does not predict type 2 diabetes whereas loneliness, which is defined by a person's quality of relationships, does." She continued: "I came up with the idea for the research during UK lockdown for the COVID-19 pandemic, as I became increasingly aware of and interested in how loneliness may affect our health, especially as it is likely that many more people were experiencing this difficult emotion during this period." According to the study, a possible biological reason behind the association between loneliness and type 2 diabetes could be the impact of constant loneliness on the biological system responsible for stress, which over time affects the body and increases the risk for diabetes. "If the feeling of loneliness becomes chronic," explained Dr. Hackett,
"Then everyday you're stimulating the stress system and over time that leads to wear and tear on your body and those negative changes in stress-related biology may be linked to type 2 diabetes development." Another explanation for the findings could be biases in our thinking that may perpetuate the association between loneliness and diabetes as when people feel lonely, they expect people will react to them negatively which makes it more difficult to form good relationships.
10.1007/s00125-020-05258-6
Nano
Imaging technique reveals strains and defects in vanadium oxide
Zachary Barringer et al, Imaging defects in vanadium(iii) oxide nanocrystals using Bragg coherent diffractive imaging, CrystEngComm (2021). DOI: 10.1039/D1CE00736J
http://dx.doi.org/10.1039/D1CE00736J
https://phys.org/news/2021-10-imaging-technique-reveals-strains-defects.html
Abstract Defects in strongly correlated materials such as V 2 O 3 play influential roles in their electrical properties, so understanding the defects' structure is of paramount importance. In this project, we investigate defect structures in V 2 O 3 grown via a flux method. We use AFM to examine surface features in several large flake-like particles that exhibit characteristics of spiral growth. We also use Bragg coherent diffractive imaging (BCDI) to probe, in three dimensions, a smaller particle without flake-like morphology, and note an absence of the pure screw dislocation characteristic of spiral growth. We identified and measured several defects by comparing the observed local displacement of the crystal, measured via BCDI, to well-known models of the displacement around defects in the crystal. We identified two partial dislocations in the crystal. We discuss how defects of different types influence the morphology of V 2 O 3 crystals grown via a flux method. Transition metal oxides such as vanadium oxides are interesting for a variety of technical applications, from electrochemical anodes 1,2 to unique optical applications 3,4 and supercapacitors. 5 Vanadium(iii) oxide (V 2 O 3 ) in particular has been of interest due to its temperature-driven first-order metal–insulator phase transition, which changes a variety of electronic and optical properties. 6 The properties of vanadium oxide systems have also been shown to depend strongly on morphology and growth conditions, making it a versatile material for many applications. 7,8 Defects have been shown to heavily influence a variety of both mechanical and electronic properties of nanoscale materials, which in turn influences their performance in devices. 9–13 In particular, dislocations – while uncommon in nanoparticles – have been shown to affect lithiation in nanoparticle anodes in Li-ion batteries, 14 the optical performance of light-emitting diodes, 9 and the mechanical deformation of nanostructures. 11,15 The combination of potential nanoscale device applications and the dependence of device performance on the type and density of defects results in a need to further understand the formation and control of defects in vanadium oxides, and how defects can influence particle characteristics such as morphology or physical properties. In this paper we demonstrate a flux method for growing V 2 O 3 crystallites. We use transmission electron microscopy (TEM) to analyze the crystal orientation of the crystallites. We use Bragg coherent diffractive imaging (BCDI) to retrieve high-resolution volumetric information about a crystallite showing a significantly different shape than the flake-like particles observed in TEM and optical microscopy. With the BCDI measurements we identified a few partial dislocations with mixed screw and edge character. BCDI is a technique that circumvents the phase problem of X-ray diffraction by over-sampling the diffraction pattern to reconstruct the phase of the diffracted X-rays. 14,16,17 It is used to obtain high-resolution volumetric information about strain and lattice distortion. In BCDI one reconstructs the phase of the light scattered by a set of crystallographic planes satisfying the Bragg condition. This recovered phase contains information about distortions to the crystal lattice in the direction normal to the diffracting planes.
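To make the phase–displacement relationship concrete, the short sketch below converts a reconstructed phase into a displacement projected on [001] for the (006) reflection used later in this paper. The hexagonal lattice parameter c ≈ 14.0 Å is an assumed literature value for V 2 O 3 , and the snippet is illustrative rather than part of the authors' analysis.

```python
# The reconstructed BCDI phase is the projection of the lattice displacement
# onto the scattering vector: phi(r) = Q0 . u(r). For the V2O3 (006)
# reflection, a 2*pi phase shift corresponds to one (006) plane spacing.
import numpy as np

c = 14.0e-10               # hexagonal lattice parameter c of V2O3 (assumed, m)
d_006 = c / 6              # (006) interplanar spacing (~2.33 Angstrom)
Q0 = 2 * np.pi / d_006     # magnitude of the scattering vector (1/m)

phase = np.linspace(-np.pi, np.pi, 5)   # example reconstructed phase values (rad)
u_001 = phase / Q0                      # displacement along [001] (m)
print(u_001 * 1e12)                     # picometres; +/- pi maps to +/- d_006/2
```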
BCDI has been used extensively to map Bragg electronic density, displacement, ferroelectric domains and strain within the volume of crystals with nanoscale resolution; 18–25 the best reported resolution to date is 4–9 nm. 26 It has been shown that BCDI can detect signatures of defects by measuring the disruptions to the long-range order of the crystal. 27–30 By examining these long-range effects it is possible to identify defects below the resolution limit of the technique, such as dislocation cores. We use BCDI in conjunction with AFM, TEM, and optical microscopy to analyze defects in V 2 O 3 crystallites grown via a flux method. We report on the nature of defects in two crystallites with different shapes: a flake-like crystallite with features typical of crystals grown via a screw-dislocation-driven growth mechanism, 31,32 and a smaller particle showing a more complex defect structure, featuring several mixed dislocations. We argue that differences in the morphology of these particles are potentially a consequence of the different defect structures of the crystals. 1 Methods V 2 O 3 crystals were grown by a flux method. A 5 ml graphite crucible containing 0.2 g of V 2 O 5 powder and 3.28 g of anhydrous KCl (a V 2 O 5 : KCl mole ratio of 1 : 10) was placed in an alumina tube inside a furnace. The tube was evacuated to 0.1 Torr and then filled with H 2 (5%)/Ar gas. A steady flow of 50 sccm H 2 (5%)/Ar was maintained throughout the growth at ambient pressure. The crucible was heated at 20 °C/min to 900 °C, held for 10 hours, and then cooled at 0.5 °C/min to ambient temperature. After growth, the contents of the crucible were washed in water (to dissolve the KCl) and black shiny crystals were obtained. The as-grown crystals were analyzed with optical microscopy, AFM, and BCDI. The optical microscopy was performed using a Nikon Eclipse Ti–S inverted optical microscope to determine the approximate size and shape of the crystals. The AFM measurements were conducted with a MultiMode™ AFM in order to study the topography of the particles. The BCDI was performed at the I13-1 beamline of the Diamond Light Source in order to more closely investigate defects indicated by the AFM measurements. The diffraction patterns used for BCDI were collected as illustrated in Fig. 1. A monochromatic beam of energy 11.0 keV and bandwidth ΔE/E = 1 × 10⁻⁴ was focused onto our sample with randomly dispersed (00l)-oriented crystallites. A Fresnel zone plate with a diameter of 400 μm and an outer zone width of 150 nm was used to focus the beam. The sample was positioned slightly defocused, downstream of the focal spot of the Fresnel zone plate; the final spot size was approximately 2 μm full-width at half-maximum, with a divergence of approximately 50 × 25 μrad. The detector was an Excalibur photon-counting direct X-ray detector utilizing a Medipix3 chip, with 55 μm × 55 μm pixels. The detector was placed in the vicinity of the V 2 O 3 (006) Bragg diffraction peak at a distance of approximately 3 m from the sample. The sample was scanned to reveal Bragg spots emanating from individual V 2 O 3 crystallites. Well-isolated Bragg peaks with fringe oscillations indicative of coherent illumination were utilized for this study. Once the Bragg condition was optimized and the sample was positioned to the approximate center of rotation for the sample stage, the stage was rocked in increments of about 0.001° while collecting the diffraction patterns.
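As a quick plausibility check on this geometry, the sketch below estimates the real-space sampling interval of the reconstruction and the detector fringe spacing for a ~300 nm crystallite. Only the beam energy, pixel pitch and detector distance come from the setup described above; the array size N and the crystal size are assumptions for illustration.

```python
# Back-of-the-envelope BCDI sampling estimates for the geometry above:
# 11.0 keV beam, 55 um pixels, detector ~3 m from the sample.
import numpy as np

E_keV = 11.0
wavelength = 12.398 / E_keV * 1e-10   # hc/E in metres (~1.13 Angstrom)
det_distance = 3.0                    # m
pixel_pitch = 55e-6                   # m
N = 256                               # detector pixels per side used (assumed)

# Real-space sampling interval (voxel size) of the reconstructed image:
dx = wavelength * det_distance / (N * pixel_pitch)
print(f"voxel size ~ {dx * 1e9:.0f} nm")          # ~24 nm for these numbers

# Oversampling check: the fringe period from a ~300 nm crystal should
# span well over two detector pixels for phase retrieval to work.
crystal_size = 300e-9
fringe_period = wavelength * det_distance / crystal_size
print(f"fringe period ~ {fringe_period / pixel_pitch:.0f} pixels")  # ~21 pixels
```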
These few hundred individual diffraction patterns elucidate the 3D structure of the Bragg peak, which encodes the Bragg electronic population and lattice displacement for the reflection. Fig. 1 (a) Diagram of the experimental setup used for the BCDI measurements in this work. Coherent X-rays from a synchrotron source are directed to the sample using a Fresnel zone plate (not shown) and are diffracted by a V 2 O 3 crystallite. The constructive interference patterns of the diffracted X-rays are recorded while rocking the sample in theta in steps of about 0.001° about the Bragg condition for the (006) peak, effectively measuring the three-dimensional Bragg peak. When the Bragg peak is sufficiently over-sampled, it is possible to apply established phase retrieval algorithms to solve for the complex wave function of the scattered X-rays; a schematic representing the basic form of these phase retrieval algorithms is shown in (b). The retrieved complex scattered wave-function contains information on the Bragg electronic density and lattice displacement of the V 2 O 3 crystallite. BCDI is a method for solving the phase problem of X-ray diffraction measurements. In the far field, and assuming the kinematical scattering limit, the scattered amplitude can be expressed as the Fourier transform of a strained crystal, whose electron density is represented by a complex-valued Bragg electron density. When diffracted X-rays are measured, the intensity I is proportional to the square of this amplitude, as shown in eqn (1):

$$I(\mathbf{q}) \propto \left| \int \rho(\mathbf{r})\, e^{i\,\mathbf{Q}_0 \cdot \mathbf{u}(\mathbf{r})}\, e^{i\,\mathbf{q} \cdot \mathbf{r}}\, \mathrm{d}\mathbf{r} \right|^2 \quad (1)$$

where ρ(r) is the real-valued electronic density, Q_0 is the reciprocal lattice vector for a given reflection, u(r) is the displacement field, and q = k_f − k_i is the momentum transfer vector. The integrand acts as a shape function expressing the location of each atom in the crystal; its complex phase factor reflects the displacement of each atom from its ideal lattice position, projected onto the reciprocal lattice vector. However, this phase component is lost in the measured intensity of an actual X-ray experiment. We algorithmically recover the phase by utilizing an oversampled Bragg peak and repeatedly alternating between real and reciprocal space, using inverse and forward Fourier transforms. The algorithm randomly generates a guess of the phase information for the diffraction pattern and imposes the square root of the measured intensity values as the amplitude, while the crystal is confined to exist within a finite space (the support). The phase of the diffracted X-ray beam was retrieved using a combination of hybrid input–output and error reduction, as described by Fienup. 33 Once the phase of the wavefront is retrieved, it is inverse Fourier transformed into a real-space complex-valued volume representing the crystallite. The amplitude corresponds to the electronic density population of the Bragg planes, and the phase is the projection of the displacement field onto the momentum transfer vector of the scattering event. The real electronic density reflects the periodicity of the crystallite, and the phase is proportional to the displacement of the core electrons from their ideal lattice positions. It is important to note that for highly strained crystals the kinematic theory cannot always provide adequate results, and the dynamical approach generalized by Takagi and Taupin 34,35 should be applied. Detailed overviews of dynamical diffraction theory 36,37 and BCDI 16,22,38 are given in a number of available works.
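The alternating-projection loop described above can be sketched in a few lines. The toy implementation below shows generic error-reduction (ER) and hybrid input–output (HIO) updates in the style of Fienup; it assumes a measured amplitude array and a support mask as inputs, and it is not the reconstruction code actually used at the beamline.

```python
# Generic ER/HIO phase retrieval sketch (after Fienup). `measured_amps`
# is the square root of the measured intensities; `support` is a boolean
# mask confining the crystal to a finite region. Illustrative only.
import numpy as np

def phase_retrieval(measured_amps, support, n_iter=200, beta=0.9, hio=True):
    rng = np.random.default_rng(0)
    # Random initial phase guess combined with the measured amplitudes.
    G = measured_amps * np.exp(2j * np.pi * rng.random(measured_amps.shape))
    g = np.fft.ifftn(G)
    for _ in range(n_iter):
        # Fourier-space constraint: keep the current phase, impose the
        # measured amplitudes.
        G = np.fft.fftn(g)
        G = measured_amps * np.exp(1j * np.angle(G))
        g_new = np.fft.ifftn(G)
        if hio:
            # HIO: outside the support, relax the estimate toward zero.
            g = np.where(support, g_new, g - beta * g_new)
        else:
            # ER: zero the estimate outside the support.
            g = np.where(support, g_new, 0.0)
    # |g| approximates the Bragg electron density; np.angle(g) is the
    # phase, i.e. the displacement projected onto the scattering vector.
    return g
```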
2 Results The optical microscopy image in Fig. 2a shows three V 2 O 3 crystals of different shapes and sizes. The left crystal has a flake-like shape (thin), the lower-right one has a particle-like shape (thick), while the upper-right one has a flake and a particle merged together. The angles between the crystal edges are approximately 60° or 120° in the small crystals (upper-right and lower-right), indicating a hexagonal structure. However, for the large crystal (left), the shape is irregular, but there are 120°-angled step edges on the surface. The lateral sizes of the crystals are on the order of tens of microns. The TEM image in Fig. 2b shows a thin flake. It is difficult to transfer and locate the same flake from the TEM grid to a substrate for the BCDI experiment; however, these samples are likely to have the same crystal structure. Also, they may have different defect (dislocation) densities due to different supersaturation in different regions of the crucible during growth. The corresponding fast Fourier transform (FFT) image in Fig. 2c reveals the α-corundum structure (metal phase) and the high crystalline quality of this flake, with the [001] axis normal to its surface. Similar geometries have been observed in other materials systems where growth conditions favor spiral crystal growth. 12,31 Fig. 2 Morphology, crystal structure and growth mechanism of V 2 O 3 microcrystals. (a) Optical image of V 2 O 3 flake-like microcrystals. (b) TEM image of a V 2 O 3 flake. (c) Electron fast Fourier transform (FFT) image with a zone axis [001] of the V 2 O 3 flake in (b), confirming the single-crystal nature of the flake with the [001] axis normal to its surface. (d) and (e) AFM images of a V 2 O 3 flake, revealing a screw-dislocation-driven growth mechanism. (e) Is an enlarged view (dashed square) of the screw core in (d). (f) Height profile of the red line in (d), showing an average step height of 4 nm (about three unit cells of V 2 O 3 ). (g) and (h) Schematic of the screw-dislocation-driven growth. The screw dislocation in a nucleus creates a step edge and promotes the growth (curved gray arrow) (g), and eventually a dislocation spiral is formed (h). (i) Schematic of the atomic structure of the screw core in a V 2 O 3 crystal. The initial step edge is indicated as white atoms. The preliminary results showing a suspected dislocation made this material sample an interesting candidate for coherent X-ray imaging. Originally, the sample was selected for Bragg ptychography characterization, another coherent X-ray imaging technique in which a sample larger than the beam size is imaged by measuring the diffraction pattern from multiple, overlapping regions of the crystallite. This was due to the size and shape of the particles observed in previous measurements; however, during the experiment we observed a diffraction pattern of a smaller particle – in the size range where BCDI is effective – so we collected data and analyzed this seemingly anomalous particle. Coherent X-ray imaging is well suited for this application since it creates a map of the atomic displacement with nanoscale resolution. While the resolution is too low to resolve individual atomic positions, it is possible to detect and identify dislocations based on the long-range effect they have on the atomic positions compared to the perfect crystal. The BCDI reconstruction of the small crystallite showed that the measured crystal was hexagonal, with a lateral size around 300 nm and a height of about 1 micron.
The Bragg electron density at the top of the crystal was very low compared to the base of the crystallite. This could potentially be due to a low contribution to the Bragg peak from a very defective portion of the crystal. The phase of this low-electron-density region has several discontinuities and large gradients, which supports the idea of a very defective crystal. Due to the low Bragg electron density and complicated phase structure, we avoid making claims about the specific crystallographic features present in this portion of the crystal, and it is excluded from the tracing of dislocation cores shown in Fig. 3b. Fig. 3 (a) Shows the diffraction peak used for the coherent imaging reconstruction. (b) Shows the outline of the dislocation cores in the base of the particle, based on the phase of the reconstructed crystal. The displacement is directly proportional to the phase of the reconstructed complex crystal, as shown in eqn (1). The yellow solid lines represent the cores for the mixed and partial dislocations. (c) and (d) Show the location in the particle of the cross sections shown in (e) and (f) respectively; the color of the images represents the phase of the crystal. The z-coordinate in (c) and (d) points towards the [006] direction, and the z-coordinate in (e) and (f) points towards the [001] direction. These phase maps show signatures of dislocations, with dislocation lines approximately along [001]. By converting phase to displacement we can determine the Burgers vector of the dislocation. (g) and (h) Show radial plots of the displacement of the crystal around the dislocation cores, with an inset showing the pixels for one of the radii used for the fitting process. In the lower region of the crystal, there are several discontinuities in the phase, accompanied by regions of low electronic density. These features are signatures of defects. We used the algorithm published by Ulvestad et al. to estimate the path of dislocations through the crystal. 27 The results of this dislocation tracing are shown in Fig. 3b. This algorithm identifies the path of dislocations by tracing the singularity of the phase around dislocation cores, and it is unable to identify the path of a dislocation if it lacks this phase signature. Discontinuities in the traced dislocations are likely due to the Burgers vector becoming perpendicular to the momentum transfer of the scattering event used for the reconstruction. This is noteworthy because dislocations must either terminate at the surface of the crystallite or form closed loops, 39 so this is necessarily an incomplete picture of the dislocations in the crystallite. The results of this algorithm suggest that the defect structure of this small nanoparticle is much more complex than was suspected for the flake-like particle. We attempted to characterize some of the dislocations by fitting the phase to standard models for the displacement around an edge dislocation, as previously demonstrated. 14 The displacement around a dislocation depends on the type of dislocation. Assuming the elastic limit, the displacement around a screw dislocation far from the dislocation core can be described as: 39

$$u_z = \frac{b}{2\pi}\,\theta = \frac{b}{2\pi}\arctan\!\left(\frac{y}{x}\right) \quad (2)$$

where b is the magnitude of the Burgers vector and θ is the azimuthal angle around the dislocation line. All of the displacement is in a single direction, along the direction of the Burgers vector and dislocation line. Similarly, the displacement around an edge dislocation can be analytically expressed with two components.
One component is perpendicular to the plane of extra atoms and parallel to the Burgers vector: 14

$$u_x = \frac{b}{2\pi}\left[\arctan\!\left(\frac{y}{x}\right) + \frac{xy}{2(1-\nu)(x^2+y^2)}\right] \quad (3)$$

and a component is parallel to the extra plane of atoms and perpendicular to the Burgers vector: 14

$$u_y = -\frac{b}{2\pi}\left[\frac{1-2\nu}{4(1-\nu)}\ln(x^2+y^2) + \frac{x^2-y^2}{4(1-\nu)(x^2+y^2)}\right] \quad (4)$$

where ν is Poisson's ratio. A displacement field indicative of a dislocation can be seen in Fig. 3e. This feature is not well fit by the equations for edge or screw dislocations; rather, its angular dependence is like that of a mixed dislocation, which can be modelled with a combination of edge and screw components: 39

$$\mathbf{u} = \mathbf{u}_e + \mathbf{u}_s \quad (5)$$

The displacement was fit at several radii to ensure that the dislocation had been rigorously identified, as previously demonstrated by Ulvestad et al. 14 One angular plot for this mixed dislocation demonstrating the fit of the model is shown in Fig. 3g. The dislocation was well fit by a Burgers vector of , where a is the lattice parameter, with a dislocation line along [001]. The data was well fit by this dislocation, with a root-mean-square error (RMSE) of 0.035. It is noteworthy that there is a reported partial dislocation in the R3̄c crystal structure with a Burgers vector of . 40,41 This mixed dislocation appears to occur where we would expect a screw dislocation to occur in spiral-assisted growth. The lack of a pure, perfect screw dislocation here may help explain the discrepancy in particle shape between the large flakes observed in the AFM and optical microscopy measurements and the smaller particle measured with BCDI. The displacement in another portion of the crystal, shown in Fig. 3f, was fit using the same technique. One of the circular profiles plotted is shown in Fig. 3h. Using several radii to rigorously identify the dislocation, the Burgers vector was again identified as . The quality of fit for this dislocation is marginally lower than for the previous dislocation, with an RMSE of 0.055. This is possibly due to neighboring dislocations influencing the displacement in the region. In a similar analysis performed by Ulvestad et al., a correction for neighboring systems of edge dislocations was introduced; 14 this correction was not replicated here because the asymmetrical nature of the dislocation system indicates that the neighboring dislocations may not be equal and opposite, and because the proximity of the dislocation to the edge of the crystallite makes identification using concentric radial plots difficult. The nature of the dislocations identified in BCDI may provide some insight into the stark contrast between the particle identified in the BCDI measurement and the particles observed with optical and AFM measurements. The presence of a partial dislocation implies the existence of a stacking fault near the base of the crystal; stacking faults have been shown to reduce the energy in thin films, 42 but were only seen in films up to 8–10 nm thick – much thinner than the particle observed here. The nature of these dislocations could lend insight into the varied morphology of the grown crystallites. Further investigation with BCDI or other techniques could help fully elucidate the role mixed dislocations play in the stunted growth we observed in this system. 3 Conclusions In summary, we used BCDI to probe the 3D defect structure of a particle, and found a complex defect structure that may have influenced the crystal morphology. Our results hint at the role of mixed, partial dislocations in crystal growth. BCDI is a promising tool for investigating the complex role defects play in crystallite morphology.
The impact of defect density and type on particle morphology remains a topic of further investigation. Also of potential further interest is what impact these defects or corresponding geometry have on the key properties of V 2 O 3 , such as the metal–insulator transition. Conflicts of interest There are no conflicts to declare. Acknowledgements This work was supported by the US Department of Defense, Air Force Office of Scientific Research under Award No. FA9550-14-1-0363 (Program Manager: Dr. Ali Sayir) and funds from Rensselaer Polytechnic Institute. The authors would like to acknowledge Diamond Light Source for time on beamline I13-1 under proposal number MT20381-1. J. J. and J. S. acknowledge the Air Force Office of Scientific Research under award number FA9550-18-1-0116. E. F. and J. S. acknowledge the National Science Foundation under award no. 2024972.
Researchers led by Edwin Fohtung, an associate professor of materials science and engineering at Rensselaer Polytechnic Institute, have developed a new technique for revealing defects in nanostructured vanadium oxide, a widely used transition metal oxide with many potential applications including electrochemical anodes, optical applications, and supercapacitors. In the research—which was published in an article in the Royal Society of Chemistry journal CrystEngComm, and also featured on the cover of the edition—the team detailed a lensless microscopy technique to capture individual defects embedded in vanadium oxide nanoflakes. "These observations could help explain the origin of defects in structure, crystallinity, or composition gradients observed near grain boundaries in other thin-film or flake technologies," said Fohtung, an expert in novel synchrotron scattering and imaging techniques. "We believe that our work has the potential to change how we view the growth and non-destructive three-dimensional imaging of nanomaterials." Vanadium oxide is currently used in many technological fields such as energy storage, and can also be used in constructing field-effect transistors owing to metal–insulator transition behavior that can be adjusted with an electric field. However, strain and defects in the material can alter its functionality, creating the need for non-destructive techniques to detect those potential flaws. The team developed a technique based on coherent X-ray diffraction imaging. This technique relies on a type of circular particle accelerator known as a synchrotron. Synchrotrons work by accelerating electrons through sequences of magnets until they reach almost the speed of light. These fast-moving electrons produce very bright, intense light, predominantly in the X-ray region. This synchrotron light, as it is named, is millions of times brighter than light produced from conventional sources and 10 billion times brighter than the sun. Fohtung and his students have successfully used this light to develop techniques for capturing minute features of matter, such as atoms and molecules, and now defects. When used to probe crystalline materials, this technique is known as Bragg coherent diffraction imaging (BCDI). In their research, the team used a BCDI approach to reveal nanoscale properties of electron densities in crystals, including strain and lattice defects. Fohtung worked closely with Jian Shi, a Rensselaer associate professor of materials science and engineering. They were joined in the research on "Imaging defects in vanadium(III) oxide nanocrystals using Bragg coherent diffractive imaging" by Zachary Barringer, Jie Jiang, Xiaowen Shi, and Elijah Schold at Rensselaer, as well as researchers at Carnegie Mellon University.
10.1039/D1CE00736J
Medicine
Experimental 'blood test' accurately screens for PTSD
Molecular Psychiatry (2019). DOI: 10.1038/s41380-019-0496-z Journal information: Molecular Psychiatry
http://dx.doi.org/10.1038/s41380-019-0496-z
https://medicalxpress.com/news/2019-09-experimental-blood-accurately-screens-ptsd.html
Abstract Post-traumatic stress disorder (PTSD) impacts many veterans and active duty soldiers, but diagnosis can be problematic due to biases in self-disclosure of symptoms, stigma within military populations, and limitations identifying those at risk. Prior studies suggest that PTSD may be a systemic illness, affecting not just the brain, but the entire body. Therefore, disease signals likely span multiple biological domains, including genes, proteins, cells, tissues, and organism-level physiological changes. Identification of these signals could aid in diagnostics, treatment decision-making, and risk evaluation. In the search for PTSD diagnostic biomarkers, we ascertained over one million molecular, cellular, physiological, and clinical features from three cohorts of male veterans. In a discovery cohort of 83 warzone-related PTSD cases and 82 warzone-exposed controls, we identified a set of 343 candidate biomarkers. These candidate biomarkers were selected from an integrated approach using (1) data-driven methods, including Support Vector Machine with Recursive Feature Elimination and other standard or published methodologies, and (2) hypothesis-driven approaches, using previous genetic studies for polygenic risk, or other PTSD-related literature. After reassessment of ~30% of these participants, we refined this set of markers from 343 to 28, based on their performance and ability to track changes in phenotype over time. The final diagnostic panel of 28 features was validated in an independent cohort (26 cases, 26 controls) with good performance (AUC = 0.80, 81% accuracy, 85% sensitivity, and 77% specificity). The identification and validation of this diverse diagnostic panel represents a powerful and novel approach to improve accuracy and reduce bias in diagnosing combat-related PTSD. Introduction Combat-related post-traumatic stress disorder (PTSD) has a lifetime prevalence of 10.1–30.9% in U.S. veterans of the Vietnam and subsequent conflicts, including the Iraq and Afghanistan wars [1, 2, 3, 4]. PTSD is precipitated by experiencing or witnessing actual or threatened death, serious injury, or violence, and has symptoms that include re-experiencing, avoidance, negative thoughts or moods associated with the traumatic event, and hyperarousal (DSM-5 [5]). There is limited understanding of the biological processes underlying the core features of PTSD and associated psychiatric and somatic comorbidity [6]. Limited progress in the discovery of biological markers of PTSD has hampered accurate diagnosis, early identification of cases, staging and prognosis, stratification, personalized treatment, and new drug development. Additionally, individuals meeting diagnostic criteria for PTSD represent a heterogeneous group, as evidenced by differences in symptomatology, course, and treatment response [7]. Currently, case identification is limited by heavy reliance on self-reported symptoms for a disorder in which many trauma survivors under-report symptoms because of stigma, and some over-report symptoms for financial or other gains. Personalized treatment selection is limited by errors of omission (failing to identify individuals who would likely benefit from a specific behavioral or biological treatment) and errors of commission (treating individuals who are unlikely to benefit from a specific treatment), in part because of the lack of validated diagnostic and prognostic markers.
Previous PTSD biomarker studies have primarily focused on using gene expression for predicting risk and diagnosis [8, 9, 10, 11]. These studies have demonstrated moderate success in identifying predictive and diagnostic markers, but have been limited by small sample sizes, as well as by a focus on an individual molecular data type. In cancer, multi-site, integrated multi-omic studies have shown great promise in generating novel insights into disease mechanism, diagnostic and predictive markers, and signals of progression and stratification [12, 13, 14]. These studies have included high-throughput 'omics data such as genomics, transcriptomics, proteomics, methylomics, lipidomics and metabolomics [15]. By employing a systems biology framework, multi-omic datasets provide the ability to understand the underlying disease network-associated biological processes [16]. The systems biology approach aims to characterize a large and diverse set of molecules within an illness or individual by examining entire biological systems, not just individual components, allowing the assessment of interactions among levels of cellular pathology, ranging from DNA to circulating metabolites [17, 18, 19]. This approach has the potential to provide a more comprehensive characterization of illnesses, to track underlying biological dysregulation before clinical symptoms develop or worsen, to lead to the identification of improved diagnostic markers, and to allow for the discovery of novel targets for treatment [20]. In 2012, the Department of Defense initiated a multi-site "PTSD Systems Biology Consortium", which applied multiple 'omics technologies to the same sample of combat-exposed PTSD and control participants. The goals of the PTSD Systems Biology Consortium included developing a reproducible panel of blood-based biomarkers with good sensitivity and specificity for PTSD diagnosis. Here, we present the identification and validation of a set of multi-omic biomarkers for diagnosing warzone-related PTSD. Materials and methods Study inclusion criteria General inclusion criteria included being an Operation Enduring Freedom (OEF) and/or Operation Iraqi Freedom (OIF) male veteran between 20 and 60 years old, being able to understand the protocol and sign written informed consent, and meeting criteria for either the PTSD-positive or PTSD-negative group. PTSD-positive participants were defined as participants who met DSM-IV PTSD criteria for current warzone-related PTSD of at least 3 months' duration, as indexed by the Clinician-Administered PTSD Scale (CAPS), with a minimum total score ≥ 40, calculated by summing the frequency and intensity ratings for each symptom. Full criteria for a DSM-IV diagnosis of PTSD were also met for all PTSD-positive participants. PTSD-negative controls were combat-exposed veterans who were negative for lifetime combat or civilian PTSD and had a current CAPS total score < 20. All study participants were exposed to DSM-IV PTSD Criterion A trauma during deployment. Detailed recruitment, enrollment, and exclusion criteria are listed in the Supplemental Material and Methods. Clinical assessment measures The Structured Clinical Interview for DSM (SCID) was used to determine whether participants met DSM-IV diagnostic criteria for mood, anxiety, psychotic, and substance use disorders [21].
The CAPS was used to determine combat-related PTSD status, as well as the severity of current PTSD symptoms (past month; "CAPS current") and the severity of the most severe lifetime episode of combat-related PTSD ("CAPS lifetime") [22]. Molecular assays Blood samples were assayed for many molecular species, including genetics, methylomics, proteomics, metabolomics, immune cell counts, cell aging, endocrine markers, microRNAs (miRNAs), cytokines, and more. DNA methylation was quantified using two approaches: a genome-wide unbiased approach, and a targeted sequencing-based approach. The genome-wide methylation approach quantified methylation using the Illumina Infinium HumanMethylation450K BeadChip array (Illumina Inc., CA). Using targets generated from this genome-wide approach, as well as other hypotheses generated from the literature, a smaller set of methylation sites was evaluated by targeted sequencing via Zymo Research (Zymo Research, CA). Plasma miRNAs were evaluated using small RNA sequencing, and processed using sRNAnalyzer [23]. Proteins were evaluated using three methods: peptide quantification using selected reaction monitoring (SRM), quantification of six neurodegenerative disease-related markers using the Human Neurodegenerative Disease Panel 1, and quantification of serum levels of BDNF using a BDNF ELISA assay. Non-targeted metabolomics analysis was conducted using three platforms: ultrahigh performance liquid chromatography/tandem mass spectrometry (UHPLC/MS/MS²) optimized for basic species, UHPLC/MS/MS² for acidic species, and gas chromatography/mass spectrometry (GC/MS). Additional data types, including routine clinical lab values and physiological measurements, were collected using standard procedures. Details on all molecular assays and blood draw information are contained in the Supplementary Materials (Table S2). Results Participant recruitment and multi-omic data generation Three cohorts totaling 281 samples from male combat veterans of the OEF/OIF conflicts were recruited as part of a larger study designed to identify biomarkers for PTSD diagnosis using a combination of clinical, genetic, endocrine, multi-omic, and imaging information (Fig. 1). Participants were recruited in three cohorts: discovery, recall, and validation (Fig. 2a and Table 1). The discovery cohort (cohort 1) consisted of 83 PTSD and 82 trauma-exposed control participants who met the inclusion and exclusion criteria (described in Materials and Methods and the Supplementary Material). All participants completed clinical interviews and blood draws. After assessment of data quality, 77 PTSD and 74 trauma-exposed control samples were available with all completed blood marker assays. This discovery cohort was used to generate an initial pool of candidate biomarkers. Participants from the discovery cohort were invited back for clinical re-evaluation and a blood draw approximately three years after their initial evaluation. This cohort of recalled subjects (recall cohort, cohort 2) included 55 participants from the initial discovery cohort. Some of these participants showed PTSD symptom and status changes based on clinical assessment (Fig. 2b). In addition, some participants no longer met the original inclusion/exclusion criteria for the study; these participants had symptoms intermediate between the PTSD and control groups, in some cases meeting criteria for subthreshold PTSD. The 55 recall participants included 15 PTSD, 11 subthreshold PTSD, and 29 control participants.
The third cohort, an independent group of 26 PTSD and 26 control participants, became the validation cohort (cohort 3), used for validating the final set of PTSD biomarkers. Fig. 1 Overview of the PTSD biomarker identification approach—details of cohort recruitment, biomarker identification, down-selection, and validation. Fig. 2 Overview of molecular datasets and cohort symptom severity. a Flow diagram for participant recruitment and enrollment. Participant eligibility was determined through a phone pre-screen and a baseline diagnostic clinical interview. Eligible participants completed fasting blood draws for multi-omic molecular assays. Participants in the initial discovery cohort were invited to return for follow-up in the recall cohort. Some participants returned with symptom changes, including "subthreshold" PTSD symptoms (below original study inclusion criteria). b Trajectory of PTSD symptoms in recalled participants. CAPS totals for current symptoms at baseline (T0) and follow-up (T1) for each participant are connected. Participants who remained in the PTSD+ group at both time points are shown in red. Participants who remained in the PTSD− group are shown in blue. Participants with PTSD status changes are shown in gray, including participants who became "subthreshold" PTSD cases. c Distribution of molecular data types at three stages of biomarker identification: the full exploratory dataset (All Data), the reduced set of 343 potential biomarkers (candidate set) and the final panel of 28 biomarkers (final set). Methylation and GWAS data represent 99% of the initial data screen due to high-throughput arrays. Other molecular data types are well represented in the second and final stages of biomarker identification and selection. Table 1 Summary of cohort demographics and clinical symptoms. PTSD cohorts and multi-omic datasets To identify a minimally invasive PTSD diagnostic panel, blood-based multi-omics and other analytes were assayed for each individual (and during both visits for recalled participants), including DNA methylation, proteomics, metabolomics, miRNAs, small molecules, endocrine markers, and routine clinical lab panels. Additionally, physiological measures were recorded and nonlinear marker combinations were computed. Using a strategy described in the next sections, a robust and diverse 28-member biomarker panel for diagnosing PTSD was identified from this pool of more than one million markers (Fig. 2c). Three-stage biomarker identification and down-selection from exploratory set of multi-omic data We used a "wisdom of crowds" approach to identify candidate PTSD biomarkers from the large set of measured blood analytes. Utilizing the domain expertise of multiple researchers, as well as multiple algorithms and methodologies, collective intelligence has the potential to identify successful candidate biomarkers from a large dataset, particularly when knowledge is limited. Collective intelligence and "wisdom of crowds" approaches are often used in financial modeling and predictions [24], have been evaluated in medical decision-making [25], and are the motivation for ensemble classification methods, which have been shown to outperform individual classifiers [26]. From a diverse set of data-driven, hypothesis-driven, hybrid, and other approaches (Table S3), we identified a set of candidate diagnostic panels, totaling 343 unique potential biomarkers (Step 2 from Fig. 1 and Table S4).
These approaches included COMBINER [27], polygenic risk [28, 29], traditional Support Vector Machine with Recursive Feature Elimination (SVM-RFE), random forest, and other classification algorithms, as well as feature selection approaches including p-value, q-value, and fold-change filtering. Details of these algorithms are listed in the Supplementary Material. To filter and refine the pool of candidate biomarkers, we used data from recalled participants (recall cohort, cohort 2). Many of these returning participants experienced symptom changes over the 3.3 ± 0.9 years (mean ± sd) between the initial and follow-up evaluations. CAPS totals for recalled participants at both time points are shown in Fig. 2b. The panel was refined using the recall cohort along with a two-stage down-selection approach to select the final set of PTSD biomarkers (Steps 4–5 from Fig. 1). The two-stage down-selection process is based on the following methodology. In the first stage, poor-performing candidate biomarkers were removed one-by-one based on the largest average AUC of the remaining biomarker set (Step 4, Fig. 1). The trajectory of AUC scores in the recall cohort is shown in Supplementary Fig. 1A, which shows the average AUC at each step of the one-by-one elimination. The biomarker set with the largest average AUC prior to the final performance decline was selected, resulting in 77 remaining biomarkers. To further reduce the number of features in the panel, we implemented a second stage of down-selection, based on random forest variable importance (Fig. 1, Step 5). Using the recall cohort, the remaining 77 biomarkers were sorted based on random forest variable importance (Supplementary Fig. 1B). We retained biomarkers with importance >30% of the maximum importance score for the final biomarker panel (n = 28). The dynamics and distribution of these 28 biomarkers in the discovery and recall cohorts are shown in Supplementary Figs. 2 and 3. Validation of a robust, multi-omic PTSD biomarker panel After the two-stage feature reduction strategy, the final biomarker set consisted of 28 features, including methylation, metabolomics, miRNA, protein, and other data types. A random forest model trained on the combined cohorts 1 and 2 predicted PTSD status in an independent validation set (cohort 3) with an area under the ROC curve (AUC) of 0.80 (95% CI 0.66–0.93, Fig. 3a). Using the point closest to (0,1) on the ROC curve (shown in Fig. 3a), the model was validated with an accuracy of 81%, sensitivity of 85%, and specificity of 77%. The PTSD participants in the validation cohort had CAPS scores ranging from 47–114. We found that predicted PTSD scores from the random forest model for these cases were correlated with total CAPS (r = 0.59, p = 0.001), indicating that the current biomarker model predicts not only disease status, but potentially also PTSD symptom severity of cases (Fig. 3b). In addition, predicted PTSD scores were moderately correlated with DSM-IV re-experiencing, avoidance, and hyperarousal symptoms (r = 0.44–0.53, Supplementary Fig. 4), suggesting that the identified molecular markers are not specific to a single symptom cluster, but to overall symptoms. Fig. 3 Validation of biomarker panels. a ROC curve for the identified biomarker panel (28 markers), illustrating good performance in an independent validation dataset (26 cases, 26 controls). Shaded region indicates the 95% confidence interval, determined by 2000 bootstrapping iterations.
Operating point closest to (0,1) on the ROC curve used for calculating sensitivity, specificity, and accuracy. b Predicted probability of PTSD based on the trained random forest model using a biomarker panel of 28 features. In PTSD participants, predicted PTSD probability is correlated with PTSD symptom severity, measured by CAPS (r = 0.59, p < 0.01). c Random forest variable importance of the final 28 biomarkers. Variable importance was determined using the biomarker model training data (cohorts 1 and 2). The top 10 biomarkers, based on random forest variable importance, contain multiple data types, including methylation markers (cg01208318, cg20578780, and cg15687973), physiological features (heart rate), miRNAs (miR-133a-1-3p, miR-192-5p, and miR-9-1-5p), clinical lab measurements (insulin and mean platelet volume), and metabolites (gammaglutamyltyrosine). d Correlation between PTSD biomarkers. Pearson correlation coefficients were computed in the combined set of all three cohorts. The final set of identified biomarkers shows small clusters of moderately correlated features, primarily grouped by molecular data type (proteins, miRNAs, and methylation markers). e Biomarker panel performance evaluation during panel refinement, across molecular data types, and in nonlinear features. The validation AUC improves after biomarker down-selection and model refinement. The final biomarker panel validates with a greater AUC than the initial biomarker candidate pool (343 markers, AUC = 0.74) and the stage-one refined panel (77 markers, AUC = 0.75). The final multi-omic panel also outperforms each individual molecular data type. Performance metrics are shown for the nonlinear feature combinations, the Global Arginine Bioavailability Ratio (GABR) and lactate/citrate. Both nonlinear combinations outperform their individual components in AUC (0.60 vs. 0.51 and 0.55 vs. 0.52 for GABR and lactate/citrate, respectively). Error bars indicate 95% confidence intervals, determined by 2000 bootstrapping iterations. f Validation performance by ethnicity, and in the presence of major depressive disorder (MDD). Validation performance in Hispanic participants was higher than in other ethnic groups (non-Hispanic White, non-Hispanic Black, non-Hispanic Asian). PTSD cases with comorbid MDD (n = 9) are easily distinguishable from all combat-exposed controls (n = 26), with AUC = 0.92, while PTSD cases without comorbid MDD (n = 17) are only moderately distinguishable from controls (n = 26), with AUC = 0.73. Overall, the set of identified PTSD biomarkers contains many molecular data types (DNA methylation, miRNAs, proteins, metabolites, and others), with signals primarily including under-expressed proteins and miRNAs, and signatures of both DNA hyper- and hypomethylation. Of the 28 markers comprising the final panel, 16 markers had consistent fold-change directions in all three cohorts (Table 2). Five of the final 28 markers were retained during panel refinement even though the fold-change direction was inconsistent between the discovery and recall cohorts, indicating that these features may contain relevant PTSD signal that is not purely measured by group differences in mean. A post hoc analysis of the biomarker panel performance without these inconsistent features resulted in decreased validation performance (AUC = 0.74 and 0.71 when using only markers with consistent fold-change directions across the discovery and recall cohorts (23 markers), and across all three cohorts (16 markers), respectively).
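The two-stage down-selection described in the Results can be summarized in code. The sketch below is a schematic reimplementation under stated assumptions (scikit-learn estimators, five-fold cross-validated AUC standing in for the "average AUC", and a pandas feature table X with labels y for the recall cohort); it mirrors the described procedure, not the consortium's actual pipeline.

```python
# Schematic two-stage biomarker down-selection: (1) greedy one-by-one
# elimination tracking the average AUC of the remaining panel, then
# (2) keeping features whose random-forest importance exceeds 30% of the
# maximum. X: feature dataframe, y: PTSD labels (placeholders below).
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

def panel_auc(X, y, features):
    rf = RandomForestClassifier(n_estimators=200, random_state=0)
    return cross_val_score(rf, X[features], y, cv=5, scoring="roc_auc").mean()

def stage_one(X, y, features):
    """Drop features one-by-one, keeping the panel with the best average AUC."""
    features = list(features)
    trajectory = [(panel_auc(X, y, features), list(features))]
    while len(features) > 2:
        # Remove the feature whose absence yields the highest panel AUC.
        scored = [(panel_auc(X, y, [g for g in features if g != f]), f)
                  for f in features]
        best_auc, worst_feature = max(scored)
        features.remove(worst_feature)
        trajectory.append((best_auc, list(features)))
    return max(trajectory)[1]

def stage_two(X, y, features, frac=0.30):
    """Keep features with importance > frac * maximum importance."""
    rf = RandomForestClassifier(n_estimators=500, random_state=0)
    rf.fit(X[features], y)
    cutoff = frac * rf.feature_importances_.max()
    return [f for f, imp in zip(features, rf.feature_importances_) if imp > cutoff]

# Example usage on placeholder data (8 candidate markers, 55 participants):
rng = np.random.default_rng(0)
X = pd.DataFrame(rng.normal(size=(55, 8)), columns=[f"m{i}" for i in range(8)])
y = rng.integers(0, 2, 55)
panel = stage_two(X, y, stage_one(X, y, X.columns))
print(panel)
```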
Table 2 Overview of biomarker signals in each of the three cohorts. Using random forest variable importance, the top 10 biomarkers from the final 28-marker panel included five of the six molecular data types: DNA methylation, physiological, miRNAs, clinical lab measures, and metabolites (Fig. 3c). These data types contribute primarily uncorrelated signals, with only small clusters of moderately to highly correlated biomarkers from three data types: proteins, miRNAs, and DNA methylation (Fig. 3d). Through the biomarker identification and down-selection process, two intermediate biomarker sets were identified, consisting of 343 and 77 candidate biomarkers. Random forest models trained on these biomarker sets validated with slightly lower AUCs than the final biomarker panel (AUCs of 0.74, 0.75, and 0.80 for the 343-, 77-, and 28-biomarker panels; Fig. 3e). The consistent validation AUC indicates robust signal in these sets of candidate biomarkers, without loss of signal during down-selection from 343 to 28 features. The final panel of 28 markers consisted of six different data types: routine clinical lab markers, metabolites, DNA methylation marks, miRNAs, proteins, and physiological measurements. The combined panel outperformed all six panels composed of each individual data type (Fig. 3e), demonstrating the power of combining different types of markers in a diverse biomarker panel capable of capturing the complexities of PTSD. Two biomarker features included in our final panel are computed, nonlinear metrics: the Global Arginine Bioavailability Ratio (GABR, defined as arginine/[ornithine + citrulline]) and lactate/citrate. These computed ratios outperform their combined individual components in predictive performance, indicating that biologically driven nonlinear features may enhance low signals (Fig. 3e). In addition, these ratios begin to alleviate single-sample normalization issues that need to be addressed for clinical use of a biomarker panel. Evaluation of clinical and demographic factors The cohorts recruited for this study are diverse in terms of ethnicity, educational background, clinical symptoms, overall health, and comorbid diseases and conditions. The heterogeneity of the participants included in these three cohorts, including race, age, and clinical comorbidities, as well as PTSD severity, is shown in Table 1. To evaluate the performance of this biomarker panel in the context of participant demographics and other clinical factors, we computed biomarker performance in stratified subsets of the validation cohort. While biomarker performance was highest in Hispanic participants (AUC = 0.95), we observed no statistically significant differences in AUC across ethnicities (Fig. 3f). Multiple studies have examined the increased prevalence and greater symptom severity of PTSD in Hispanic populations [47, 48], which may correspond to stronger biological signals, leading to the differences in AUC. In the validation cohort, 35% of PTSD cases also met the criteria for major depressive disorder (MDD). Using the identified biomarker panel and model, these PTSD+/MDD+ cases could be distinguished from all controls with an AUC of 0.92, while the PTSD+/MDD− cases could only be distinguished from controls with an AUC of 0.73 (Fig. 3f). Similarly, predicted PTSD scores were more strongly correlated with PTSD symptom severity in PTSD+/MDD+ participants than in PTSD+/MDD− participants, with r = 0.64 and r = 0.37, respectively (Supplementary Fig. 5).
This decrease in prediction accuracy and correlation with PTSD symptoms in the absence of comorbid MDD indicates a potential overlap of biological signals for MDD and PTSD that should be explored further. Discussion This study presents the identification and validation of a biomarker panel for the diagnosis of combat-related PTSD. The panel consists of 28 features that perform well in distinguishing PTSD cases from combat-exposed controls in a male, veteran population (81% accuracy). Some of the biomarkers have been linked to PTSD previously, including elevated heart rate [ 36 ] and decreased levels of coagulation factors [ 10 ], and other included markers have been linked to MDD, anxiety, and other comorbid conditions, including platelet volume [ 43 , 44 ], insulin resistance [ 41 , 49 ], alterations in the SHANK2 gene [ 30 ], and PDE9A expression [ 31 ] (Table 2 ). In particular, the circulating miRNAs selected in the panel reflect the diverse pathology and comorbidities present in PTSD populations, including connections to metabolic diseases and cardiovascular conditions. miR-133-3p, a member of the myomiRs that are highly abundant in muscle, including cardiac muscle, has been implicated in cardiomyocyte differentiation and proliferation [ 50 ]. Circulating miR-133-3p levels have been linked to various cardiovascular disorders, including myocardial infarction, heart failure, and cardiac fibrosis [ 51 , 52 ]. miR-9-5p is enriched in the brain [ 40 ] and is a known regulator of neurogenesis; it is also involved in heart development and heart hypertrophy [ 53 ]. miR-192 is highly abundant in the liver, and circulating miR-192-5p levels have been associated with various liver conditions as well as metabolic diseases such as obesity and diabetes [ 37 , 38 ]. Circulating miR-192 levels have also been used as a biomarker for ischemic heart failure [ 54 ]. In addition to molecular markers, our approach selected heart rate as a contributor to the PTSD diagnostic panel. More than two decades ago, heart rate differences were observed between eventual PTSD cases and controls during emergency room visits and at 1-week follow-ups after trauma [ 36 ]. While these differences did not persist at longer time points in Shalev's study, we observed significant mean group differences in heart rate in two of the three cohorts from this study, a number of years following trauma exposure ( p < 0.01 for discovery and validation cohorts). Heart rate alone predicts diagnosis of PTSD in the validation cohort with 69% accuracy. Of note, removing heart rate from our biomarker panel did not result in significantly decreased model performance (the molecular-only panel without heart rate still achieves 75% accuracy). Following the heart rate analysis, we evaluated all other biomarkers contained in the panel individually. Three other markers achieved at least 60% accuracy in the validation cohort: gammaglutamyltyrosine, insulin, and cg01208318. However, using any of these markers individually resulted in greater variance in validation accuracy, based on 2000 bootstrapping iterations. Additionally, we note that the most important markers selected during model refinement (based on random forest variable importance, Supplementary Fig. 1B ) were not the top-performing individual markers in the validation cohort. Without an additional validation cohort, validation performance cannot be used to hand-select top-performing individual markers.
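The bootstrap behind these variance estimates is easy to sketch. Assuming scikit-learn and NumPy, with hypothetical y_true and y_score arrays standing in for validation labels and predicted probabilities, a 2000-iteration bootstrap of the validation AUC might look like this:

```python
import numpy as np
from sklearn.metrics import roc_auc_score

def bootstrap_auc_ci(y_true, y_score, n_boot=2000, seed=0):
    y_true = np.asarray(y_true)
    y_score = np.asarray(y_score)
    rng = np.random.default_rng(seed)
    n = len(y_true)
    aucs = []
    while len(aucs) < n_boot:
        idx = rng.integers(0, n, n)            # resample participants with replacement
        if len(np.unique(y_true[idx])) < 2:    # AUC needs both cases and controls
            continue
        aucs.append(roc_auc_score(y_true[idx], y_score[idx]))
    lo, hi = np.percentile(aucs, [2.5, 97.5])  # 95% confidence interval
    return float(np.mean(aucs)), (float(lo), float(hi))
```

The spread of the bootstrap distribution, not just the interval, is what supports the comparison above: a single marker resampled this way shows a wider spread than the full 28-marker panel.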
During additional rounds of panel validation and development, individual markers and smaller subsets of this biomarker panel should be evaluated. Strengths and limitations The cohorts recruited for this study were subject to strict inclusion and exclusion criteria, intentionally creating a pool of moderate to severe cases of combat-related PTSD to compare with asymptomatic controls among men deployed to Iraq and/or Afghanistan. To understand the clinical utility of the proposed biomarker panel, further validation is required in other PTSD populations, including active duty soldiers, populations with civilian trauma, female cohorts, and carefully phenotyped populations with and without many conditions commonly comorbid with PTSD. This study design may have allowed for the clearest and strongest signals of combat-related PTSD to emerge, but will need additional validation in cohorts of individuals with chronic PTSD (>10 years), individuals who recover from PTSD, and those with intermediate PTSD symptoms (CAPS from 20–40), where the current model performance may be decreased. Additionally, this study used DSM-IV criteria for diagnosing PTSD to ensure consistency across all cohorts. Hoge et al. [ 55 ] determined that 30% of combat veterans who meet DSM-IV diagnostic criteria for PTSD do not meet DSM-5 criteria for PTSD. The impact of using DSM-5 should be evaluated for this specific set of biomarkers in future cohorts. Many studies have emphasized the high rates of PTSD comorbidity with other conditions, including depression [ 56 ], anxiety [ 57 ], alcoholism and substance abuse [ 58 ], cardiovascular disease [ 59 ], diabetes [ 60 ], and others. A robust PTSD biomarker panel should be (i) specific to PTSD and not any of these or other comorbidities, and (ii) able to detect PTSD in both the presence and absence of these comorbid conditions. To further identify potential confounders, additional samples including MDD without PTSD, diabetes with and without PTSD, and other conditions should be studied to evaluate the specificity of the panel further. In an exploratory search of more than one million markers, we assayed a range of molecular data types, including DNA methylation marks, proteins, miRNAs, and metabolites. Owing to quality control and other limitations, several molecular data types were incomplete and therefore excluded from biomarker identification and refinement. These included gene expression, immune cell counts, and cytokine assays. Some of these assays were completed for the discovery cohort, and were included in early approaches for candidate biomarker selection. Any identified biomarker candidates from these assays were removed prior to down-selection and validation due to lack of data in recall and validation cohorts. The presence of these markers in the discovery phase may have influenced the selection of candidate biomarkers for some of the machine learning approaches. However, the exclusion of these datasets was not based on biomarker validation performance and therefore could not have affected the final accuracy and performance of the 28-marker panel. In summary, we have presented a robust multi-omic panel for predicting combat-related PTSD diagnosis in male veteran populations. These 28 biomarkers include features from DNA methylation, proteins, miRNAs, metabolites, and other molecular and physiological measurements. 
In an independent validation cohort, we predicted PTSD diagnosis with 81% accuracy, 85% sensitivity, and 77% specificity, indicating that a blood-based screening or diagnostic tool is promising for identifying PTSD, particularly in males with warzone-related PTSD. Disclaimer The views, opinions and/or findings contained in this report are those of the authors and should not be construed as an official Department of the Army position, policy or decision, unless so designated by other official documentation. The views and conclusions contained in this document are those of the authors and should not be interpreted as representing the official policies, either expressed or implied, of the Army Research Laboratory or the U.S. Government. The U.S. Government is authorized to reproduce and distribute reprints for Government purposes notwithstanding any copyright notation herein. Data availability Molecular, clinical, and demographic datasets for all three cohorts are available through the SysBioCube [ 61 ].
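The staged down-selection reported above (343 to 77 to 28 markers) can be illustrated with a short sketch of one of its ingredients: ranking features by random forest variable importance and refitting on the top-ranked subset. The paper combined several statistical and machine-learning approaches, so this is not the authors' full pipeline; scikit-learn is assumed, and X_train and y_train are hypothetical training-cohort arrays.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def downselect(X_train, y_train, keep, n_estimators=500, random_state=0):
    # Fit a forest on all current candidates and rank them by importance
    rf = RandomForestClassifier(n_estimators=n_estimators, random_state=random_state)
    rf.fit(X_train, y_train)
    order = np.argsort(rf.feature_importances_)[::-1]  # most important first
    selected = order[:keep]
    # Refit on the retained subset only
    refit = RandomForestClassifier(n_estimators=n_estimators, random_state=random_state)
    refit.fit(X_train[:, selected], y_train)
    return selected, refit

# Successive refinement stages, mirroring the 343 -> 77 -> 28 narrowing:
# idx77, model77 = downselect(X343, y_train, keep=77)
# idx28, model28 = downselect(X343[:, idx77], y_train, keep=28)
```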
An artificial intelligence tool—which analyzed 28 physical and molecular measures, all but one from blood samples—confirmed with 77 percent accuracy a diagnosis of posttraumatic stress disorder (PTSD) in male combat veterans, according to a new study. Led by NYU School of Medicine, the Harvard John A. Paulson School of Engineering and Applied Sciences, and the U.S. Army Medical Research and Development Command, the study describes for the first time a blood-based biomarker panel for diagnosis of warzone-related PTSD. In the study, published online September 9 in the journal Molecular Psychiatry, the measures included genomic, metabolic, and protein biomarkers. "While work remains to further validate our panel, it holds tremendous promise as the first blood test that can screen for PTSD with a level of accuracy useful in the clinical setting," says senior study author Charles R. Marmar, MD, the Lucius N. Littauer Professor and chair of the Department of Psychiatry at NYU School of Medicine. "If we are successful, this test would be one of the first of its kind—an objective blood test for a major psychiatric disorder." There are currently no FDA-approved blood tests, for instance, for depression or bipolar disorder, says Marmar. The new study embodies a longstanding goal in the field of psychiatry: to shift mental health toward standards like those used in cardiology or cancer, in which lab tests enable accurate diagnoses based on physical measures (biomarkers) instead of on self-reporting or interviews with inherent biases. Those with PTSD experience strong, persistent distress when reminded of a triggering, traumatic event. According to a World Health Organization survey, more than 70 percent of adults worldwide have experienced a traumatic event at some point in their lives, although not all develop the condition. Twenty-Eight Out of a Million For the current study, 83 male, warzone-exposed veterans of the Iraq and Afghanistan conflicts with confirmed PTSD, and another 82 warzone-deployed veterans serving as healthy controls, were recruited from the Manhattan, Bronx, and Brooklyn Veterans Affairs (VA) Medical Centers, as well as from other regional VA medical centers, veterans' service organizations, and the community. The researchers tested nearly one million features with current genomic and other molecular tests and narrowed them to 28 markers. By measuring a large number of unbiased quantities, the team sought to determine which of them were associated with an accurate PTSD diagnosis. Using a combination of statistical techniques, the study authors narrowed the best measures from a million to 343 to 77, and then finally to 28, with the final group outperforming the larger groups in prediction accuracy. Some of this winnowing was accomplished using machine learning: mathematical models trained with data to find patterns. The team then applied their "PTSD blood test" to an independent group of veterans to see how well their new tool matched the diagnoses made previously using standard clinical questionnaires like the Clinician-Administered PTSD Scale (CAPS). This comparison yielded the 77 percent accuracy figure. "These molecular signatures will continue to be refined and adapted for commercialization," says co-senior study author Marti Jett, Ph.D., chief scientist in Systems Biology for the US Army Medical Research & Development Command (USAMRDC), within the US Army Center for Environmental Health Research (CIV USACEHR).
"The Department of Health Affairs within the Department of Defense is considering this approach as a potential screening tool that could identify service members, before and after deployment, with features of unresolved post-traumatic stress." Those identified would be referred for their specific issues (sleep disruption, anger management, etc.), which is available at most military bases, adds Jett. The current study did not seek to explain the disease mechanisms related to the final markers, but rather to blindly pick those that did the best job of diagnosing PTSD. That said, the group of best-performing markers included the activity levels of certain genes, amounts of key proteins in the blood, levels of metabolites involved in energy processing, as well as levels of circulating microRNAs (miRNAs), snippets of genetic material known to alter gene activity and tied to heart diseases and features of PTSD. The one indicator not measured by blood test was the heart rate variability. "These results point toward many biochemical pathways that may guide the future design of new drugs, and support the theory that PTSD is a systemic disease that causes genetic and cellular changes well beyond the brain," says corresponding author Frank Doyle, Ph.D., dean of Harvard John A. Paulson School of Engineering and Applied Sciences, one of the research study's sites. Previous studies of genetic predictors of PTSD risk have shown strong performance in younger, active duty populations, says author Kelsey Dean, Ph.D., a member of Doyle's group at Harvard. This suggests that such biomarkers may be able to signal for PTSD at its earliest ages, and so be useful in prevention. For future research, studies of populations beyond male veterans will be needed to better understand the clinical utility of the proposed biomarker panel.
10.1038/s41380-019-0496-z
Biology
Preventing deformed limbs: New link found between physical forces and limb development
Anisotropic stress orients remodelling of mammalian limb bud ectoderm, Nature Cell Biology (2015) DOI: 10.1038/ncb3156 Journal information: Nature Cell Biology
http://dx.doi.org/10.1038/ncb3156
https://phys.org/news/2015-04-deformed-limbs-link-physical-limb.html
Abstract The physical forces that drive morphogenesis are not well characterized in vivo , especially among vertebrates. In the early limb bud, dorsal and ventral ectoderm converge to form the apical ectodermal ridge (AER), although the underlying mechanisms are unclear. By live imaging mouse embryos, we show that prospective AER progenitors intercalate at the dorsoventral boundary and that ectoderm remodels by concomitant cell division and neighbour exchange. Mesodermal expansion and ectodermal tension together generate a dorsoventrally biased stress pattern that orients ectodermal remodelling. Polarized distribution of cortical actin reflects this stress pattern in a β-catenin- and Fgfr2-dependent manner. Intercalation of AER progenitors generates a tensile gradient that reorients resolution of multicellular rosettes on adjacent surfaces, a process facilitated by β-catenin-dependent attachment of cortex to membrane. Therefore, feedback between tissue stress pattern and cell intercalations remodels mammalian ectoderm. Main It has long been recognized that physical forces underlie embryonic shape changes 1 . New insights based on theory and experiment are progressively decorating this concept with exciting details 2 . However, many aspects of how forces relate to cell behaviours and how interplay between different tissues physically shapes embryonic structures remain unclear, especially among vertebrates. The limb bud derives from the lateral plate and is initially composed of a mesodermal core that is surrounded by a single cell layer of ectoderm. Mesodermal growth initiates limb development and is characterized by oriented cell behaviours that promote elongation of the proximodistal (PD) axis 3 , 4 , 5 , 6 . It has been postulated that early ectoderm and the AER, a stratified epithelial signalling centre that is essential for outgrowth and pattern formation 7 , might help to maintain a narrow dorsoventral (DV) bud axis 8 , 9 . AER formation requires, in part, a signal relay between mesodermal and ectodermal cells that activates canonical Wnt and Fgfr2 signalling in ectoderm 10 , 11 , 12 , 13 , 14 . The importance of these pathways is underscored by mutations that cause limb deficiencies in humans including tetra-amelia and Apert syndrome 15 , 16 . In the mouse embryo, AER precursors derive from a broad domain of primarily ventral ectoderm that transitions from cuboidal to columnar morphology 17 and converges just ventral to the DV boundary 18 , 19 , 20 , 21 . Although it is not clear what underlies this convergence, one possibility suggested previously is that coordinated cell rearrangements drive AER progenitors to move towards the DV compartment boundary 22 . Here we present evidence that supports this concept, and show that both mesodermal and ectodermal forces contribute to the ectodermal stress pattern that guides multicellular remodelling. Our findings suggest that β-catenin and Fgfr2 in part mediate cellular responses to tissue forces. RESULTS Cell topology and intercalation of prospective AER progenitors Cell intercalation is associated with non-hexagonal cell topology 23 . Using whole-mount immunostaining, we observed that ectodermal cells in the pre-AER limb field (20 som. (somite stage)) of the mouse embryo exhibited a wide range of topologies ( Fig. 1a ). 
The distribution of cell interfaces (number of cell neighbours) was centred on five and was shifted to the left compared with a common distribution centred on six interfaces that was previously described for invertebrate and earlier vertebrate embryos 24 ( Fig. 1b ). This finding suggests that some cell behaviours in the mouse embryo may be distinct from those observed in other metazoa. During early limb development (20 som.), ectodermal cells within the limb field became elongated along the DV axis ( Fig. 1a ), suggesting that they were under tension. By live imaging 18–20 som. CAG::H2B–EGFP reporter embryos that ubiquitously express nuclear EGFP (ref. 25 ), we found that cell division orientation was biased dorsoventrally in limb field and non-limb lateral plate ectoderm ( Fig. 1c ). By themselves, these cell behaviours would expand the DV axis of at least the ectodermal layer. Therefore, compensatory cell rearrangements may be important to accommodate new cells without distorting tissue shape. Figure 1: Cell topology and intercalation of AER progenitors. ( a ) Confocal section of rhodamine–phalloidin-stained pre-AER (20 som.) entire limb field (from somite 7 to 11, ∼ 250 ectodermal cells) demonstrating variable and non-hexagonal cell topologies as well as DV elongation of some ectodermal cells (quantified in Supplementary Figs 4 and 5 ). ( b ) Distribution of number of cell neighbours among 18–20 som. limb bud ectodermal cells. ( c ) Polar plot representing metaphase-to-telophase transition angles of limb bud ectoderm cells ( n = 3, 35–40 cell divisions (all cell divisions/2 h time-lapse video) per 18–20 som. embryo). ( d – f ) Confocal projection of the ectodermal Tcf/Lef::H2B–Venus reporter in pre-overt initiation limb field (16 som.; d ), early initiating limb field (18 som.; e ), and post-initiation limb field (22 som.; f ) (blue: DAPI). ( g ) Percentage of Tcf/Lef::H2B–Venus-positive cells in the limb field versus lateral plate in 18–20 som. embryos ( n = 3 embryos; P = 0.0022 (Student's t -test)). ( h ) Percentage of pHH3-positive cells relative to total cells versus percentage of pHH3-positive cells relative to Tcf/Lef::H2B–Venus-positive cells in the limb field in 18–20 som. embryos ( n = 3 embryos; P = 0.26 (Student's t -test)). ( i ) Confocal projection of the ectodermal Tcf/Lef::H2B–Venus reporter in an AER-forming limb bud (32 som.). ( j , k ) Meandering index ( j ) and DV displacement ( k ) among ubiquitously expressed H2B–GFP (representing total cells) versus Tcf/Lef::H2B–Venus (representing AER progenitors) cells in 18–20 som. embryos ( n = 20 cells in 3 embryos for each condition; ( j ) P = 0.86, ( k ) P = 0.74 (Student's t -test)). ( l ) Time-lapse series of a 20 som. limb bud ectoderm expressing Tcf/Lef::H2B–Venus near the DV boundary. Dashed lines highlight regional tissue constriction. ( m ) Model of AER progenitor intercalation just ventral to the DV boundary (red line). Scale bars indicate 10 μm ( a , l ), 50 μm ( d , e , i ), 100 μm ( f ). Error bars indicate s.e.m. As canonical Wnt signalling is essential for AER formation, we employed the transgenic nuclear reporter Tcf/Lef::H2B–Venus to monitor canonical Wnt activation in limb bud cells 26 . This reporter was activated infrequently in the limb field before overt limb initiation (16 som., <E9.0; Fig. 1d ). The number of Tcf/Lef -positive cells increased in limb field ectoderm once bud growth was underway (>18 som., ∼ E9.0; Fig. 1e, f ), but not in non-limb lateral plate ectoderm ( Fig. 1g ).
The proportion of phospho-histone H3 (pHH3)-stained cells was similar between Tcf/Lef -positive and -negative cells ( Fig. 1h and Supplementary Fig. 1a ), suggesting that this increase was not due to a proliferative advantage but rather to differentiation. Tcf/Lef -positive cells were initially found in a broad DV domain (as are AER progenitors in the chick embryo 22 ) and, consistent with previous lineage tracing of AER progenitors in mouse 17 , 20 , became biased to the ventral surface between the 18 and 22 som. stages ( Fig. 1f ) before accumulating in the nascent AER (32 som., ∼ E10.0; Fig. 1i ). This ventral cell compaction is comparable to changes in the domain of Fgf8 expression (an AER marker) 7 and was proposed in a previous model 20 . Although the Tcf/Lef signal is not an indelible label for AER progenitors, we noted that it was not selectively extinguished among dorsal cells during 1–3 h live imaging sessions. Rather, cells moved along the DV axis ( Supplementary Video 1 ). Interestingly, Tcf/Lef -positive and -negative cells travelled and meandered (displacement/total distance travelled) 27 to a similar extent ( Fig. 1j, k ), indicating that preferential migration does not explain the accumulation of AER cells near the DV boundary. Instead, ectodermal sheets gradually converged, suggesting that ectodermal cells were planar polarized. At the site of the prospective AER that is just ventral to the DV boundary in the mouse in our estimation, tracked cells interdigitated in time-lapse videos ( Fig. 1l, m and Supplementary Videos 2 and 3 ). Therefore, oriented DV movement and intercalation of cells accompanies formation of the AER. Planar polarity of pre-AER ectodermal cells Polarized accumulation of filamentous (F) actin and/or non-muscle myosin type II 28 , 29 can orient cell movements. Using the program SIESTA (ref. 30 ) we found that basolateral cortical F-actin was enriched at ectodermal anterior–posterior (AP) interfaces in a broad DV region in the 20 som. pre-AER limb bud and is consistent with the DV axis of cell intercalation ( Fig. 2a, b ). Cells with polarized actin became progressively confined to the DV midline ( Fig. 2c ) and nascent AER as shown at the 34 som. stage ( Fig. 2d ). To examine the importance of polarized actin, we performed roller culture of whole mouse embryos in the presence of the Rac1 inhibitor NSC23766 (ref. 31 ). This compound abolished actin polarity, diminished the degree of elongated and anisotropic cell topologies and inhibited cell movements ( Supplementary Fig. 1b and Supplementary Videos 4 and 5 ). Organized cell behaviours therefore require Rac1-dependent actin. Unexpectedly, distributions of myosin IIB, IIA and phospho-myosin light chain (pMLC) were largely cortical but not polarized at any stage leading up to AER formation ( Supplementary Fig. 1c–e ). It is possible that an atypical myosin is polarized here, or that polarized cortical actin is sufficient to bias myosin motor activity. Figure 2: Planar polarity of pre-AER ectodermal cells. ( a ) Confocal xz (top) and xy (middle and bottom) sections of rhodamine–phalloidin-stained 20 som. limb bud ectoderm highlight the basal region where actin was polarized. ( b ) Relative fluorescence intensity of actin at cell interfaces was analysed using SIESTA software and plotted from 0–90° representing DV to AP interfaces of 18–20 som. embryos ( P = 0.03 (total interfaces) and P = 0.05 (Tcf + ve interfaces) (Student’s t -test)). 
Shown are total cell interfaces versus Tcf/Lef::H2B–Venus-positive cell interfaces ( n = 5 embryos for each condition; comparison of total interfaces versus Tcf/Lef::H2B–Venus-positive cell interfaces P > 0.05 for all angle bins (Student's t -test)). ( c , d ) Confocal image of the basal layer of the nascent AER (between dashed lines) at 25 som. ( c ) and 34 som. ( d ) showing actin (red) and Tcf/Lef::H2B–Venus (green; c ) or AER marker CD44 (green; d ; ref. 55 ). ( e ) Confocal section of a 22 som. limb bud expressing myr–Venus . Dashed lines indicate limb bud area. The outlined area is shown magnified in the middle and right panels. Shown are apical and basal sections of the same region. White arrowheads indicate membrane protrusions. ( f ) Confocal time series of a 22 som. limb bud expressing myr–Venus , showing protrusive activity along the DV axis (indicated by yellow arrowheads). Intercalation is observed between two cells marked with white asterisks. Scale bars indicate 50 μm ( e ), 20 μm ( a , c – e (magnifications)), 10 μm ( f ). Error bars indicate s.e.m. Asterisks indicate P < 0.05 (75–90° bin versus 0–15° bin). If pure Drosophila -like contractile intercalation secondary to planar polarized actomyosin underlay early AER formation, we would have expected AP cell interfaces to gradually shorten 30 , 31 . However, activation of the transgenic live actin reporter R26R:Venus–actin 32 in ectoderm using an early ectoderm-specific Cre recombinase, Crect 33 , revealed that, although cell interface lengths oscillated, average AP cell interfaces did not shorten progressively during intercalation at the DV midline (average 25 som. stage AP cell interface length at T = 0 min and T = 120 min: 7.44 μm ± 0.34 μm (s.e.m.) and 6.75 μm ± 0.5 μm (s.e.m.), respectively; P = 0.260). Maintenance of long AP interface lengths was also apparent in static images of the prospective AER ( Fig. 2c, d ). To investigate the possibility that Xenopus -like cell crawling 34 contributes to AER progenitor intercalation, we examined transgenic animals in our colony in which expression of the membrane reporter CAG::myr–Venus (ref. 35 ) had become mosaic. We identified dorsal and ventral protrusions among pre-AER ectodermal cells that spanned the lateral membrane from apical to basal levels ( Fig. 2e ). Interestingly, live observation demonstrated that protrusive activity took place concurrently with cell intercalations ( Fig. 2f and Supplementary Video 6 ; consistent with Fig. 1j, k ). We therefore propose that, similar to mouse neural plate 36 , ectoderm remodels through cell rearrangements that are oriented by planar polarized actin and facilitated by protrusive behaviour. Mesodermal growth anisotropically stresses ectoderm during limb initiation As tension induces structural changes of F-actin 37 , we studied whether mesodermal growth that initiates limb development causes tension in the overlying ectoderm. We employed three-dimensional (3D) finite-element modelling of the initial 17 som. limb field by incorporating actual lateral plate ectoderm dimensions, Young's modulus that we measured using live atomic force microscopy (AFM) indentation, and viscoelastic parameters calculated from previously reported compression data 38 ( Supplementary Fig. 1f–h ). Previous analyses demonstrated that mesodermal cell polarities, division planes and movements in the axial plane are oriented towards the nearest ectoderm 4 , 5 , 10 .
Therefore, mesodermal growth in the limb field was modelled as pressure normal to the under-surface of ectoderm ( Supplementary Fig. 1i ). Mesodermal pressure resulted in an ectodermal stress pattern that was dorsoventrally biased, followed by stress relaxation due to viscoelasticity ( Fig. 3a and Supplementary Video 7 ). This result is explained by the elongate shape of the lateral plate, because focal stress in the limb field would be less easily dissipated along the short DV axis relative to the long AP axis. Figure 3: Mesodermal growth anisotropically stresses ectoderm during limb initiation. ( a ) Finite-element simulated principal stresses that are attributable to mesodermal growth at limb initiation (17 som.). Red, green and blue arrows indicate maximum, middle and minimum principal stresses, respectively. Tension is biased along the DV axis. ( b ) Illustration indicating flank micro-injection in non-limb lateral plate mesoderm. ( c ) Proportion of Venus-positive nuclei in PEG-injected or collagen-injected flank ectoderm compared with control penetrated but uninjected flank ectoderm of 18–21 som. Tcf/Lef::H2B–Venus embryos ( n = 3 embryos per condition; P = 0.0018 (collagen-injected) and P = 7.2 × 10 −6 (PEG-injected) versus control (Student's t -test)). ( d ) Relative fluorescence intensity of actin at cell interfaces was quantified using SIESTA software and plotted over 90° ( n = 3 embryos per condition; P = 0.0021 (collagen-injected) and P = 0.025 (PEG-injected) versus control (Student's t -test)). ( e ) Confocal images of control and collagen-injected 21 som. Tcf/Lef::H2B–Venus flank ectoderm. Shown are z sections (top panels) and xy sections visualizing actin (red), H2B–Venus (green) and DAPI (blue). ( f ) Limb initiation model. Mesodermal growth at limb initiation (blue; white arrows indicate direction of mesodermal growth) is sufficient to anisotropically stress the overlying ectoderm owing to the elongate shape of the lateral plate, resulting in accumulation of actin (red) at AP cell interfaces. Scale bar, 10 μm ( e ). Error bars indicate s.e.m. Asterisks indicate P < 0.05 ( d ), P < 0.01 ( c ). To determine whether expansion of mesoderm is sufficient to polarize ectodermal actin in vivo , we micro-injected collagen or polyethylene glycol (PEG) hydrogel into non-limb lateral plate mesoderm to mimic limb initiation before 4 h in roller culture ( Fig. 3b ). Injection, but not sham needle penetration of the contralateral side, was sufficient to upregulate Tcf/Lef::H2B–Venus activation and to polarize cortical actin along AP cell interfaces ( Fig. 3c–e ). This upregulation was not associated with increased ectodermal cell proliferation ( Supplementary Fig. 1j ), suggesting that it was attributable to enhanced differentiation. Biased distribution of actin therefore reflects the DV stress pattern and is a function of tissue geometry ( Fig. 3f ). Cell division precipitates cell neighbour exchange and oriented remodelling of pre-AER ectoderm To study dynamic relationships between neighbouring cells, we employed transgenic cell membrane reporters for live imaging. These included CAG::myr–Venus 35 , which labelled all cells, and the conditional mT/mG reporter 39 , which permitted conversion of red to green fluorescence using Crect . We identified numerous tetrads and multicellular rosettes that remodel dynamically in pre-AER dorsal and ventral ectoderm ( Fig. 4a, d and Supplementary Video 4 ).
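Rosette orientation in the analyses that follow is read out as the angle between the long axis of the ellipse outlined by the rosette and a rostrocaudal reference axis (see Methods). Below is a minimal sketch of that measurement, using the second moments of the outline points as a stand-in for ImageJ's Fit Ellipse; NumPy is assumed, and points is a hypothetical (N, 2) array of boundary coordinates with x aligned to the reference axis.

```python
import numpy as np

def rosette_orientation(points):
    # Principal axes of the outline via the 2x2 covariance of the points
    centred = points - points.mean(axis=0)
    evals, evecs = np.linalg.eigh(np.cov(centred.T))  # eigenvalues ascending
    major = evecs[:, 1]                               # long-axis direction
    angle = np.degrees(np.arctan2(major[1], major[0])) % 180.0
    aspect_ratio = np.sqrt(evals[1] / evals[0])       # long/short axis ratio
    return angle, aspect_ratio
```

Comparing the angle when a rosette forms with the angle when it resolves, as in the polar plots described below, then reduces to calling this on the first and last annotated frames.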
In marked contrast to Drosophila and other metazoa, in which daughter cells maintain a shared interface most of the time 24 , we found that mitosis in the mouse embryo commonly precipitated cell neighbour exchange. Immediately post-division, daughter cells severed their common interface to permit two adjacent cells to form a new interface ( Fig. 4a and Supplementary Video 8 ). This process is similar to T1 exchange that involves four non-dividing cells in Drosophila 28 . A T1 exchange alters local tissue shape by adding one cell diameter to the axis along which cells separate and subtracting one cell diameter from the orthogonal axis in which cells move together. Daughter cells also intercalated among their neighbours ( Fig. 4b and Supplementary Video 9 ) and precipitated multicellular rosette formation ( Fig. 4e and Supplementary Video 10 ). Rosette remodelling is analogous to T1 exchange but involves five or more cells that transiently join at a central apex 29 . Rosettes were a prominent feature, as 26%–44% (18–20 som., n = 5 embryos) of cells were included in a rosette at a given time in both the limb field and interlimb ectoderm ( Fig. 4c ), and resolved through dissolution of a central apex as in Drosophila 29 ( Fig. 4d and Supplementary Video 11 ). Rosettes in non-limb lateral plate and at the base of the bud resolved in a directionally biased fashion that would promote convergence of the embryo DV axis and extension of its long AP axis. In contrast, rosettes in the limb bud beyond the flank largely both formed and resolved along the PD axis ( Fig. 4f ). We reasoned that reorientation of rosette resolution along the PD axis of limb bud growth may be due to cell extrinsic cues. Figure 4: Cell division precipitates cell neighbour exchange and oriented remodelling of pre-AER 18–22 som. ectoderm. ( a , b ) Confocal time series of mitotic 18 som. ectodermal cells expressing CAG::myr–Venus demonstrates post-cell division (blue) T1-like neighbour exchange ( a ) and daughter cell intercalation among neighbours ( b ). ( c ) Outline of cell interfaces in 20 som. limb bud ectoderm with multicellular rosettes highlighted in yellow. ( d ) Time series demonstrates 18 som. rosette resolution. ( e ) The daughter cell (red) in the 18 som. limb ectoderm contributes to rosette formation following cell division. ( f ) Polar plots represent how the long axes of rosettes in different spatial regions of 18–22 som. embryos remodel during 2–3 h time-lapse videos. Long axes of rosettes are plotted at the beginning (forming or formed rosettes) and end (resolving or resolved rosettes; 49 rosettes examined, n = 5 embryos). Scale bars, 10 μm. Ectodermal tension augments DV stress pattern and orients rosettes in pre-AER limb bud In several contexts, cell intercalations are driven by cell intrinsic forces generated by polarized actomyosin contraction 28 , 29 , 40 , 41 . In contrast to rosettes found in the Drosophila germband 29 , neither actin nor myosin II (IIA, IIB, pMLC) was polarized in dorsal or ventral ectoderm beyond the zone of DV intercalation at the prospective AER, despite dynamic remodelling of rosettes ( Supplementary Fig. 1k ). Also unlike the germband, apical/subapical actin was not polarized ( Supplementary Fig. 1l ). In some contexts, such as the Drosophila wing, cell extrinsic forces drive cell rearrangements 42 . We explored whether extrinsic tissue forces might orient the stress pattern using a finite-element model of the post-initiation (22 som.) limb bud ( Supplementary Fig.
1m, n ). At this later stage with an established bud present, mesodermal pressure alone resulted in a relatively isotropic stress pattern and predicted bulbous tissue deformation ( Fig. 5a, b and Supplementary Video 12 ). We then simulated AER progenitor intercalation as opposite pulling forces at the DV boundary using piconewton values that are within the range of physiological intercellular forces measured by others 43 . This pulling force generated a dorsoventrally biased stress pattern and more realistic maintenance of a narrow DV limb bud axis ( Fig. 5c, d and Supplementary Video 13 ; deformation analyses were used here to help validate the ectodermal stress pattern rather than to establish determinants of nuanced 3D tissue shape). We examined the effect of varying the magnitude of this pulling force twofold and found, as expected, that relative stress gradients remained linearly consistent with the magnitude of force ( Supplementary Fig. 1o, p ). Overall, this model suggests that a combination of mesodermal growth and ectodermal tension theoretically generates a stress pattern that is dorsoventrally biased and of greatest magnitude at the prospective AER. Figure 5: Tension augments the DV stress pattern and orients rosettes in pre-AER ectoderm. ( a ) Left, finite-element simulated principal stresses (red arrows) attributable to mesodermal growth alone at the 22 som. stage. Surface-plane DV:AP principal stress ratios approach 1:1. Right, simulated maximum principal stress field due to mesodermal growth. ( b ) Simulated deformation predicts bulbous tissue expansion. ( c ) Left, finite-element simulated principal stresses due to pulling forces secondary to cell intercalation at the DV boundary. Surface-plane DV:AP principal stress ratios >3:1. Right, simulated maximum principal stress field due to pulling forces at DV boundary generates tensile gradient along the PD axis. ( d ) Simulated deformation predicts maintenance of a narrow DV limb bud axis. In a and c , red, green and blue arrows indicate maximum, middle and minimum principal stresses, respectively. ( e ) AFM cantilevered tip at different PD levels. ( f ) AFM measurements of proximal, middle and distal regions of initiating limb buds were used to calculate Young’s modulus ( n = 5 embryos; P = 0.0491 (distal versus proximal; Student’s t -test)). ( g , h ) Limb bud ectodermal cells expressing R26R::mTmG; Crect before and after ablation of AP ( g ) or DV ( h ) interfaces. Lower panels, kymographs of vertex displacement over time (6 s intervals). Yellow arrows highlight cell vertices. ( i ) Distance between two vertices adjacent to cut interface in AP and DV interfaces ( P = 0.0008). ( j ) Peak retraction velocities of ablated AP and DV interfaces ( P = 0.0001); ( i , j ) n ∼ 18 ablations for each of eight 19–23 som. embryos. ( k ) Distance between two vertices of proximal AP or proximal DV cut interfaces ( P = 0.0442). ( l ) Peak retraction velocities of ablated proximal AP and proximal DV interfaces ( P = 0.1208); ( k , l ) n ∼ 15 ablations for each of four 19–23 som. embryos. (( i – l ) Student’s t -test with Holm’s correction.) ( m ) R26R::mTmG; Crect limb bud ectoderm before and 3 min after ablation. Red lines indicate sites of ablation. ( n ) Rosette AP/PD aspect ratio (length AP /length PD ) measured before and after ablation of a region distal to the rosette ( n = 6 embryos 21–26 som.; P = 0.019 (Student’s t -test)). ( o ) Pre-AER model. 
Although rosette resolution at the base of the bud occurs along the AP axis (lower rosette), intercalation of AER progenitors (green) generates a tensile gradient that redirects rosette resolution along the PD axis (top rosette). Error bars indicate s.e.m. Scale bars, 10 μm ( g , h , m ), 100 μm ( e ). The asterisks indicate P < 0.01 ( j ); P < 0.05 ( n ). To measure the magnitude of actual ectodermal stiffness, we examined cultured mouse embryos using AFM. Young's modulus, a measure of the stiffness of an elastic material, was greatest near the distal tip at the DV midline, and diminished towards the proximal base of the bud ( Fig. 5e, f ). We also undertook laser ablation of individual 21–22 som. ectodermal cell interfaces. Initial recoil velocity, which is predicted to be proportional to tension 44 , was substantially greater following ablation of AP interfaces near ( ∼ 5 cell diameters from) the AER relative to those near ( ∼ 5 cell diameters from) the base of the bud ( Supplementary Fig. 1q, r ). These data suggest that a tensile gradient emanates from the prospective AER. To determine the directionality of ectodermal tension, we compared retraction velocities of orthogonal interfaces in different locations. Near the prospective AER, ablated AP interfaces retracted substantially faster than DV interfaces ( Fig. 5g–j and Supplementary Videos 14 and 15 ), indicating the presence of anisotropic tension. Consistent with spatial differences that we had observed in the axes of resolving rosettes, anisotropy was diminished near the base of the bud and in non-limb lateral plate ectoderm ( Fig. 5k, l and Supplementary Fig. 1s, t ). Tension that we measured using laser ablation was ∼ 1.4-fold different along the PD axis ( Fig. 5j, l and Supplementary Fig. 1r ), whereas finite-element simulation predicted a difference of ∼ 1.5–1.55-fold ( Supplementary Fig. 1o, p ). This difference may exist in part because the model does not incorporate viscoelastic effects of cell rearrangements. To determine whether DV tension emanates from the prospective AER, we disrupted 3–5 cell interfaces in a linear fashion parallel to the prospective AER. This procedure diminished retraction velocities of single AP interfaces proximal to the disruption ( Supplementary Fig. 1u, v ). We also used linear ablation to examine whether tension is necessary to orient rosette resolution. Immediately following ablation, the PD long axes of rosettes proximal to the ablation were shortened, suggesting that distally based tension was resisting rosette contraction. Also, whereas rosettes in the limb bud normally resolved along the PD axis ( Fig. 4f ), ablation resulted in resolution along the orthogonal (AP) axis ( Fig. 5m, n and Supplementary Video 16 ). Therefore, distally biased tension is necessary to orient rosette resolution along the axis of limb bud growth ( Fig. 5o ). Ectodermal β-catenin is required to polarize actin and orient cell behaviour in response to stress Unexpectedly, most single and compound planar cell polarity (PCP) pathway mouse mutants neither lack an AER nor exhibit marked early limb bud phenotypes, despite convergent extension defects of the long embryo axis and of other organ systems 5 , 6 , 45 , 46 , 47 . Consistent with the mutant data, select markers of PCP such as Frizzled 6 and Dishevelled 3, and the apical–basal polarity markers Par-1 and Par-3, were not polarized among ectodermal cells ( Supplementary Fig. 2a, d ).
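The recoil analysis above reduces to tracking the two vertices flanking an ablated interface and fitting the initial slope of their separation. The original pipeline used Matlab and DIPImage (see Methods); below is a minimal NumPy sketch in which the vertex arrays, the 6 s frame interval taken from the figure legend, and the number of fitted frames are all assumptions.

```python
import numpy as np

def initial_retraction_velocity(v1, v2, dt=6.0, n_fit=5):
    # v1, v2: (T, 2) arrays of tracked vertex positions (microns) per frame
    separation = np.linalg.norm(np.asarray(v1) - np.asarray(v2), axis=1)
    t = np.arange(len(separation)) * dt
    # Slope of a linear fit over the first n_fit frames approximates the
    # initial recoil velocity, taken as proportional to interface tension
    slope, _ = np.polyfit(t[:n_fit], separation[:n_fit], 1)
    return slope  # microns per second
```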
We also examined zebrafish PCP mutants that exhibit marked shortening of their body axis secondary to convergent-extension defects. Neither maternal-zygotic Vangl2 ( trilobite ) 48 , nor Wnt5 ( pipetail ) 49 , nor atypical protein kinase C ( heart and soul ) 50 mutants exhibited gross pectoral fin anomalies ( Supplementary Fig. 2e–k ). Therefore, as for the Drosophila germband 51 , evidence that early limb bud morphogenesis is regulated by the PCP pathway is lacking, although it is possibly masked by redundancy. To investigate the function of the canonical Wnt pathway in ectoderm, we conditionally deleted β-catenin using Crect , which is activated before limb initiation ( Supplementary Fig. 3a ). In conditional mutants, affected embryos survived to at least E18.5 and, although the limb bud initiated, it failed to progress beyond a shallow saddle shape that lacked an AER ( Fig. 6a, b ) and did not develop skeletal elements beyond the scapula and pelvis ( Supplementary Fig. 3b ). As expected, conditional mutants exhibited ectoderm-specific loss of membrane-associated β-catenin ( Supplementary Fig. 3c ) and marked reduction of ectodermal Tcf/Lef::H2B–Venus reporter activity ( Supplementary Fig. 3d, e ). Proliferation and apoptosis were not affected in either ectoderm or mesoderm at the pre-AER, 22 som. stage ( Supplementary Fig. 3f–j ). Moreover, mesodermal expression of Fgf10 was maintained in 22 som. mutant limb buds despite the presence of a clear phenotype in mutants ( Supplementary Fig. 3k ). Fgf10 expression was attenuated at a later stage when its expression presumably becomes dependent on ectodermal feedback 14 ( Supplementary Fig. 3l ). These data suggest that the early (22 som. E9.25) mutant phenotype we observed ( Supplementary Fig. 3k ) was not obviously attributable to failure of mesodermal growth although mesodermal apoptosis at a later stage (35 som.) has been associated with a similar limb bud phenotype 10 . An alternate possibility was that organized cell behaviours were compromised. Figure 6: Ectodermal β-catenin is required to polarize actin and orient cell behaviour in response to stress. ( a , b ) Optical projection tomography of E9.75 (30–31 som.) forelimb buds in wild type ( a ) and β - cat f / f ; Crect mutants ( b ). Dorsal views are shown; anterior is up. ( c ) Confocal image of a rhodamine–phalloidin-stained β - cat f / f ; Crect mutant embryo showing a basal ectodermal section (compare with Fig. 2a ). ( d ) Relative fluorescence intensity of actin at cell interfaces (SIESTA). Wild type n = 5 18–20 som. embryos, β - cat f / f ; Crect n = 3 19–22 som. embryo, P = 0.0066 (60–75° bin), P = 0.0004 (75–90° bin; Student’s t -test). ( e ) Time series of mitotic cells in initiating β - cat f / f ; Crect mutant limb bud ectoderm expressing R26R::mTmG . ( f ) Proportion of daughter cells that underwent intercalation in wild-type and β - cat f / f ; Crect mutant limb buds ( P = 0.034; Student’s t -test). ( g ) Proportion of Type 1, 2 and 3 interfaces 24 in wild-type and β - cat f / f ; Crect mutant limb buds ( P = 0.0042 (Student’s t -test); wild type: n = 30 mitotic cells, 5 embryos; mutant: n = 25 cells, 2 embryos). Schematic representation of Type 1, 2 and 3 interfaces (right). ( h ) Confocal time series of a rosette in the central region of a β - cat f / f ; Crect mutant limb bud ectoderm expressing R26R::mTmG . ( i ) Axes of rosette remodelling in a β - cat f / f ; Crect mutant limb bud ectoderm (42 rosettes, n = 2 embryos). 
( j ) Confocal images of collagen-injected β - cat f / f ; Crect mutant flank ectoderm (compare with Fig. 3e ). Shown are z section (top panel) and xy sections visualizing actin (red) and DAPI (blue). ( k ) Relative fluorescence intensity of actin at cell interfaces (SIESTA; n = 3 19–21 som. embryos per condition, P = 0.0046 for the 75–90° bin, mutant versus control (Student's t -test)). ( l ) Distance between two vertices attached to either AP or DV cut interfaces in β - cat f / f ; Crect mutant limb bud ectoderm ( P = 0.43). ( m ) Peak retraction velocities of ablated AP and DV interfaces in β - cat f / f ; Crect mutant limb bud ectoderm ( P = 0.36; ( l , m ) n ∼ 15 ablations over four 21–25 som. embryos for each condition (Student's t -test with Holm's correction)). ( n ) AFM measurements of proximal, middle and distal regions of initiating wild-type and β - cat f / f ; Crect mutant limb buds were used to calculate Young's modulus ( n = 5 control embryos, n = 3 mutant embryos; P = 0.0435 (distal control versus mutant; Student's t -test)). ( o ) Simulated deformation of an early bud based on mutant Young's modulus of 0.042 kPa (compare with Fig. 5b ). Scale bars, 10 μm ( c , e , h , j ), 200 μm ( a , b ). Error bars indicate s.e.m. Asterisk indicates P < 0.05. In conditional β-catenin mutants, cell elongation was diminished ( Supplementary Fig. 4a, b ), distribution of cortical actin was not biased ( Fig. 6c, d ), and DV intercalation of midline cells did not occur ( Supplementary Fig. 4c ). Daughter cell neighbour exchange events were less frequent ( Fig. 6e, f and Supplementary Video 17 ), and the proportion of daughter cells that retained a common interface increased fivefold ( Fig. 6g ), more closely resembling that of other metazoa 24 . Rosettes were still present in mutants, but were oriented randomly ( Fig. 6h, i and Supplementary Video 18 ). We examined whether the canonical Wnt pathway is required to polarize actin in response to physical stress. Injection of the same volume of collagen generated a similar bulge in conditional β-catenin mutants as in wild-type embryos, but failed to polarize actin ( Fig. 6j, k ). These findings suggest that β-catenin is required to transduce stress as shown in other contexts 52 . Laser ablation in conditional β-catenin mutants revealed that ectodermal stress was no longer directionally biased in the absence of β-catenin ( Fig. 6l, m ) and a distal-high to proximal-low tensile gradient was lacking by AFM ( Fig. 6n and Supplementary Fig. 4d, e ). Interestingly, deformation analysis using Young's modulus measured from β-catenin mutant ectoderm (0.042 kPa) predicted a peculiar saddle shape that is qualitatively similar to the actual shape of conditional β-catenin mutants ( Fig. 6o , compare with 6b ). Despite being oversimplified, this simulation suggests that a key function of β-catenin is to help establish anisotropic mechanical tissue properties. Direct and indirect functions of β-catenin and Fgfr2 Transmission of force between cells requires cell–cell adhesion, cortical tension and cortex-to-membrane attachment 53 . Despite loss of membrane-associated β-catenin, E-cadherin, a key mediator of cell–cell adhesion, was present at the cell membrane in conditional β-catenin mutants ( Supplementary Fig. 4f ). To examine cortical function, we activated R26R:Venus–actin 32 in ectoderm using Crect . We expected this reporter to label relatively recently polymerized actin in 20 som.
stage embryos because Crect is activated robustly just before the 17 som. stage ( ∼ E9.0; Supplementary Fig. 3a ). Unlike the continuous meshwork of cortical actin in wild-type ectoderm, conditional β-catenin mutants exhibited distinct rings of actin with intervening gaps that suggested cortices were separated from membranes and cortices between cells were uncoupled ( Fig. 7a ). Cortical ring morphology was reflected by a diminished number of vertices (cell ‘corners’ that are visibly shared with neighbouring cells) among β-catenin mutant cells (0.40 ± 0.025 (s.e.m.), n = 2) relative to wild-type cells (2.31 ± 0.075 (s.e.m.); P = 0.0017, n = 2). Consistent with cortex–membrane separation, oscillatory cell interface contractions 30 exhibited a diminished rate of change and dampened amplitude in conditional β-catenin mutants ( Fig. 7b, c and Supplementary Videos 19 and 20 ). Therefore, β-catenin mechanically couples cells by promoting cortex to membrane attachment, a function that is attributable to its well-recognized role in linking the cytoskeleton to E-cadherin 52 . Figure 7: Direct and indirect functions of β-catenin and Fgfr2. ( a ) Confocal images of control β - cat f / f and β - cat f / f ; Crect mutant limb bud ectoderm at the 25 and 29 som. stage expressing R26R::Venus–actin . Yellow arrows indicate sites of cortical separation. ( b ) Rate of change of interface length from time-lapse videos of control and β - cat f / f ; Crect mutant limb bud ectoderm expressing R26R::Venus–actin , normalized to maximum interface length. Shown are representative curves from 4 interfaces. ( c ) Peak amplitude of oscillation of AP and DV interfaces in control β - cat f / f and β - cat f / f ; Crect mutant limb bud ectoderm ( n = 32 interfaces for each condition; P = 4.2 × 10 −5 (AP) P = 1.0 × 10 −5 (DV; Student’s t -test)). ( d ) Relative fluorescence intensity of actin at cell interfaces in limb bud ectoderm of embryos that were treated with IWR-1 or vehicle control (dimethylsulphoxide (DMSO)) for 6 h was quantified using SIESTA software and plotted over 90° ( n = 3 19–22 som. embryos per condition; P = 0.0031 (45–60° bin), P = 0.035 (60–75° bin), P = 0.015 (75–90° bin) (Student’s t -test)). ( e ) Confocal images of control β - cat f / f and β - cat f / f ; Crect mutant ectoderm visualizing CD44 (green) and DAPI (blue). ( f ) Relative fluorescence intensity of actin at cell interfaces of embryos treated with IWP-2 or vehicle control (DMSO) for 6 h (SIESTA; n = 3 19–22 som. embryos per condition; P = 0.31 (75–90° bin) (Student’s t -test)). ( g ) Optical projection tomography image of E9.75 Fgfr2 f / f ; Crect mutant forelimbs. Dorsal views, anterior is up. ( h ) Relative fluorescence intensity of actin at cell interfaces (SIESTA; wild type n = 5 embryos, Fgfr2 f / f ; Crect n = 3, P = 0.00015 (60–75° bin), P = 4.2 × 10 −5 (75–90° bin), (Student’s t -test)). ( i ) Proportion of Type 1, 2 and 3 interfaces 24 in wild-type and Fgfr2 f / f ; Crect mutant limb buds ( P = 0.093 (Student’s t -test); wild type: n = 30 mitotic cells, 5 embryos; mutant: n = 11 cells, 3 embryos). ( j ) Axes of rosette remodelling in Fgfr2 f / f ; Crect mutant limb bud ectoderm (20 rosettes, n = 2 embryos). ( k ) Meandering index (displacement/total distance travelled) of Tcf-positive nuclei near the DV boundary of wild-type and Fgfr2 f / f ; Crect mutant limb buds, P = 3.5 × 10 −4 (Student’s t -test). 
( l ) DV displacement of Tcf-positive nuclei near the DV boundary of wild-type and Fgfr2 f / f ; Crect mutant limb buds, P = 0.02 (Student's t -test). (For k and l , n = 20 cells over 2 h in three 21–25 som. embryos for each.) ( m ) Fgfr2 f / f ; Crect mutant limb bud ectoderm expressing R26R::Venus–actin . Scale bars, 10 μm ( a , e , m ), 200 μm ( g ). Error bars indicate s.e.m. Asterisk indicates P < 0.05 ( d , h , l ) and P < 0.01 ( c , k ). β-catenin might also help to polarize actin distribution indirectly through transcriptional regulation. To examine whether acute inactivation of the canonical Wnt pathway affects ectodermal cell polarity, we treated mouse embryos in roller culture with IWR-1, which stabilizes the β-catenin destruction complex 54 . This inhibitor downregulated expression of Axin2 ( Supplementary Fig. 4g ) and abolished polarized distribution of actin following 6 h ( Fig. 7d ), but not 2 h ( Supplementary Fig. 4h ), of treatment, suggesting that an indirect mechanism is required to maintain actin polarity. Consistent with a transcriptional role, we found that expression of CD44, a transmembrane protein that marks the AER (ref. 55 ) and is downstream of the canonical Wnt pathway 56 , was lost in β-catenin conditional mutants ( Fig. 7e ). Loss of CD44 expression in ventral pre-AER ectoderm suggests that AER progenitors lost or never acquired appropriate identity, and represents an additional potential cause of cortical separation. We then treated embryos with IWP-2, a pan-Wnt ligand secretion inhibitor 54 . Actin polarity was not significantly affected despite diminished Dishevelled protein phosphorylation ( Fig. 7f and Supplementary Fig. 4h ), suggesting that stress-related Tcf/Lef activation is ligand-independent. These findings suggest that at least some transcriptional functions of β-catenin may contribute to ectodermal remodelling. As Fgfr2 participates in a feedback loop together with the canonical Wnt pathway in ectoderm 11 , we conditionally deleted floxed Fgfr2 57 from ectoderm using Crect . The most important distinguishing feature of conditional Fgfr2 mutants was that they retained membrane-associated β-catenin ( Supplementary Fig. 5a ) despite partial reduction of Tcf/Lef::H2B–Venus reporter activity ( Supplementary Fig. 5b ). As expected, phospho-(p) ERK, an indicator of Fgf signalling, was markedly diminished in ectoderm ( Supplementary Fig. 5c ). Like conditional β-catenin mutants, conditional Fgfr2 mutant buds exhibited a shallow saddle shape that failed to progress ( Fig. 7g and Supplementary Fig. 5d ) despite normal proliferation and apoptosis in ectoderm and mesoderm ( Supplementary Fig. 5e–h ), intact E-cadherin ( Supplementary Fig. 5i ), and mesodermal Fgf10 expression at pre-AER stages ( Supplementary Fig. 5j ). DV cell elongation was diminished ( Supplementary Fig. 5k, l ), actin was not polarized at limb initiation ( Fig. 7h ) and cells did not accumulate at the DV boundary ( Supplementary Fig. 5m ). Expression of CD44, which is also downstream of the Fgf pathway 58 , was lost ( Supplementary Fig. 5n ). Phosphorylation of β-catenin Tyr 654, which is associated with reduced association of β-catenin with E-cadherin 59 , was not enhanced in conditional Fgfr2 mutants ( Supplementary Fig. 5o ). This finding supports the concept that ectodermal functions of Fgfr2 were largely independent of membrane-associated β-catenin at early limb bud stages.
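The interface-oscillation comparison in Fig. 7b, c (normalise each interface length trace to its maximum, take the frame-to-frame rate of change, and average the peak amplitudes) can be sketched as follows; NumPy is assumed, and the lengths trace and frame interval are hypothetical.

```python
import numpy as np

def oscillation_amplitude(lengths, dt=1.0):
    # Normalise the interface length trace to its maximum, as in the Methods
    norm = np.asarray(lengths, dtype=float) / np.max(lengths)
    rate = np.abs(np.diff(norm)) / dt   # magnitude of the relative rate of change
    # Peaks: samples larger than both neighbours
    peaks = (rate[1:-1] > rate[:-2]) & (rate[1:-1] > rate[2:])
    return float(rate[1:-1][peaks].mean())  # average peak amplitude
```

A dampened value of this statistic in mutant interfaces is what the text describes as a reduced rate of change and amplitude of cortical oscillation.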
Of course, it is possible that β-catenin and Fgfr2 pathways are acting entirely by influencing junctional proteins, although some of these may be indirect targets. CD44, for example, promotes migratory cell behaviour by presenting Fgf to neighbouring cells and links the extracellular environment to the cytoskeleton 55 . Overall, Fgfr2 mutant embryos exhibited many of the same features as conditional β-catenin mutants, findings that support their function in a common pathway. However, some cell neighbour exchange events were subtly distinct in conditional Fgfr2 mutants. Daughter cells shared a long interface less frequently than in β-catenin mutants, and rosettes exhibited a greater degree of orientation along the AP axis although they did not effectively reorient along the DV axis ( Fig. 7i, j , compare with 6g, i ). Tcf/Lef::H2B–Venus -positive cells were also relatively mobile in conditional Fgfr2 mutants ( Supplementary Video 21 ) but meandered to a greater extent than wild-type cells ( Fig. 7k, l ) and failed to accumulate in the prospective AER ( Supplementary Fig. 5m ). We propose that intact membrane-associated β-catenin and persistent cortical attachment in Fgfr2 mutants ( Fig. 7m ) underlies this improved ability to undertake neighbour exchange in comparison to β-catenin mutants. DISCUSSION It has been suggested that integration of local forces generated by cells into a global force pattern feeds back to individual cells to refine their behaviour and determine ultimate tissue shape 2 . Our findings substantiate this concept by showing that initial mesodermal growth anisotropically stresses ectoderm as a function of tissue geometry, thereby polarizing ectodermal cells along the DV axis. Intercalation of those cells at the DV boundary generates further tension in the ectodermal plane that reorients rosette resolution near the prospective AER. Therefore, mesoderm and ectoderm cooperatively generate a stress pattern that is mediated by β-catenin and Fgfr2 to orient ectodermal cell rearrangements. It has been postulated that AER position might be defined by a border of ectodermal BMP activity because both loss and gain of BMP function results in failure of AER formation 7 , 12 , 13 or by apposition of pre-specified dorsal and ventral ectodermal compartments 7 , although AER formation can be dissociated from DV polarity 20 . A related possibility raised by our findings is that ectodermal sheets that are polarized to remodel along the DV axis gradually deliver a subset of cells (definitive AER progenitors) to the DV boundary owing to physical constraints. This process is reminiscent of ‘convergent thickening’ as described for Xenopus blastopore closure 60 . Mouse ectoderm exhibits some cell behaviours that are divergent with respect to those of invertebrates and other vertebrates 24 . For tissues in which rapid cell division and rearrangement are concomitant, daughter cell intercalation may increase tissue fluidity to dissipate energy and facilitate orderly tissue shape change. Rosette remodelling facilitates directional tissue shape change 29 , 36 , 61 and buffers disequilibrium during morphogenetic movements 62 . Evidence here supports the concept that rosette formation is facilitated, in part, by cell-intrinsic planar-polarized actin and protrusive activity. Following cell division, rosette formation may buffer transient cell packing disequilibrium in mouse ectoderm. The axis of rosette resolution, on the other hand, is oriented by cell extrinsic stress. 
By improving the resolution with which we can quantify physical parameters during development, we will refine models of how cell behaviours generate embryonic shapes. Methods Live imaging. Live image acquisition was performed as described previously 6 . Briefly, embryos were submerged in 50% rat serum in DMEM (Invitrogen) in a 25 mm imaging chamber. Cheese cloth was used to immobilize the embryo and position the initiating limb bud directly against the coverglass. Embryos were imaged in a humidified chamber at 37 °C in 5% CO 2 . Time-lapse images were acquired on a Zeiss LSM510 META confocal microscope at ×20 or ×40 magnification or a Quorum spinning-disc confocal microscope at ×20 magnification. Images were processed with Volocity software or ImageJ. Representative images are shown from at least 3 independent experiments for each condition, and unless otherwise indicated, from at least 3 independent cohorts. No statistical method was used to predetermine sample size. Experiments were not randomized. Investigators were not blinded to allocation during experiments and outcome assessment. Rosette resolution angles were measured by first assigning a rostrocaudal reference axis taken from a low-magnification (×10) confocal view of the embryo flank. Rosettes were identified manually, frame-by-frame. The angle between the long axis of the ellipse outlined by each rosette and the reference axis was documented at the beginning of a given video and on resolution. Laser ablation. Laser ablation was performed as described previously 63 with modifications to optimize for live mouse embryo culture. Briefly, mTmG; Crect embryos were placed in a 25 mm imaging chamber containing 50% rat serum in DMEM, and immobilized with cheese cloth. An N 2 Micropoint laser (Andor Technology) set to 365 nm was used to ablate cell interfaces. Images were acquired on an Andor Revolution XD spinning-disc confocal microscope attached to an iXon Ultra897 EMCCD camera (Andor Technology) using a ×60 oil-immersion lens (Olympus, NA 1.35). Vertices were identified manually using SIESTA software 30 . Annotated vertices were tracked and initial retraction velocities were calculated using an algorithm developed in Matlab (Mathworks)/DIPImage (TU Delft). Sample variances were compared using an F -test, and mean values were compared using Student's t -test with Holm's correction. To compare time series, the areas under the curves were used as the test statistic. Images were processed using ImageJ. Rosette aspect ratio was determined using the Fit Ellipse tool in ImageJ. Error bars indicate standard error of the mean and the P value was calculated using Student's t -test. Inhibitor treatment. For Wnt inhibition, embryos were treated in roller culture 64 with 100 μM IWR-1, 50 μM IWP-2, or DMSO control in 50% rat serum in DMEM for 6 h, and then fixed in 4% paraformaldehyde overnight at 4 °C. For actin polymerization inhibition, embryos were treated in roller culture with 100 μM NSC23766 in 50% rat serum in DMEM for 3 h before live imaging in inhibitor-containing media. PEG and collagen injections. E9.25 embryos were collected and placed in a 6 cm dish coated with 2% agarose containing 5% FBS in DMEM. Embryos were immobilized using pulled glass needles to pin the head to the agarose. A second needle was pinned to the agarose and the tails of the embryos were carefully placed between the needle and the agarose, taking care not to puncture or damage the tissue.
Embryos were injected with a 0.6 mg ml −1 collagen or polyethylene glycol (PEG) diacrylate ( M n 575) solution (10% solution containing 0.02% 2,2-dimethoxy-2-phenylacetophenone (DMPA) as a photocuring agent) containing 1 μg μl −1 rhodamine–dextran in DMEM. For PEG injections embryos were placed under 365 nm ultraviolet light for 40 s to allow gel formation. The targeted region of injection was the lateral plate posterior to the forelimb bud, approximately at the level of somites 15–18. Embryos were then incubated in roller culture in 50% rat serum in DMEM for 4 h and fixed in 4% paraformaldehyde overnight at 4 °C. Quantification of polarized actin distribution. Single confocal slices taken 2 μm above the basal surface of ectoderm cells stained with rhodamine–phalloidin were analysed using SIESTA software. Cell interfaces were manually identified, and average fluorescence intensities were calculated for all interfaces and grouped into 15° angular bins, using SIESTA software, with the 0°–15° bin representing interfaces that are parallel with the AP axis (DV interfaces) and the 75°–90° bin representing interfaces that are parallel with the DV axis (AP interfaces). Average fluorescence intensity values for each bin were normalized to average fluorescence intensity of DV interfaces (0°–15° angular bin). Error bars indicate standard error of the mean and P values were calculated using Student’s t -test. Quantification of cell behaviours. Metaphase-to-telophase transition angles were measured as described previously 6 . Angles of cell orientation were measured manually as the long axis of cells grouped in 15° bins over a 90° angular range. GFP-positive nuclei were quantified using average fluorescence intensity of GFP normalized to average fluorescence intensity of DAPI. Oscillation of cortical actin contractions was quantified by measuring the length of interfaces labelled with Venus–actin frame-by-frame over 2 h and normalized to the maximum length of each interface. Relative rates of change of interface length were plotted over time and average peak amplitude was calculated for each interface. Meandering index and DV displacement were quantified using ImageJ for a 2 h time course. Meandering index was calculated as displacement/total distance travelled. Error bars represent standard error of the mean and the P value was calculated using Student’s t -test. Mouse lines. CAG::myr–Venus 35 ; CAG::H2B-GFP (Jackson Laboratory, B6.Cg-Tg(HIST1H2BB/EGFP)1Pa/J); Tcf/Lef::H2B–Venus 26 ; mTmG (Jackson Laboratory, Gt(ROSA)26Sor tm4(ACTB–tdTomato,–EGFP)Luo /J); R26R–Venus–actin (Acc. No. 32 ); Crect 33 ; ZEG (Jackson Laboratory, Tg(CAG – Bgeo/GFP)21/Lbe/J); β-catenin flox (Jackson Laboratory, B6.129-Ctnnb1 tm2Kem /KnwJ ); Fgfr2 flox (Jackson Laboratory, STOCK Fgfr2 tm1Dor /J). Genotyping primers are available on the Jackson Laboratory website for each mouse line. All mouse lines are outbred to CD1, with the exception of β-catenin flox and Fgfr2 flox , which are C57BL/6J background. To generate mutant embryos, flox/flox females carrying the appropriate fluorescent reporter were bred to flox/ + ; Crect males. All animal experiments were performed in accordance with protocols approved by the Hospital for Sick Children Animal Care Committee. Whole-mount immunofluorescence. Embryonic day (E) 9.0–10.0 mouse embryos were fixed overnight in 4% paraformaldehyde in PBS followed by 3 washes in PBS. 
Embryos were permeabilized in 0.1% Triton X-100 in PBS for 20 min and blocked in 5% normal donkey serum (in 0.05% Triton X-100 in PBS) for 1 h. Embryos were incubated in primary antibody for 5 h at room temperature, followed by overnight incubation at 4 °C. Embryos were washed in 0.05% Triton X-100 in PBS (4 washes, 20 min each), and then incubated in secondary antibody for 3–5 h at room temperature. Embryos were washed (4 washes, 20 min each), followed by a final wash overnight at 4 °C, and stored in PBS. Images were acquired using a Quorum spinning-disc confocal microscope at ×10, ×20 or ×40 magnification, and image analysis was performed using Volocity software and ImageJ. Antibodies. β-catenin (BD Biosciences 610153, mouse, 1:200); E-cadherin (BD Biosciences 610181, mouse, 1:250 immunofluorescence, 1:1,000 immunoblotting); myosin IIB (Covance PRB-445P, rabbit, 1:500); myosin IIA (Covance PRB-440P, rabbit, 1:500); phospho-myosin light chain 2 (Thr18/Ser19) (Cell Signaling 3671, rabbit, 1:250); phospho-ERK1/2 (Thr202/Tyr204) (Cell Signaling 4370, rabbit, 1:500); CD44 (eBioscience 14-0441, rat, 1:500); phospho-histone H3 (Cell Signaling 9706, mouse, 1:250); caspase3 (BD Bioscience 559565, rabbit, 1:250); Dishevelled 3 (Santa Cruz sc-8027, mouse, 1:200); Frizzled 6 (R&D Systems AF1526, goat, 1:200); phospho- β-catenin (Abcam ab24925, mouse, 1:200); Dishevelled 2 (Santa Cruz sc-10B5, mouse, western blotting 1:500); Par1 (Abcam ab77698, mouse, 1:200); Par3 (Millipore 07-330, rabbit, 1:200); rhodamine–phalloidin (Invitrogen, 1:1,000). All secondary antibodies were purchased from Jackson Immunoresearch and used at 1:1,000 dilutions. Whole-mount in situ hybridization. Whole-mount in situ hybridization was performed as described previously 64 . Wild-type and mutant littermates, and IWR-1-treated and control DMSO-treated embryos were processed identically in the same assay for comparison. Axin2 (ref. 65 ) and Fgf10 (ref. 66 ) riboprobes were previously described. Western blotting. Embryos treated with IWP-2 or DMSO control were lysed in PLC lysis buffer (50 mM HEPES pH 7.5, 150 mM NaCl, 1.5 mM MgCl 2 , 1 mM EGTA, 10% glycerol, 1% Triton X-100, protease inhibitors (Roche)). Proteins were separated by SDS–PAGE, transferred to PVDF membranes (PerkinElmer), and incubated with primary antibodies overnight at 4 °C. Immunoblots were developed using HRP-conjugated secondary antibodies (Santa Cruz) and ECL (PerkinElmer). Optical projection tomography. E9.5 mouse embryos were collected and fixed in 4% paraformaldehyde overnight at 4 °C. Optical projection tomography (OPT) was performed essentially as previously published. The OPT system was custom-built and is fully described elsewhere 67 . The three-dimensional (3D) data sets were reconstructed from auto-fluorescence projection images acquired over a 10 min scan time at an isotropic voxel size of 3.85 μm. The 3D surface renderings of the OPT data were generated by Amira software, version 5.3.3 (VSGG). Atomic force microscopy. Embryos were incubated in 50% rat serum in DMEM on a 35 mm dish in which 2% agarose was poured around the perimeter. Embryos were immobilized to the agarose with pulled glass needles pinned through the flank adjacent to the limb field. Embryos were examined using a commercial AFM (BioScope Catalyst, Bruker) mounted on an inverted optical microscope (Nikon Eclipse-Ti). 
Force-indentation measurements were undertaken using a spherical tip at distinct locations categorized as distal, middle and proximal limb bud with an indentation rate of 1 Hz. Spherical tips were made by assembling a borosilicate glass microsphere (radius: 5–10 μm) onto an AFM cantilever using epoxy glue. The cantilever (MLCT-D, Bruker) had a nominal spring constant of 0.03 N m −1 . The trigger force applied to the embryo limb bud was consistently 200 pN, which helped to exclude erroneously high Young’s moduli arising from the influence of the underlying cell layers. Hence, no explicit correction for finite sample thickness effects was made here, and no evidence of depth-dependent stiffening was observed. The Hertz model was applied to the force curves to estimate the Young’s modulus and contact point, which were further used to convert the force curves into stress-indentation plots. We repeated indentation at the same location of the limb bud five times and observed no significant change in Young’s moduli. As Young’s modulus calculated from the Hertz model is sensitive to the spring constant, cantilever spring constants were calibrated each time before running the experiment by measuring the power spectral density of the thermal noise fluctuation of the unloaded cantilever. Detailed methods regarding use of a spherical tip and data analysis are described elsewhere 68 . Finite-element modelling. Finite-element modelling allowed us to focus on the mechanical behaviour of the ectodermal layer as a continuum with homogeneous viscoelastic material properties, rather than as individual cells. Viscoelastic behaviour was modelled using the Maxwell–Wiechert model 69 . The ectodermal tissue layer was assigned instantaneous elastic moduli of 0.085 kPa for wild type and 0.042 kPa for β-catenin mutant based on measurements taken using standard AFM indentation methods 68 and Poisson’s ratio of 0.4 (ref. 70 ). Viscous relaxation of ectodermal modulus was calculated based on limb bud compression relaxation data reported previously 38 and was implemented in ANSYS as two-pair Prony relaxation with relative moduli of 0.1 and 0.4 and relaxation time constants of 8 s and 45 s, respectively. Two types of mechanical load were considered: mesoderm growth was modelled as pressure (0.01 Pa) normal to the inner surface of the ectodermal pocket (analyses were performed for both shallow (17 som.) and tall (22 som.) models ( Supplementary Fig. 1i, m, n )); AER progenitor intercalation at the DV boundary was modelled as equal and opposite pulling forces (2 pN) parallel to the DV axis at the junction of dorsal/ventral sides of the limb bud model (this analysis was performed only for the tall model ( Supplementary Fig. 1n )). Two boundary conditions were used: for all simulations, four sides of the sheet were fixed in all six DOFs ( U x = U y = U z = U Rx = U Ry = U Rz = 0), as the ectoderm at those locations was confined by connection to adjacent tissue; for simulation of AER progenitor cell intercalation, a frictionless support underneath the limb bud pocket was added that reflects mesodermal support normal to the ectodermal plane and allows in-plane tangential displacement. Geometries in each simulation were discretized using ten-node tetrahedral elements (C3D10) and the magnitudes of various loads used for simulation were within the physiological range of intercellular forces 43 . 
By varying simulated pulling forces at the prospective AER, we showed, as expected, that stress and deformation magnitude were linearly proportional to the load applied. Qualitative characteristics of the stress pattern, such as the direction and ratio of principal stresses, were similar despite different load magnitudes ( Supplementary Fig. 1o, p ). As the viscoelastic properties we inferred from previous compression data probably represent material tissue properties rather than dynamic cell rearrangements that might dissipate stress over time, our transient viscoelastic simulations probably do not accurately reflect fluid-like tissue physics over longer timescales. Stress dissipation due to cellular spatial rearrangements or relaxation can possibly explain why our transient FEA model predicted higher stress anisotropy in both distal and proximal limb bud regions compared with our laser ablation data, which implied that only the distal limb bud region exhibited a significant tension bias. Nonetheless, finite-element modelling provided insight into how specific biological phenomena such as mesodermal cell growth and ectodermal cell intercalation influence the stress pattern across the limb field.
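The relaxation behaviour assigned to the ectoderm is straightforward to reproduce numerically. Below is a minimal sketch in Python (the function name and usage are ours) of a two-pair Prony relaxation modulus with the relative moduli (0.1 and 0.4), time constants (8 s and 45 s) and instantaneous moduli quoted above, assuming the standard Prony-series convention; the solver's internal convention may differ in detail.

```python
import numpy as np

def prony_relaxation(t, e0=0.085, rel_moduli=(0.1, 0.4), taus=(8.0, 45.0)):
    """Two-pair Prony series: E(t) = E0 * (1 - sum_i g_i * (1 - exp(-t / tau_i))).
    e0 is the instantaneous modulus in kPa (0.085 for wild type, 0.042 for the
    beta-catenin mutant, as measured by AFM); g_i are the relative moduli and
    tau_i the relaxation time constants in seconds."""
    t = np.asarray(t, dtype=float)
    decay = sum(g * (1.0 - np.exp(-t / tau)) for g, tau in zip(rel_moduli, taus))
    return e0 * (1.0 - decay)

t = np.linspace(0.0, 300.0, 301)
E_wt = prony_relaxation(t)              # wild-type ectoderm
E_mut = prony_relaxation(t, e0=0.042)   # beta-catenin mutant
print(E_wt[0], E_wt[-1])                # 0.085 kPa at t = 0, approaching ~0.0425 kPa
```

Because the two relative moduli sum to 0.5, the modulus relaxes to half its instantaneous value within a few multiples of the slower 45 s time constant.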
University of Toronto engineers and a pediatric surgeon have joined forces to discover that physical forces like pressure and tension affect the development of limbs in embryos—research that could someday be used to help prevent birth defects. The team, including U of T mechanical engineer Yu Sun (MIE), U of T bioengineer Rodrigo Fernandez-Gonzalez (IBBME) and SickKids Hospital's Dr. Sevan Hopyan, used live imaging and computer models to study the links between mechanical forces, changes in cell shape and cell movement in the embryo. Their study—published this week in Nature Cell Biology—used cutting-edge techniques to gain valuable insight into the fundamental processes of arm and leg development. Mapping out the growth of 'proto-limbs' An embryo starts out shaped like a ball, then grows to create complex shapes like limbs. In early embryonic development, cells divide into three layers: the ectoderm, which forms the nervous system, skin and sensory organs; the mesoderm, which produces the skeleton, muscles and most of the major organs; and the endoderm, which turns into the body's respiratory tract and elimination systems. In the study, the team looked at cell behaviours in the ectoderm that promote limb development. They used unique tools, including micro-chiseling ablating lasers, atomic force microscopes and layer-by-layer computer models, to explore the early stages of limbs in unprecedented detail. They discovered that as cells divide and develop, the way they communicate with each other and the pressure resulting from movements of the three cell layers can impact how well limb buds—the early stages of what become arms or legs—are formed. "We found amazing evidence on how mechanical forces regulate the remodeling of cells in the ectoderm layer and how the stress field changes when the ectoderm changes its shape as it develops," says Professor Sun. Prior to this work, scientists and engineers didn't have the tools and techniques to understand changes of shape at the tissue scale and in small groups of cells. Thanks to their findings, the researchers know that two major cell layers, the ectoderm and mesoderm, speak to each other both mechanically and biochemically, that is, through molecules shuttling back and forth. This communication is linked to changes in the embryo. Engineering insights from the world of the cell "The idea that two tissues are mechanically interacting and that such interaction affects cellular behaviour is really exciting to see," says Fernandez-Gonzalez. To measure mechanical forces, the authors used techniques borrowed from the world of manufacturing and engineering, including the use of a laser to cut interfaces between cells. "If you hold a rubber band between your hands and I cut it while it's loose, nothing happens," says Fernandez-Gonzalez. "But if you stretch the rubber band, your hands snap back when I cut it. That's essentially what happens with cell boundaries," he explains. "We know some of the genes that are important in the structure of the embryo for development to proceed, but we didn't know how those pathways were linked with movement in the cells," says Hopyan. A path to preventing limb defects While their study was done on a highly fundamental level, the team says it will allow them and others to take important further steps like measuring forces in and between cells. The study also paves the way for the possibility of creating better simulations of cell remodeling and the early development of limbs.
"This research could someday be used in potential medical applications to prevent limb deformations," says Hopyan. The work is one of the first times a research team has applied biophysical methods to the study of cell and tissue mechanics in live mammals. Possible long-term outcomes in this research field could result in a drug that could alter mechanical stress on cells in embryos, repairing what would otherwise have become a deformed limb.
10.1038/ncb3156
Physics
The cart before the horse: A new model of cause and effect
Albert C. Yang et al., Causal decomposition in the mutual causation system, Nature Communications (2018). DOI: 10.1038/s41467-018-05845-7 Journal information: Nature Communications
http://dx.doi.org/10.1038/s41467-018-05845-7
https://phys.org/news/2018-09-cart-horse-effect.html
Abstract While rich medical, behavioral, and socio-demographic data are key to modern data-driven research—Inference of causality in time series has been principally based on the prediction paradigm. Nonetheless, the predictive causality approach may underestimate the simultaneous and reciprocal nature of causal interactions observed in real-world phenomena. Here, we present a causal-decomposition approach that is not based on prediction, but on the covariation of cause and effect: cause is that which put, the effect follows; and removed, the effect is removed. Using empirical mode decomposition, we show that causal interaction is encoded in instantaneous phase dependency at a specific time scale, and this phase dependency is diminished when the causal-related intrinsic component is removed from the effect. Furthermore, we demonstrate the generic applicability of our method to both stochastic and deterministic systems, show the consistency of the causal-decomposition method compared with existing methods, and finally uncover the key mode of causal interactions in both modelled and actual predator–prey systems. Introduction Since the philosophical inception of causality by Galilei 1 and Hume 2 , according to which cause must precede effect in time, the scientific criteria for assessing causal relationships between two time series have been dominated by the notion of prediction, as proposed by Granger 3 . Namely, the causal relationship from variable A to variable B is inferred if the history of variable A is helpful in predicting the value of variable B , rather than using information from the history of variable B alone. Granger causality is based on the time dependency between cause and effect 4 . As discussed by Sugihara et al. 5 , Granger causality is critically dependent on the assumption that cause and effect are separable 3 . While separability is often satisfied in linear stochastic systems where Granger causality works well, it might not be applicable in nonlinear deterministic systems where separability appears to be impossible because both cause and effect are embedded in a non-separable higher-dimensional trajectory 6 , 7 . Consequently, Sugihara et al. 5 proposed the convergent cross-mapping (CCM) method based on state-space reconstruction. In this context, cause and effect are state dependent, and variable A is said to causally influence variable B , although counterintuitively, if the state of variable B can be used to predict the state of variable A in the embedded space, and this predictability improves (i.e., converges) as the time series length increases. Existing methods of detecting causality in time series are predominantly based on the Bayesian 8 concept of prediction. However, cause and effect are likely simultaneous 9 . The succession in time of the cause and effect is produced because the cause cannot achieve the totality of its effect in one moment. At the moment when the effect first manifests, it is always simultaneous with its cause. Moreover, most real-world causal interactions are reciprocal; examples include predator–prey relationships and the physiologic regulation of body functions. In this sense, predictive causality may fail because the attempt to estimate the effect with the history of the cause is compromised as the history of the cause is already simultaneously influenced by the effect itself, and vice versa.
Another constraint of the generalised prediction framework is that it requires a priori knowledge of the extent of past history that may influence and predict the future, such as the time lag between cause and effect in Granger's paradigm, or the embedding dimensions in state-space reconstructions such as CCM. Furthermore, a causality assessment is incomplete if it is based exclusively on time dependency or state dependency. Time series commonly observed in nature, including those from physiologic systems or spontaneous brain activity, contain oscillatory components within specific frequency bands 10 , 11 . Identification of frequency-specific causal interaction is essential to understand the underlying mechanism 12 , 13 . Furthermore, the application of either linear Granger causality or the nonlinear CCM method alone is insufficient to accommodate the complex causal compositions typically observed in real-world data that blend oscillatory stochastic and deterministic mechanisms. Here, we present a causal-decomposition analysis that is not based on prediction, and more importantly, is based neither on time dependency nor on state dependency, but on the instantaneous phase dependency between cause and effect. The causal decomposition essentially involves two assumptions: (1) any cause–effect relationship can be quantified with instantaneous phase dependency between the source and target decomposed as intrinsic components at a specific time scale, and (2) the phase dynamics in the target originating from the source are separable from the target itself. We define the cause–effect relationship between two time series according to the covariation principle of cause and effect 1 : cause is that which put, the effect follows; and removed, the effect is removed; thus, variable A causes variable B if the instantaneous phase dependency between A and B is diminished when the intrinsic component in B that is causally related to A is removed from B itself, but not vice versa. To achieve this, we use the ensemble empirical mode decomposition (ensemble EMD) 14 , 15 , 16 to decompose a time series into a finite number of intrinsic mode functions (IMFs) and identify the causal interaction that is encoded in instantaneous phase dependency between two time series at a specific time scale. We validate the causal-decomposition method with both stochastic and deterministic systems and illustrate its application to ecological time series data of prey and predators. Results Illustration of the causal-decomposition method Figure 1 depicts how the causal decomposition can be used to identify the predator–prey causal relationship of Didinium and Paramecium 17 . Briefly, we decomposed the time series of Didinium and Paramecium into two sets of IMFs, and determined the instantaneous phase coherence 18 between comparable IMFs from the two time series (Fig. 1a ). Orthogonality and separability tests were performed to determine the ensemble EMD parameter (i.e., added noise level) that minimises the nonorthogonal leakage and root-mean-square of the correlation between the IMFs, thereby ensuring the orthogonality and separability of the IMFs (Fig. 1d, e ). Subsequently, we removed one of the IMFs (e.g., IMF 2) from Paramecium (Fig. 1b ; subtract IMF 2 from the original Paramecium signal) and redecomposed the time series. We then calculated the phase coherence between the original IMFs of Didinium and redecomposed IMFs of Paramecium .
This decomposition and redecomposition procedure was repeated for IMF 2 of Didinium (Fig. 1c ) and generalised to all IMF pairs. This procedure enabled us to examine the differential effect of removing a causal-related IMF on the redistribution of phase dynamics in cause-and-effect variables. The relative ratio of variance-weighted Euclidean distance between the phase coherence of the original IMFs (i.e., Fig. 1a ) and redecomposed IMFs (i.e., Fig. 1b, c ) is therefore an indicator of causal strength (Fig. 1f ), where a ratio of 0.5 indicates either no causality is detected or no difference in causal strength in the case of reciprocal causation, and a ratio approaching 0 or 1 indicates a strong causal influence from either variable A or variable B , respectively. Fig. 1 Causal-decomposition analysis. a Ensemble empirical mode decomposition (EEMD) analysis of Didinium (blue line) and Paramecium (red line) time series yields five Intrinsic Mode Functions (IMFs) (i.e., stationary components) and a residual trend (i.e., non-stationary trend). Each IMF operates at a distinct time scale. Phase coherence values between comparable IMFs are shown at the right side of the panel. b Removal of an IMF (e.g., IMF 2) from Paramecium with redecomposition leads to a decreased phase coherence between the original Didinium IMFs and redecomposed Paramecium IMFs. c Repeating the same procedure in the Didinium time series resulted in a smaller decrease in phase coherence between the redecomposed Didinium IMFs and the original Paramecium IMFs. The causal strengths between Didinium and Paramecium can be estimated by the relative ratio of variance-weighted Euclidean distance of the phase coherence between b and a (for Didinium ), and between c and a (for Paramecium ). The ability of EEMD to separate time series depends on the orthogonality and separability of the IMFs with added noise, which can be evaluated by d nonorthogonal leakages and e the root-mean-square of correlations between pairwise IMFs. The strategy of choosing the added noise level in the EEMD is to maximise the separability (minimise the root-mean-square of pairwise correlation values among IMFs <0.05) while maintaining acceptable nonorthogonal leakages (<0.05). A noise level r of 0.35 standard deviations of the time series was used in this case. f Generalisation of causal decomposition to each IMF uncovers a causal relationship from Didinium (blue bar) to Paramecium (red bar) in IMF 2 but not in the other IMFs, indicating a time scale-dependent causal interaction in the predator–prey system. Application to deterministic and stochastic models Figure 2 depicts the causal-decomposition analysis in both deterministic 5 and stochastic 10 models given in Eqs. 9 and 10 . The IMF with a causal influence identifies the key mechanism of the model data in stochastic (Fig. 2a ) and deterministic (Fig. 2b ) systems. These results indicate that the causal-decomposition method is suitable for separating causal interactions not only in the stochastic system, but also in the deterministic model where non-separability is generally assumed in the state space. Furthermore, we validated and compared the causal decomposition with existing causality methods in uncorrelated white noise with varying lengths, showing the consistency of causal decomposition in a short time series and under conditions where no causal interaction should be inferred (Fig. 3a ). In addition, we assessed the effect of down-sampling (Fig. 3b ) and temporal shift (Fig.
3c ) of a time series on causal decomposition and existing methods, showing that causal decomposition is less vulnerable to spurious causality due to sampling issues 3 and is independent of temporal shift, which significantly confounds the predictive causality methods 19 . Fig. 2 Stochastic and deterministic model evaluation. Application of causal decomposition to a stochastic system 10 and b deterministic system 5 (ensemble empirical mode decomposition; EEMD parameter r = 0.15 for both cases). A causal influence was identified in intrinsic mode function (IMF) 2 in both systems, capturing the main mode of signal dynamics in each system (e.g., a lag order of 2 between the IMFs in a , and chaotic behaviour of the logistic model in b ). The causal decomposition is not only able to handle noisy data in the stochastic model, but it can also identify causal components in the deterministic model with the aid of EEMD in separating weakly coupled chaotic signals into identifiable IMFs. Data lengths: a 1000 data points; b 400 data points. Validation of causal-decomposition analysis We generated 10,000 pairs of uncorrelated white noise time-series observations with varying lengths ( L = 10–1000) and calculated causality based on various methods (Fig. 3a ). Causal decomposition exhibited a consistent pattern of causal strengths at 0.5 (the error bar denotes the standard error of causality assessment here and in the other panels), indicating that no spurious causality was detected, even in the case of the short noise time series. Causality in the CCM method was indicated by the difference in correlations obtained from cross-mapping the embedded state space. In the case of uncorrelated white noise, the difference in correlations should be approximately zero, indicating no causality. However, the CCM method detected spurious causality with differences of up to 0.4 in the crossmap correlations in the short time series, and the difference between the correlations decreased as the signal length increased. A high percentage or intensity of spurious causality was also observed in Granger's causality and the mutual information from mixed embedding (MIME) method 20 . Fig. 3 Validation of causal-decomposition method. a The finite length effect on causality assessment. We generated 10,000 pairs of uncorrelated white noise time-series observations with varying lengths ( L = 10–1000) and calculated causality based on causal decomposition, convergent cross mapping (CCM), Granger causality, and the mutual information from mixed embedding (MIME) method 20 . Causal decomposition exhibited a consistent pattern of causal strengths at 0.5 (the shaded error bar denotes the standard error of causality assessment here and in the other panels), indicating that no spurious causality was detected, whereas spurious causality was observed in the CCM, Granger's causality, and MIME methods. b Effect of down-sampling on the various causality methods. The stochastic and deterministic models shown in Fig. 2 are used (the corresponding colour for each variable is shown in the figure). The time series were down-sampled by a factor of 1 to 10. The down-sampling procedure destroyed the causal dynamics in both models and made causal inference difficult in predictive causality analysis 19 . Causal-decomposition analysis revealed a consistent pattern of the absence of causality when the causal dynamics were destroyed as the down-sampling factor was >2. c Effect of temporal shift on the various causality methods.
Temporal shift (both lagged or advanced up to 20 data points) was applied to both the stochastic and deterministic time series. Causal decomposition exhibited a stable pattern of causal strength independent of a temporal shift up to 20 data points, while the predictive causality methods are sensitive to temporal shift. Next, we assessed the effect of down-sampling on the various causality methods (Fig. 3b ). The stochastic and deterministic models shown in Fig. 2 are used (the corresponding colour for each variable is shown in the figure). The time series were down-sampled by a factor of 1 to 10. For factor 1, the time series were identical to the original signals. The down-sampling procedure destroyed the causal dynamics in both models and made causal inference difficult in predictive causality analysis 19 . Causal-decomposition analysis revealed a consistent pattern of the absence of causality when the causal dynamics were destroyed as the down-sampling factor was >2. However, spurious causality was detected with the predictive causality methods when the signals were down-sampled. Finally, we evaluated the effect of temporal shift on the causality measures (Fig. 3c ). Temporal shift (both lagged or advanced up to 20 data points) was applied to both the stochastic and deterministic time series. Causal decomposition exhibited a stable pattern of causal strength independent of a temporal shift up to 20 data points. CCM showed reduced crossmap ability to detect causal interaction in the bi-directional deterministic system as the temporal shift increased in either direction, and was unable to show differences in crossmap ability for anterograde temporal shifts in the stochastic system. As anticipated, Granger's causality showed opposite patterns of causal interaction for anterograde and retrograde temporal shifts in both the deterministic and stochastic systems. MIME lost its predictability when the temporal shift was beyond 5 data points and was inconsistent in the stochastic system. Quantifying predator and prey relationship Figure 4 shows the results of applying causal decomposition to ecosystem data from the Lotka–Volterra predator–prey model 21 , 22 (Eq. 11 ; Fig. 4a ), wolf and moose data from Isle Royale National Park 23 (Fig. 4b ), and the Canada lynx and snowshoe hare time series reconstructed from historical fur records of Hudson's Bay Company 24 (Fig. 4c ). The causal decomposition invariably identifies the dominant causal role of the predator in the IMF, which is consistent with the classic Lotka–Volterra predator–prey model. Previously, the causality of such autonomous differential equation models was understood only in mathematical terms because there is no prediction-based causal factor 25 , yet our results indicated that the causal influence of this model can be established through the decomposition of instantaneous phase dependency. Fig. 4 Causal decomposition of predator–prey data. a Lotka–Volterra predator–prey model. b Wolf and moose time series from Isle Royale National Park in Michigan, USA 23 . c Canada lynx and snowshoe hare time series reconstructed from historical fur records of Hudson's Bay Company 24 . The intrinsic mode functions (IMFs) shown in the figure correspond to significant causal interactions identified in each observation ( a IMF 4, b IMF 3, c IMF 2).
Ensemble EMD parameter: a r = 0.4, b r = 0.3, c r = 0.3. Comparison of causal assessment in ecosystem data Figure 5 shows the comparison of causality assessment in these predator and prey data using different methods. In general, results showed that neither the Granger nor the CCM method consistently identifies predator–prey interactions in these data, indicating that the predator–prey relationship does not exclusively fit either the stochastic or deterministic chaos paradigms. The CCM result showed a top–down causal interaction between lynx and hare, and between Didinium and Paramecium 17 , of which the latter was consistent with the data presented by Sugihara et al. 5 However, the CCM method could not be used to detect causal interaction in the Lotka–Volterra predator–prey model, and it exhibited a cross-over of correlations in the wolf and moose data. Granger's causality detected top–down causal interaction in the Lotka–Volterra predator–prey model and the wolf and moose data, whereas bottom-up causal interaction was observed in the Didinium and Paramecium data, the latter also being observed in the supplementary data in Sugihara et al. 5 Inconsistency in causal strength was also observed in the results obtained with the MIME method. Fig. 5 Comparison of causal assessment in ecosystem data with existing methods. The ecosystem data include the Lotka–Volterra predator–prey model 21 , 22 (first row), Didinium and Paramecium data 45 (second row), wolf and moose data from the United States Isle Royale National Park 23 (third row), and lynx and hare data from trading records obtained from Hudson's Bay Company 24 (fourth row). The results were derived from the convergent cross mapping (CCM) (embedding dimension = 3), Granger's, and mutual information from mixed embedding (MIME) methods. The colours of lines and bars indicate the causal strength of a given predator (blue) or prey (red). In CCM, the difference in correlation values between predator and prey indicates the causal direction. In Granger causality, the F -test was used to assess the causal strength and the vertical dashed line denotes the significance threshold with P < 0.05. In the MIME method, the relative causal strength was represented by the difference in mutual information. Discussion The interdisciplinary problem of detecting causal interactions between oscillatory systems solely from their output time series has long attracted considerable attention. The motivation for causal-decomposition analysis is the concern that inference of causality has depended largely on the temporal precedence principle. In other words, observing the past over a limited period is insufficient to infer causality because that history is already biased. Instead, we followed another fundamental criterion of causal assessment proposed by Galilei 1 , the covariation of cause and effect: cause is that which put, the effect follows; and removed, the effect is removed. In this statement, however, the prediction of time series based on past history is neither required nor implied. Therefore, the complex dynamical process between cause and effect should be delineated through the decomposition of intrinsic causal components inherited in causal interactions.
It is noteworthy that our approach differs essentially from simply combining EMD with existing causality methods, such as assessing Granger's causality between paired IMFs of economic time series 26 , applying CCM to detect the nonlinear coupling of decomposed brain wave data 27 , or measuring time dependency between IMFs decomposed from stock market data 28 . The decomposition of time series with EMD alone may improve the separability of intrinsic components embedded in the time series data, but does not avoid the constraints inherited from the existing prediction-based causality methods. Furthermore, our approach does not neglect the temporal precedence principle, but emphasises the instantaneous relationship of causal interaction, and is thus more amenable to detecting simultaneous or reciprocal causation, which is not fully accounted for by predictive methods. Because our causal strength measurement is relative, it detects differential causality rather than absolute causality. Differential causality adds to the philosophical concept of mutual causality the notion that not all causal effects are equal, and it may fit the emerging research data better than linear and unidirectional causal theories do. In addition, causal decomposition using EMD fundamentally differs from the spectral extension of Granger's causality 29 in that the latter involves prior knowledge of history (e.g., autoregressive model order) and is susceptible to non-stationary artefacts. Furthermore, without resorting to frequency-domain decomposition, EMD bypasses the linear and stationary assumptions, and the limitation of the uncertainty principle imposed on data characteristics in Fourier analysis, and results in more precise phase and amplitude definition 30 . The operational definition of causal decomposition is in accordance with Granger's assumption on separability 3 but in a more complete form. We note that such a definition is distinct from the non-separability assumed by CCM. Clearly, CCM was developed under the constraints of a perfectly deterministic system, in which the state of the cause is encoded in the effect and is not separable from the effect itself. A state-space reconstruction approach such as CCM may be applicable to certain ecosystem data, such as predator and prey interactions, in which the variables represent non-separable components of the ecosystem 31 , but it is unlikely to generalise to all causal interactions being studied 32 . It is noteworthy that the effect of temporal shift on CCM shown in Fig. 3c is relevant to the extension of CCM to detect time-delayed causal interactions 33 . The extended CCM has been shown to capture bi-directional causal interactions in the deterministic system. However, in real-world data, detecting time-delayed causal interaction requires an arbitrary temporal shift of the time series, and the interpretation of such results is still of concern, as demonstrated in our Fig. 3c . Several limitations should be considered in interpreting the causal strength presented in this paper. First, causal decomposition represents a form of statistical causality and does not imply true causality, which requires the inclusion of all variables to conclude that a causal relationship exists 3 . Second, causal decomposition is limited to pairwise measurement in its current form, but we do not exclude the possibility of extending the current method to multivariate systems (e.g., functional brain networks) with the employment of multivariate EMD 34 , 35 in the future.
In that case, we would have to define and work with an absolute causal strength matrix, and the redecomposition would be from one to many; although the causal principle remains the same, the computation would be time-consuming. The use of EMD overcomes the difficulty of signal decomposition in nonlinear and non-stationary data, and it is applicable to both stochastic and deterministic systems in that the intrinsic components in the latter remain separable in the time domain. Furthermore, the central element in causal-decomposition analysis is the decomposition and redecomposition procedure, and we do not exclude the use of other signal decomposition methods 36 to detect causality in a similar manner. Therefore, the development of causal decomposition is not to complement existing methods, but to explore the use of the covariation principle of cause and effect for assessing causality. With the potential of the extension of ensemble EMD to multivariate EMD 34 , 35 , we anticipate that this causal decomposition approach will assist with revealing causal interactions in complex networks not accounted for by current methods. Methods Causal relationship based on instantaneous phase dependency We define the cause–effect relationship between Time Series A and Time Series B according to the fundamental criterion of causal assessment proposed by Galilei 1 : cause is that which put, the effect follows; and removed, the effect is removed; thus, variable A causes variable B if the instantaneous phase dependency between A and B is diminished when the intrinsic component in B that is causally related to A is removed from B itself, but not vice versa. $${\mathrm{Coh}}\left( {A,B\prime } \right) < {\mathrm{Coh}}\left( {A,B} \right)\sim {\mathrm{Coh}}\left( {A\prime ,B} \right)$$ (1) where Coh denotes the instantaneous phase dependency (i.e., coherence) between the intrinsic components of two time series, and the accent mark represents the time series from which the intrinsic components relevant to cause–effect dynamics were removed. The realisation of this definition requires two key treatments of the time series. First, the time series must be decomposed into intrinsic components to recover the cause–effect relationship at a specific time scale and instantaneous phase. Second, a phase coherence measurement is required to measure the instantaneous phase dependency between the intrinsic components decomposed from cause–effect time series. Empirical mode decomposition To achieve this, we decompose a time series into a finite number of IMFs by using the ensemble EMD 14 , 15 , 16 technique. Ensemble EMD is an adaptive decomposition method originating from EMD (the core of the Hilbert–Huang transform) for separating different modes of frequency and amplitude modulations in the time domain 14 , 15 . Briefly, EMD is implemented through a sifting process to decompose the original time-series data into a finite set of IMFs. The sifting process comprises the following steps: (1) connecting the local maxima or minima of a targeted signal to form the upper and lower envelopes by natural cubic spline lines; (2) extracting the first prototype IMF by estimating the difference between the targeted signal and the mean of the upper and lower envelopes; and (3) repeating these procedures to produce a set of IMFs, each representing a certain frequency–amplitude modulation at a characteristic time scale.
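As a concrete illustration of steps (1) and (2), a single sifting pass can be sketched in a few lines of Python (the function name is ours; boundary extension and other refinements used by production implementations are omitted, so this is a didactic sketch rather than a complete EMD):

```python
import numpy as np
from scipy.signal import argrelextrema
from scipy.interpolate import CubicSpline

def sift_once(t, x):
    """One sifting pass: subtract the mean of the upper and lower
    natural-cubic-spline envelopes from the signal (steps 1-2 above).
    Returns None when too few extrema remain to build stable envelopes."""
    t = np.asarray(t, dtype=float)
    x = np.asarray(x, dtype=float)
    maxima = argrelextrema(x, np.greater)[0]
    minima = argrelextrema(x, np.less)[0]
    if maxima.size < 4 or minima.size < 4:
        return None
    upper = CubicSpline(t[maxima], x[maxima], bc_type="natural")(t)
    lower = CubicSpline(t[minima], x[minima], bc_type="natural")(t)
    return x - 0.5 * (upper + lower)
```

In a full implementation this pass is iterated, a candidate is accepted as an IMF only once standard stopping criteria are met, and sifting then restarts on the remainder, as described next.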
The decomposition process is completed when no more IMFs can be extracted, and the residual component is treated as the overall trend of the raw data. Although IMFs are empirically determined, they remain orthogonal to one another, and may therefore contain independent physical meanings 15 , 37 . The IMFs decomposed from EMD enable us to use the Hilbert transform to derive physically meaningful instantaneous phase and frequency 14 , 29 . Each IMF represents a narrow-band amplitude- and frequency-modulated signal S ( t ) that can be expressed as $$S(t) = A(t)\cos \phi (t)$$ (2) where the instantaneous amplitude A and phase φ can be calculated by applying the Hilbert transform, defined as \(S_H = \frac{1}{\pi }\int \frac{S(t\prime )}{t - t\prime }\,{\mathrm{d}}t\prime\) ; \(A(t) = \sqrt {S^2(t) + S_H^2(t)}\) ; and \(\phi (t) = \arctan \left( \frac{S_H(t)}{S(t)} \right)\) . The instantaneous frequency is then calculated as the derivative of the phase function, ω( t ) = dφ( t )/d t . Thus, the original signal X can be expressed as the summation of all IMFs and the residual r , $$X(t) = \mathop {\sum}\nolimits_{j = 1}^k A_j(t)\,\exp \left( i{\int} \omega _j(t)\,{\mathrm{d}}t \right) + r$$ (3) where k is the total number of IMFs, A j ( t ) is the instantaneous amplitude of each IMF, and ω j ( t ) is the instantaneous frequency of each IMF. Previous studies have shown that IMFs derived with EMD can be used to delineate time dependency 38 or phase dependency 37 , 39 , 40 , 41 , 42 in nonlinear and non-stationary data. The ensemble EMD 15 , 16 , 43 is a noise-assisted data analysis method that further improves the separability of IMFs during the decomposition; it defines the true IMF components S j ( t ) as the mean of an ensemble of trials, each consisting of the signal plus white noise of a finite amplitude. $$S_j(t) = \lim _{N \to \infty }\frac{1}{N}\mathop {\sum}\nolimits_{k = 1}^N \left\{ S_j(t) + r \times w_k(t) \right\}$$ (4) where w k ( t ) is the added white noise and the braces denote the j th IMF extracted from the k th noise-added trial. The magnitude of the added noise r is critical to determining the separability of the IMFs (i.e., r is a fraction of the standard deviation of the original signal). The number of trials N in the ensemble must be large so that the added noise in each trial is cancelled out in the ensemble mean ( N = 1000 in this study). The purpose of the added noise in the ensemble EMD is to provide a uniform reference frame in the time–frequency space by projecting the decomposed IMFs onto comparable scales that are independent of the nature of the original signals. With the ensemble EMD method, the intrinsic oscillations at various time scales can be separated from nonlinear and non-stationary data with no a priori criterion on the time–frequency characteristics of the signal. Hence, the use of ensemble EMD could complement the constraints of separability in Granger's paradigm 44 and potentially capture simultaneous causal relationships not accounted for by predictive causality methods.
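In practice, both the ensemble decomposition and the instantaneous phase of each IMF (Eq. 2 ) can be obtained with off-the-shelf tools. A sketch assuming the interface of the third-party PyEMD package (distributed on PyPI as EMD-signal) together with scipy.signal.hilbert; the wrapper function is ours:

```python
import numpy as np
from scipy.signal import hilbert
from PyEMD import EEMD  # third-party package, installed as "EMD-signal"

def decompose_with_phase(x, noise_width=0.35, trials=1000):
    """Ensemble EMD of a 1-D signal plus the Hilbert instantaneous phase
    of every extracted component (rows of `imfs` are IMF 1, IMF 2, ...)."""
    eemd = EEMD(trials=trials, noise_width=noise_width)
    imfs = eemd.eemd(np.asarray(x, dtype=float))  # shape: (n_imfs, len(x))
    phases = np.angle(hilbert(imfs, axis=1))      # instantaneous phase per IMF
    return imfs, phases
```

Setting noise_width to 0.35 mirrors the r = 0.35 standard deviations used for the Didinium–Paramecium example above, and trials=1000 matches the ensemble size N used in this study.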
Orthogonality and separability of IMFs Because r is the only parameter involved in the causal-decomposition analysis, the strategy of selecting r is to maximise the separability while maintaining the orthogonality of the IMFs, thereby avoiding spurious causal detection resulting from poor separation of a given signal. We calculated the nonorthogonal leakage 14 and the root-mean-square (RMS) of the pairwise correlations of the IMFs for each r in increments of 0.05 between 0.05 and 1. A general guideline for selecting r in this study is to minimise the RMS of the pairwise correlations of the IMFs (ideally under 0.05) while maintaining the nonorthogonal leakage also under 0.05. Phase coherence Next, the Hilbert transform is applied to calculate the instantaneous phase of each IMF and to determine the phase coherence between the corresponding IMFs of two time series 18 . Each corresponding pair of IMFs from the two time series, denoted as S 1 j ( t ) and S 2 j ( t ), can be expressed as $$S_{1j}(t) = A_{1j}(t)\cos \phi _{1j}(t) \quad {\mathrm{and}} \quad S_{2j}(t) = A_{2j}(t)\cos \phi _{2j}(t),$$ (5) where A 1 j and φ 1 j can be calculated by applying the Hilbert transform, defined as \(S_{1jH} = \frac{1}{\pi }\int \frac{S_{1j}(t\prime )}{t - t\prime }\,{\mathrm{d}}t\prime\) , \(A_{1j}(t) = \sqrt {S_{1j}^2(t) + S_{1jH}^2(t)}\) , and \(\phi _{1j}(t) = \arctan \left( \frac{S_{1jH}(t)}{S_{1j}(t)} \right)\) ; and similarly for S 2 jH , A 2 j , and φ 2 j . The instantaneous phase difference is simply expressed as \(\Delta \phi _{12j}(t) = \phi _{2j}(t) - \phi _{1j}(t)\) . If two signals are highly coherent, then the phase difference is constant; otherwise, it fluctuates considerably with time. Therefore, the instantaneous phase coherence measurement Coh can be defined as $${\mathrm{Coh}}\left( S_{1j},S_{2j} \right) = \frac{1}{T}\left| \int_0^T e^{i\Delta \phi _{12j}(t)}\,{\mathrm{d}}t \right|$$ (6) Note that the integrand (i.e., \(e^{i\Delta \phi _{12j}(t)}\) ) is a vector of unit length on the complex plane, pointing in the direction that forms an angle of \(\Delta \phi _{12j}(t)\) with the + x axis. If the instantaneous phase difference varies little over the entire signal, then the phase coherence is close to 1. If the instantaneous phase difference changes markedly over time, then the coherence is close to 0, resulting from adding a set of vectors pointing in all possible directions. This phase coherence definition allows the instantaneous phase dependency to be calculated without being subject to the effect of time lag between cause and effect (i.e., the time precedence principle), thus avoiding the constraints of time lag in predictive causality methods 10 . Causal decomposition between two time series With the decomposition of the signals by ensemble EMD and measurement of the instantaneous phase coherence between the IMFs, the most critical step in the causal-decomposition analysis is again based on Galilei's principle: the removal of an IMF followed by redecomposition of the time series (i.e., the decomposition and redecomposition procedure).
If the phase dynamics of an IMF in a target time series are influenced by the source time series, removing this IMF from the target time series (i.e., subtracting the IMF from the original target time series) with redecomposition into a new set of IMFs results in the redistribution of phase dynamics into the emptied space of the corresponding IMF. Furthermore, because the causal-related IMF is removed, redistribution of the phase dynamics into the corresponding IMF would be exclusively from the intrinsic dynamics of the target time series, which is irrelevant to the dynamics of the source time series, thus reducing the instantaneous phase coherence between the paired IMFs of the source time series and the redecomposed target time series. By contrast, this phenomenon does not occur when a corresponding IMF is removed from the source time series, because the dynamics of that IMF are intrinsic to the source time series and removal of that IMF with redecomposition would still preserve the original phase dynamics from the other IMFs. Therefore, this decomposition and redecomposition procedure enables quantifying the differential causality between the corresponding IMFs of two time series. Because each IMF represents a dynamic process operating at a distinct time scale, we treat the phase coherence between the paired IMFs as coordinates in a multidimensional space, and quantify the variance-weighted Euclidean distance between the phase coherence of the paired IMFs decomposed from the original signals and that of the paired original and redecomposed IMFs, expressed as follows: $$\begin{array}{l}D\left( S_{1j} \to S_{2j} \right) = \left\{ \mathop {\sum}\nolimits_{j = 1}^m W_j\left[ {\mathrm{Coh}}\left( S_{1j},S_{2j} \right) - {\mathrm{Coh}}\left( S_{1j},S_{2j}^\prime \right) \right]^2 \right\}^{\frac{1}{2}}\\ D\left( S_{2j} \to S_{1j} \right) = \left\{ \mathop {\sum}\nolimits_{j = 1}^m W_j\left[ {\mathrm{Coh}}\left( S_{1j},S_{2j} \right) - {\mathrm{Coh}}\left( S_{1j}^\prime ,S_{2j} \right) \right]^2 \right\}^{\frac{1}{2}}\\ W_j = \left( {\mathrm{Var}}_{1j} \times {\mathrm{Var}}_{2j} \right){\mathrm{/}}\mathop {\sum}\nolimits_{j = 1}^m \left( {\mathrm{Var}}_{1j} \times {\mathrm{Var}}_{2j} \right)\end{array}$$ (7) The range of D represents the level of absolute causal strength and is between 0 and 1. The relative causal strength between IMFs S 1 j and S 2 j can be quantified as the relative ratio of the absolute causal strengths \(D\left( S_{1j} \to S_{2j} \right)\) and \(D\left( S_{2j} \to S_{1j} \right)\) , expressed as follows: $$\begin{array}{l}C\left( S_{1j} \to S_{2j} \right) = D\left( S_{1j} \to S_{2j} \right)\big/\left[ D\left( S_{1j} \to S_{2j} \right) + D\left( S_{2j} \to S_{1j} \right) \right]\\ C\left( S_{2j} \to S_{1j} \right) = D\left( S_{2j} \to S_{1j} \right)\big/\left[ D\left( S_{1j} \to S_{2j} \right) + D\left( S_{2j} \to S_{1j} \right) \right].\end{array}$$ (8) This decomposition and redecomposition procedure is repeated for each paired IMF to obtain the relative causal strengths at each time scale, where a ratio of 0.5 indicates either that there is no causal relationship or equal causal strength in the case of reciprocal causation, and a ratio toward 1 or 0 indicates a strong differential causal influence from one time series to another.
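Under these definitions, Eqs. (6)–(8) reduce to a few lines of numpy. A minimal sketch (the helper names are ours; the coherence vectors would come from the decomposition and redecomposition steps described above, and the guard in the final step anticipates the singularity treatment described next):

```python
import numpy as np

def phase_coherence(phi1, phi2):
    """Eq. (6): magnitude of the time-averaged unit phasor exp(i * delta-phi)."""
    return np.abs(np.mean(np.exp(1j * (np.asarray(phi2) - np.asarray(phi1)))))

def relative_causal_strength(coh_orig, coh_drop_b, coh_drop_a, w):
    """Eqs. (7)-(8) over m paired IMFs.
    coh_orig   -- phase coherence of the original IMF pairs (length m)
    coh_drop_b -- coherence after removing/redecomposing an IMF of B (target)
    coh_drop_a -- coherence after removing/redecomposing an IMF of A (source)
    w          -- variance weights W_j (normalised products of IMF variances)
    Returns (C(A->B), C(B->A)); 0.5 indicates no differential causality."""
    coh_orig, w = np.asarray(coh_orig), np.asarray(w)
    d_ab = np.sqrt(np.sum(w * (coh_orig - np.asarray(coh_drop_b)) ** 2))  # D(A->B)
    d_ba = np.sqrt(np.sum(w * (coh_orig - np.asarray(coh_drop_a)) ** 2))  # D(B->A)
    if d_ab < 0.05 and d_ba < 0.05:
        d_ab, d_ba = d_ab + 1.0, d_ba + 1.0  # singularity guard (see below)
    c_ab = d_ab / (d_ab + d_ba)
    return c_ab, 1.0 - c_ab
```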
To avoid a singularity when both \(D\left( S_{1j} \to S_{2j} \right)\) and \(D\left( S_{2j} \to S_{1j} \right)\) approach zero (i.e., no causal change in phase coherence with the redecomposition procedure), D + 1 is used to calculate the relative causal strength when both absolute causal strength D values are <0.05. In summary, causal decomposition comprises the following three key steps: (1) decomposition of a pair of time series A and B into two sets of IMFs (e.g., IMFs A and IMFs B ) and determination of the instantaneous phase coherence between each pair of IMFs; (2) removal of an IMF in a given time series (e.g., time series A ), followed by the redecomposition procedure to generate a new set of IMFs (IMFs A ′) and recalculation of the instantaneous phase coherence between the original IMFs (IMFs B ) and the redecomposed IMFs (IMFs A ′); and (3) determination of the absolute and relative causal strength by estimating the deviation of phase coherence from the phase coherence of the original time series (IMFs A vs. IMFs B ) to either of the redecomposed time series (e.g., IMFs A ′ vs. IMFs B ). Validation of causal strength To validate the causal strength, a leave-one-sample-out cross-validation is performed for each causal-decomposition test. Briefly, we delete a time point for each leave-one-out test and obtain a distribution of causal strength over all runs when the total number of time points is <100, or over a maximum of 100 random leave-one-out tests when the total number of time points is higher than 100. The median value of causal strength is reported. Deterministic and stochastic model data The deterministic model was used in accordance with Sugihara et al. 5 based on a coupled two-species nonlinear logistic difference system, expressed as follows (initial values x (1) = 0.2 and y (1) = 0.4): $$\begin{array}{l}x\left( t + 1 \right) = x\left( t \right)\left[ 3.8 - 3.8x\left( t \right) - 0.02y\left( t \right) \right]\\ y\left( t + 1 \right) = y\left( t \right)\left[ 3.5 - 3.5y\left( t \right) - 0.1x\left( t \right) \right]\end{array}$$ (9) For the stochastic model, we used part of the example shown in Ding et al. 10 for Granger causality, which is expressed as follows (using random numbers as the initial values): $$\begin{array}{l}x\left( t + 1 \right) = 0.95\sqrt 2 x\left( t \right) - 0.9025x\left( t - 1 \right) + w_1(t)\\ y\left( t + 1 \right) = 0.5x\left( t - 1 \right) + w_2(t)\end{array}$$ (10) Ecological data and validation We assessed the causality measures in both modelled and actual predator and prey systems. The Lotka–Volterra predator–prey model 21 , 22 is expressed as follows: $${\mathrm{d}}x/{\mathrm{d}}t = \alpha x - \beta xy \\ {\mathrm{d}}y/{\mathrm{d}}t = \delta xy - \gamma y$$ (11) where x and y denote the prey and the predator, respectively ( α = 1, β = 0.05, δ = 0.02, γ = 0.5 were used in this study). Experimental data on Paramecium and Didinium are available online 45 ; these were obtained by scanning the graphics in Veilleux 17 and digitising the time series. Wolf and moose field data are available online at the United States Isle Royale National Park 23 . The lynx and hare data were reconstructed from fur trading records obtained from Hudson's Bay Company 24 . The benchmark time series 46 was reconstructed from various sources in two periods (the 1844–1904 data were reconstructed from fur records, whereas the 1905–1935 data were derived from questionnaires) 24 .
We used the fur-record time series between 1900 and 1920 for illustrative purposes. Comparison with other causality methods We compared causal decomposition with CCM, Granger causality, and the MIME method 20 . The details of the calculation of CCM 5 , Granger causality 10 , and MIME 20 have been documented in the literature. Of note, both CCM and Granger causality involve the selection of a lag order. In this paper, a lag order (i.e., embedding dimension) of 3 was chosen for the application of the CCM method to the ecosystem data 5 , and the lag order in the Granger causality analysis was selected by the Bayesian information criterion. MIME is an entropy-based causality method that also employs the time-precedence principle 47 and is equivalent to Granger causality under certain conditions 48 . Code availability The source code for the causal-decomposition analysis (including ensemble EMD) is implemented in Matlab (Mathworks Inc., Natick, MA, USA), and the current version (causal-decomposition-analysis-v1.0) or any future versions of the code will be available at GitHub. Data availability The Didinium and Paramecium data that support the findings of this study are available in . Wolf and moose field data are available online at the United States Isle Royale National Park. Lynx and hare data are available online at . Change history 06 September 2018 This Article was originally published without the accompanying Peer Review File. This file is now available in the HTML version of the Article; the PDF was correct from the time of publication.
Natural little scientists, human babies love letting go of things and watching them fall. Baby's first experiment teaches them about more than the force of gravity. It establishes the concept of causality—the relationship between cause and effect that all human knowledge depends on. Let it go, it falls. The cause must precede its effect in time, as scientists from Galileo in the 16th century to Clive Granger in 1969 have defined causality. But in many cases, this one-way relationship between cause and effect fails to accurately describe reality. In a recent paper in Nature Communications, scientists led by Albert C. Yang, MD, Ph.D., of Beth Israel Deaconess Medical Center, introduce a new approach to causality that moves away from this temporally linear model of cause and effect. "The reality in the real-world is that cause and effect are often reciprocal, as in the feedback loops seen in physiologic/endocrine pathways, neuronal regulation, ecosystems, and even the economy," said Yang, a scientist in the Division of Interdisciplinary Medicine and Biotechnology. "Our new causal method allows for mutual or two-way causation, in which the effect of a cause can feed back to the cause itself simultaneously." Yang and colleagues' new approach defines causality independently of time. Their covariation principle defines a cause as that which, when present, is followed by the effect, and which, when removed, removes the effect. The team demonstrates the new approach by applying it to predator–prey systems. Moreover, Yang and colleagues showed that their model works well in systems where other causality methods fail. "I would expect the method to represent a breakthrough of causal assessment of observational data," said Yang. "It can be applied to a wide range of causal questions in the scientific field."
10.1038/s41467-018-05845-7
Other
Primitive fossil bear with a sweet tooth identified from Canada's High Arctic
Xiaoming Wang et al, A basal ursine bear (Protarctos abstrusus) from the Pliocene High Arctic reveals Eurasian affinities and a diet rich in fermentable sugars, Scientific Reports (2017). DOI: 10.1038/s41598-017-17657-8 Journal information: Scientific Reports
http://dx.doi.org/10.1038/s41598-017-17657-8
https://phys.org/news/2017-12-primitive-fossil-sweet-tooth-canada.html
Abstract The skeletal remains of a small bear ( Protarctos abstrusus ) were collected at the Beaver Pond fossil site in the High Arctic (Ellesmere I., Nunavut). This mid-Pliocene deposit has also yielded 12 other mammals and the remains of a boreal-forest community. Phylogenetic analysis reveals this bear to be basal to modern bears. It appears to represent an immigration event from Asia, leaving no living North American descendants. The dentition shows only modest specialization for herbivory, consistent with its basal position within Ursinae. However, the appearance of dental caries suggests a diet high in fermentable carbohydrates. Fossil plant remains, including diverse berries, suggest that, like modern northern black bears, P . abstrusus may have exploited a high-sugar diet in the fall to promote fat accumulation and facilitate hibernation. A tendency toward a sugar-rich diet appears to have arisen early in Ursinae, and may have played a role in allowing ursine lineages to occupy cold habitats. Introduction In 1970, Philip Bjork described a small fossil bear from the Pliocene Glenn’s Ferry Formation of southwestern Idaho. Based on a single m1 as the holotype, he was understandably perplexed and named it Ursus abstrusus . Additional material has not been forthcoming since its initial description, and this bear has remained an enigma. Hence the discovery in the 1990s of a similar bear, represented by more complete fossils, in the Pliocene of the Canadian High Arctic throws much-needed light on the mystery (Fig. 1 ). In addition to resolving the riddle of Ursus abstrusus , with a moderately complete skull and lower jaws with associated postcranials, the new materials present a rare opportunity to fill a large gap in our knowledge of the North American High Arctic at a time in the early Pliocene when mean annual temperatures in the High Arctic were ~22 °C warmer than the present polar temperatures 1 . Such a warm climate supported an extensive boreal-type forest biome 2 , 3 , radically different from today’s arid polar tundra 4 . Thus the evidence of this primitive bear in an extinct polar forest offers valuable information about the diet and habitat of this basal ursine. Figure 1 Map of key basal ursine localities in Asia, Europe, and North America and routes of dispersal. The Beaver Pond site is indicated by red star 1 within the Arctic Circle and the type locality of Protarctos abstrusus in Idaho is red star 2, near the edge of the map. For much of the Neogene the Bering isthmus would have served as a land bridge, allowing for an Arctic biotic continuity between Eurasia and North America. By 3.5 Ma the Bering Strait was open, although mammalian dispersal could have been permitted by seasonal sea ice. Pliocene (5 Ma) paleogeography map modified from Wang et al . 86 Fig. 1 and Scotese 87 . Full size image The fossil record of basal ursines has improved with recent discoveries of three relatively complete specimens of basal ursines from China – a very advanced Ursavus 5 and a very primitive Protarctos 6 , 7 . We are now in a position to more tightly bracket the North American Pliocene bears as well as to provide a wealth of information about the cranial anatomy of basal ursines that was previously unavailable. The present description of P . abstrusus and a phylogenetic analysis combining molecular and morphological data of most fossil and living ursines for the first time allow a much more detailed view of the history of bears at the critical juncture of their initial diversification.
In addition, the presence of dental caries provides insight into the evolutionary history of diet of ursines. Systematic Paleontology Order Carnivora Bowdich,1821 8 . Family Ursidae Fischer van Waldheim, 1817 9 . Subfamily Ursinae Fischer von Waldheim, 1817. Tribe Ursini Fischer von Waldheim, 1817. Protarctos Kretzoi, 1945 10 . Genotypic Species Protarctos boeckhi 11 . Included Species Protarctos boeckhi 11 ; P. abstrusus (Bjork, 1970); P. yinanensis (Li, 1993); and P. ruscinensis 12 . Distribution Pliocene of Europe, Pliocene and early Pleistocene of Asia, and Pliocene of North America. Emended diagnosis Protarctos abstrusus is a basal ursine the size of a small Asian black bear. It has a flat forehead covering an uninflated frontal sinus; very high sagittal crest that projects backward to overhang the occipital condyle (Figs 2 and 3 ); P4 with a small, distinct protocone situated at the level of carnassial notch; M2 talon modestly developed but not very elongated (Figs 4 and 5 ); no pre-metaconid on m1, smooth posterior surface of m1 trigonid without zigzag pattern, presence of a distinct pre-entoconid; m2 shorter than m1 (Fig. 6 and Supplementary Fig. S2 ). It is about the same size as P . boeckhi and differs from it in the relatively smaller p4, presence of a tiny cuspule on lingual side of posterior crest in p4, and presence of a pre-entoconid on m1. P . abstrusus is also similar in size to P . yinanensis and can be distinguished from the latter in a flattened forehead, posteriorly projected sagittal crest, p4 posterior accessory cuspule on lingual side of posterior crest, an m1 pre-entoconid, and less elongated M1 and M2. P . abstrusus differs from P . ruscinensis by its lack of unique features of the latter such as a deep angular process, reduction of P4 protocone, and a single entoconid on m1. Figure 2 Right ( A ) and left ( B ) lateral views of the skull of Protarctos abstrusus (CMN 54380), composite laser scans of five individual cranial fragments, assembled in Avizo Lite (version 9.0.0) and visualized in PointStream 3D Image Suite (Version 3.2.0.0). Full size image Figure 3 Dorsal ( A ) and ventral ( B ) views of the skull of Protarctos abstrusus (CMN 54380), composite laser scans of five individual cranial fragments, assembled in Avizo Lite (version 9.0.0) and visualized in PointStream 3D Image Suite (Version 3.2.0.0). Full size image Figure 4 Left upper posterior teeth (P4-M2) of Protarctos abstrusus , CMN 54380; ( A ) buccal, and ( B ), lingual views. Full size image Figure 5 Stereo photos of left upper posterior teeth (P4-M2) of Protarctos abstrusus , CMN 54380. Full size image Figure 6 Stereo photos of lower cheek teeth of Protarctos abstrusus , occlusal views. ( A ), holotype, a cast of UMMP V53419 (cast, USNM 170872) from the Hagerman fossil site in Idaho, ( B ), right (CMN 52078B), and ( C ), left (CMN 52078A) dentaries from the Beaver Pond site in Nunavut. Full size image Taxonomic Remarks There is much disagreement over the generic taxonomy of ursines. Most mammalogists and some paleontologists include all living black bears (Asian and North American), brown bears, and polar bears in the genus Ursus but allow separate generic status for the sloth bear, Melursus , and sun bear, Helarctos 14 , 15 , 16 , 17 , 18 , 19 , although some include all of above in Ursus 20 , 21 and others use Thalarctos for the polar bear 22 , 23 . 
With a deep time perspective, vertebrate paleontologists either adopt some subgeneric names, such as Ursus ( Melursus ) for sloth bear, Ursus ( Selenarctos ) for Asian black bear, Ursus ( Euarctos ) for American black bear, Ursus ( Protarctos ) for some extinct bears 7 , 24 , 25 , 26 or elevate some of them to generic status 6 , 27 . In his remarks about carnivoran classification, Kretzoi 10 erected a new genus, Protarctos , for Ursus boeckhi Schlosser, 1899. Kretzoi’s name has been adopted either at full generic rank 6 or as a subgenus 7 , although many authors still prefer a more inclusive usage of Ursus 19 , 24 , 25 , 28 , 29 , 30 . In our cladistic framework in this study, some generic reassignment becomes necessary to maintain monophyly, especially in light of the general preference to giving sloth and sun bears distinct generic status. Protarctos abstrusus (Bjork, 1970), new combination. Ursus abstrusus Bjork, 1970: Ruez 2009:43. Holotype UMMP V53419 (locality UM-Ida 79-65), left dentary fragment with p4 alveolus, m1, and m2-3 alveoli (Fig. 6 and Supplementary Fig. S6 ) from Glenn’s Ferry Formation, southwestern Idaho; Hagerman Local Fauna, 3.48–3.75 Ma, early Pliocene. Referred Specimens CMN 54380 (accession number CR-97-18; same as below), a fragmentary partial skull including much of dorsal roof, left and right maxillary, partial left and right basicranial area, and isolated left and right petrosals, with left I1, I2-3 alveoli, P4-M2, and right I1, I2-3 alveoli, C1, P1-3 alveoli, P4-M1, and M2 alveolus; CMN 52078-A (CR-93-8A), partial left dentary with detached canine (CR-92-24), p1-3 alveoli, p4-m2, and m3 alveolus; CMN 52078-B (CR-93-8B), partial right dentary with detached canine, p1-3 alveoli, p4-m3; CMN 51779-A (CR-97-33A), nearly complete left and right pelvis; CMN 51779-B (CR-97-33B), nearly complete left femur; CMN 53990 (CR-92-1), nearly complete axis vertebra; NUFV 303 (SF-06-15), nearly complete right radius; NUFV 304 (SF-06-17), cervical vertebrae, C3; CMN 53989 (CR-92-2), partial lumbar vertebra (museum label: 7th?); CMN 53984 (CR-95?-0), left tarsal IV; CMN 53988 (CR-96-43), right metacarpal III (museum label: slightly smaller than a modern male black bear; maximum length 68.4); CMN 53982 (CR-93-69), metacarpal or metatarsal IV (maximum length 60.2); CMN 53985 (CR-95-33), proximal phalanx (maximum length 33.0 mm); CMN 53980 (CR-93-36), proximal phalanx (maximum length 36.5); CMN 53981 (CR-93-43), left medial phalanx (maximum length 28.7 mm); CMN 53983 (CR-94-102), medial phalanx (maximum length 22.6 mm; width 14.5 mm); CMN 53987 (CR-96-31), distal phalanx (maximum length 31.3 mm). Locality and Age The Beaver Pond site, 78° 33′N 82° 22′W, is a >20 m succession of fine to coarse cross-bedded fluvial sands conformably overlain by cobble gravels interpreted to be glacial outwash and capped by 2 m of till on the northeastern edge of an interfluvial plateau southeast of Strathcona Fiord on Ellesmere Island, Nunavut 31 , 32 (red star 1 in Fig. 1 ). A peat deposit near the base of the sequence, up to 2.4 m thick, produced exceptionally well-preserved plant, invertebrate and vertebrate remains (Supplementary Fig. S2 ), and is disconformably overlaying light-colored, tilted Eocene sediments. Abundant beaver-cut branches and cut saplings of larch trees suggest that the peat growth may have been promoted by beaver activity. 
Further supporting this view are the skeletal remains of multiple beaver individuals, and two clusters of beaver-cut branches found within the peat unit, at least one of which was interpreted to be the core of a dam 32 , 33 . Using terrestrial cosmogenic nuclide (TCN) burial dating 34 , four samples of quartz-rich coarse sand from above the peat unit yielded a weighted mean date of >3.4 + 0.6/−0.4 Ma, suggesting the peat accumulation was formed during a mid-Pliocene warm phase 31 . Paleoenvironments and Associated Flora and Fauna At 78°N, the Beaver Pond site on Ellesmere Island is presently extremely cold and arid, with ice sheets, permafrost, and sparse vegetation. During the mid-Pliocene, the Canadian High Arctic would have been forested, and the latitudinal gradient was much less than modern, so that although global temperatures were 3-4 degrees warmer than modern, the mean annual temperature of the terrestrial High Arctic was ~22 °C warmer (Fletcher et al . 2017). The Beaver Pond site comprises the remains of a Pliocene forest wetland community that was dominated by larch ( Larix groenlandii ), and also supported alder ( Alnus ) and birch ( Betula ), spruce ( Picea ), pine ( Pinus ) and cedar ( Thuja ) 1 . Multiple proxies consistently suggest a Pliocene mean annual temperature at the Beaver Pond site of slightly above freezing, with plant community composition indicating a warmest summer air-temperatures of ~20 °C 1 . Coldest winter temperatures have been recently estimated from vegetation to be ~−12 °C, though a prior estimation from beetle fauna suggest −27 °C 1 . Precipitation at the Beaver Pond site was also much greater in the Pliocene. Modern (1960–1990) Mean Annual precipitation in the area is 104 mm/year, whereas in the Pliocene the plant community implies precipitation to have been ~550 mm/year 1 . In fossil vertebrates, the Beaver Pond site, in combination with the nearby Fyles Leaf Bed fossil site, has produced four native North American mammals: a castoroidine beaver Dipoides sp., an archaeolagine rabbit Hypolagus cf. H . vetus , a small canine dog Eucyon , and a cameline camel (c.f. Paracamelus) 31 , 35 . Of these, Eucyon and Paracamelus had arrived at Eurasia near the Mio-Pliocene boundary, and they may be closely related to the ancestral stock that gave rise to the Eurasian forms. The rest of the faunal components include a frog, a percid fish, Sander teneri , of Eurasian origin, and ten mammal taxa which share considerable similarity to equivalent-aged faunal assemblages in East Asia, including a neomyine shrew Arctisorex polaris , a microtine-like cricetid similar to Microtodon or Promimomys , a large wolverine (cf. Plesiogulo ), a fisher ( Martes/Pekania )-like carnivore, a marten-like carnivore Martes cf. M . americana , a weasel Mustela sp., a meline badger Arctomeles sotnikovae , a three-toed horse Plesiohipparion , a possible cervoid Boreameryx braskerudi of unknown origin, plus an ursine bear “ Ursus abstrusus ” described herein 4 , 32 , 35 . The third author has also identified a duck closest to the Greater Scaup ( Aythya marila ). Distribution Known only in the Pliocene (Blancan) of southwestern Idaho and Ellesmere Island, Nunavut of Canadian Arctic. A possible record from Buckeye Creek Local Fauna of Nevada has been attributed to this species 36 , but it is too poorly known to be certain. Description Skeletal remains of the fossil bear were collected in different years (1992, 1993, 1996, 1997, 2006) from the Beaver Pond site (Supplementary Fig. S1 ). 
The skull specimen, with upper teeth (CMN 54380), appears to be a young adult (Figs 2 – 5 , Supplementary Part 3 and Figs S3–5 ). The exoccipital-basioccipital, exoccipital-supraoccipital, and premaxilla-maxilla sutures are largely fused, whereas the internasal and interfrontal elements are unfused. In the modern black bear this degree of fusion of the cranial elements suggests the individual is between five and seven years old 13 . The upper teeth, particularly the premolar and molar cheek teeth, are essentially pristine and show wear only on the tip of the upper right canine, incisors, and anterior edge of M1, which also suggests a relatively young individual (Figs 4 and 5 ). In contrast, there is extensive wear on the lower teeth (CMN 52078-A and CMN 52078-B), indicating the mandibles are from an individual much older than that of the cranium (Fig. 6 and Supplementary Fig. S6 ). The symphysial sutures of the left and right dentaries occlude perfectly, and the wear patterns on the lower teeth on either side are comparable, indicating a single individual for the lower jaws. There are thus a minimum of two individuals. Judging by the lack of fusion between epiphysis and diaphysis, the postcranial elements may belong to the younger individual represented by the skull (CMN 54380). Results Phylogenetic analysis A phylogenetic analysis was conducted using 24 taxa and 59 morphological characters (Supplementary Tables S5–S6). The taxa included five fossil ursines ( Ursavus primaevus , U . tedfordi , Protarctos abstrusus , P . yinanensis and Euarctos minimus ) and all seven living ursines ( Tremarctos ornatus , Melursus ursinus , Helarctos malayanus , Ursus thibetanus , U . americanus , U . arctos and U . maritimus ). A single shortest tree was found by New Technology search in TnT with a tree length of 145, consistency index of 0.51, and retention index of 0.70 (Fig. 7 ). The topology of modern taxa was constrained using nuclear DNA evidence of Kutschera et al . 37 and the whole genome analysis of Kumar et al . 38 . Assuming the molecular relationship is correct, six extra morphological steps (homoplasies) are required to account for this new relationship. Protarctos abstrusus appears basal to all modern bears, including Tremarctos , the spectacled bear of South America. Moreover, its phylogenetic position suggests a Eurasian origin for this lineage. Asia appears to be of vital importance in the early diversification of ursines: not only is Asia home to all basal ursines still alive today (sloth bear, sun bear, and Asian black bear), but the most advanced stem form leading to the ursines, Ursavus tedfordi , is also found in East Asia 5 , as are early ursines such as Protarctos yinanensis 6 , 7 (see SI for further discussion). Figure 7 Cladogram of select extinct and extant ursids based on our character matrix (Supplementary Table S6 ) within a molecular backbone phylogeny of Kutschera et al . 37 Fig. 2A . This tree is six steps longer than the unconstrained tree (see text for explanation). Taxa in green represent living bears. Sleeping bear symbols indicate hibernators. Full size image Body mass estimate Using regression parameters derived from species of living Ursidae (ref. 39 , Table 10.2), log 10 (body mass) = 2.02 × log 10 (skull length) − 2.80, we arrive at an estimated body mass of 97 kg for Protarctos abstrusus from its skull length (condylobasal length of Table S1 ) of 234 mm (CMN 54380). If m1 length is used (20.1 mm, see Table S2 ), a less desirable proxy 39 , an estimate of 79 kg results.
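As a quick arithmetic check of the skull-length regression quoted above (the 79 kg m1-based figure comes from a separate regression in ref. 39 whose coefficients are not reproduced here), a short sketch:

```python
import math

def ursid_body_mass_kg(skull_length_mm):
    # Regression from ref. 39, Table 10.2:
    # log10(body mass) = 2.02 * log10(skull length) - 2.80
    return 10 ** (2.02 * math.log10(skull_length_mm) - 2.80)

# Condylobasal length of CMN 54380 is 234 mm -> ~97 kg
print(round(ursid_body_mass_kg(234)))  # 97
```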
In the absence of superior proxies such as long bone cortical thickness 40 , 41 , the body mass estimate based on skull length is preferred here. P . abstrusus is thus close to the average male body mass of American black bears from California (86 kg) and heavier than their female counterparts (58 kg) 42 . Dental Caries Judging from dental wear, the partial skull (CMN 54380) and mandibles (CMN 52078-A, B) from the Beaver Pond site represent two individuals of Protarctos abstrusus . Both show evidence of dental caries, particularly on teeth that have sustained the most wear; a pit usually develops on the exposed dentine surface (Fig. 8 ). We used microCT scanning to investigate features of the left upper second molar (M2), and the right side lower first (m1) and second molars (m2) of the mandibular specimen, CMN 52078-A. The M2 has deep occlusal (Fig. 8 , M2.1) and proximal (Fig. 8 , M2.2) surface lesions. Scans show that both lesions are characterized by a thin zone of demineralization at the cavity boundary and deeper sclerosis of the dentinal tubules. There is also evidence of mild reparative (secondary) dentin formation in the adjacent pulp associated with each lesion. The lower carnassial, m1, revealed five structures of interest (Fig. 8 , m1.1–m1.5). Feature m1.1 (Fig. 8 , m1.1) is a fragment of dentin that is slightly elevated from a worn surface because of cracks in the desiccated dentin. Feature m1.2 represents a series of carious lesions extending apically to the worn surface (Fig. 8 , m1.2). Three small pit-like lesions and one large lesion (feature m1.3) are identified. MicroCT scans reveal that the lesions undercut the worn surface and show slight demineralization of their margins. Demineralization of dentinal tubules is a reaction to actively spreading caries, while dentinal sclerosis and formation of reparative dentin are evidence of protective responses. Feature m1.3 (Fig. 8 , m1.3) demonstrates subsurface demineralization extending about 0.1 mm from the margins of the cavity. Features m1.4 and m1.5 are early carious lesions (Fig. 8 , m1.4 and m1.5, respectively) that clearly extend below the worn surface. Seven features of interest were identified in the occlusal surface of m2 (Fig. 8 , m2.1-m2.7). Five early carious lesions are identified under the scale bars of m2.1, m2.3, m2.4, m2.5 and m2.6. Feature m2.2 shows demineralization of the pulpal surface of the lesion. There is also evidence of demineralization of the dentinal tracts between the depth of the lesion and the pulp as well as a mildly sclerotic peripheral zone. Further, it is most likely that some reparative (secondary) dentin has formed in the region of the pulp adjacent to the demineralized dentinal tracts. Feature m2.7 also shows slight demineralization of the lesion surface, slight demineralization of the dentinal tracts just pulpal to the lesion, and deeper sclerotic changes. Figure 8 MicroCT scans of P . abstrusus dental caries; all cross sectional images oriented buccolingually through depth of lesion except as noted. m1 (CMN 52078-A), occlusal view of m1 (reversed from left side) and reconstructed view of m1 with enamel in yellow and dentin in green; m1 . 1 , worn and slightly elevated surface with no caries (as control); m1 . 2 , small dentinal caries; m1 . 3 , note demineralization of dentin wall extending approximately 0.1 mm and dentinal sclerosis pulpal to depth of lesion; m1 . 4 , small carious lesion, note also loss of enamel on lingual surface due to breakage; m1 . 5 , small carious lesion.
m2 (CMN 52078-B), occlusal view of m2 and reconstructed view of m2 with enamel in yellow and dentin in blue; m2 . 1 , small lesion on left; m2 . 2 , carious lesion revealing demineralization of the pulpal surface, demineralization of the dentinal tracts extending to the pulp, and probably reparative (secondary) dentin formation in the pulp underlying the demineralized dentinal tracts; m2 . 3 , early lesion on left; m2 . 4 , early lesion; m2 . 5 , early lesion; m2 . 6 , small lesion; m2 . 7 , note areas of demineralization on the periphery of the lesion and demineralization and deeper sclerosis of dentinal tubules running towards the pulp. M2 (CMN 54380, left side), occlusal view of M2 and reconstructed view of M2 with enamel in yellow and dentin in pink; M2.1, note subsurface demineralization at depth of lesion, deeper sclerosis of dentinal tubules, and apparent constriction of distal pulp horn by reparative dentin; M2.2, slice oriented mesiodistally through depth of lesion, note subsurface demineralization in depth of this proximal surface lesion, deeper sclerosis of dentinal tubules, and mild constriction of pulp horn by reparative dentin. Note also proximal surface caries of distal surface of adjacent M1. Scale = 1.00 mm. Full size image For comparative purposes, we assessed the prevalence of caries in modern American black bear populations ( Ursus americanus ) using museum collections, as well as published data from museum collections and a living population. Dental caries in extant black bears are seen in both museum specimens and in vivo bears (0–44% prevalence) (Supplementary Table S3 ), in contrast to their general absence in other carnivores 43 . Moreover, examination of northern boreal forest black bears from the Canadian Museum of Nature collections revealed a prevalence of caries increasing with age (Supplementary Table S4 ). Discussion Analysis of new fossil material of Protarctos abstrusus from the North American High Arctic shows that, although ecomorphologically similar to the modern North American black bear ( Ursus americanus ), P . abstrusus represents a basal ursine. The most prominent cranial features of P . abstrusus are its relatively short rostrum, flat forehead above the orbit, and high sagittal crest that extends posteriorly and overhangs the occipital condyles (Fig. 9 ), characters that generally signal primitive status within Ursinae. P . abstrusus appears to represent an isolated immigration event from Eurasia to North America, separate from Ursus , during a time of Asian–North American high-latitude floral and faunal interchange 32 , when the high-latitude forests of Asia and North America were connected across the Beringian isthmus. Figure 9 Artist restoration of lateral view of skull and lower jaw of Protarctos abstrusus based on a composite of partial skull (CMN 54380) and right dentary (CMN 52078-B). Missing bones (ascending ramus, mandibular condyle, and angular process) are based on living black bears. Missing teeth (I2-3, P1-3, i2-3, p1-3, and m3) are restored based on their alveoli. The stage of wear on the lower teeth is drawn to match with those of the upper teeth. Art by Xiaoming Wang. Full size image The American black bear, by contrast, appears in the North American fossil record in the Early Pleistocene as a result of an independent dispersal event from Eurasia. Fossil records of the true American black bear, Ursus americanus Pallas, range from Irvingtonian to late Rancholabrean 44 , 45 .
From the Irvingtonian age, Brown 46 described abundant materials from the Conard Fissure, Arkansas, which he referred to U . americanus . Gidley 47 named Ursus ( Euarctos ) vitabilis from the Cumberland Cave, Maryland, which was later referred to Euarctos vitabilis 48 . By the late Rancholabrean, black bears were widespread throughout North America 49 . Several species or subspecies of late Pleistocene black bears have been named, which were sometimes confused with brown bears because of overlap in size and pronounced sexual dimorphism 50 , 51 , such as Ursus optimus from the late Pleistocene McKittrick brea deposits of southern California 52 , which was determined by Graham 53 to be a brown bear. Graham 53 concluded that only one species, Ursus americanus , is valid throughout the Pleistocene, with late Pleistocene fossil forms being larger than their living descendants, suggesting continuity of the black bear lineage in North America, as was also pointed out earlier by Kurtén 54 . Within Ursinae, P . abstrusus represents a stage of dental evolution that is intermediate in its specialization for ingesting plants, and significantly less specialized than modern bears (polar bears being an exception, showing an evolutionary reversal toward increased carnivory). The evolutionary history of ursines is generally characterized by a shift in dental specialization from carnivory to increased omnivory, with the posteriormost molars of more recent forms being more elongate and wrinkled, allowing for more crushing surface (Table S2 ). Although, morphologically, P . abstrusus is less specialized than modern bears, the presence of dental caries suggests that the diet of this 3.5-million-year-old transitional form already included a significant carbohydrate component. Dental evidence from the Beaver Pond site The P . abstrusus material appears to be from two individuals, including an apparent young adult, and both show dental caries, suggesting that their diets included high amounts of fermentable carbohydrates early in their lives. Simple sugars, such as glucose and fructose, are readily metabolized by many bacteria found in the oral biofilm into various acids. These acids demineralize enamel and dentin and may lead to dental caries 55 . Cariogenicity is highly correlated with the amount 56 , 57 and frequency 58 , 59 of sugar intake. The type of sugar consumed and the associated dental caries are also found to differ. Despite their high sugar content, raw fruits by themselves are not always implicated in cariogenicity, although a high frequency of consumption (up to 17 times a day) may induce caries 60 . In humans, there is convincing evidence that free-sugar consumption more than four times a day, or exceeding 6–10% of energy intake, will increase the incidence of dental caries 57 . Historically in humans, increases in the prevalence of dental caries have generally been associated with dietary shifts, linked with a reduction of nomadic lifestyles 61 , the development of agriculture in Neolithic populations, and even more so with industrialization 62 . In bears, carbohydrate intake may account for the appearance of dental caries (Tables S3 and S4 ), and may also be related to sedentary behavior, particularly for northern bears, which hibernate. Northern black bears hibernate five to seven months and survive better if they have high fat reserves 63 . In bears, the optimal diet for production of fat reserves appears to be one high in energy-rich carbohydrates (e.g., fruits) and low in protein.
High-latitude berries (such as bearberry) often have a wide, circumpolar distribution and can be found in a variety of northern habitats, including forest, woodland, wetland and tundra 64 . Black bears and grizzly bears in the boreal forest eat berry fruits in the autumn, but some fruits, such as cranberry and bearberry, frequently remain on the vine over winter and are important to bears coming out of hibernation in the early spring 65 , 66 , 67 . Bearberry ( Arctostaphylos uva-ursi ) fruits are relished by and highly important to black bears in the Pelly River Valley of Yukon Territory 66 . Berries are found in nearly 80% of bear scats collected during the fall period and consistently represent a large component of black bear diet in Alaska, with blueberries ( Vaccinium uliginosum ) being the most common 65 . However, fruit intake may be mitigated by factors such as fruit abundance and body size. For example, larger-bodied bears appear to tend toward carnivory, as they are less efficient than smaller bears at exploiting small fruits 68 . These factors may underlie the high variation in caries prevalence observed among populations of modern black bears (Table S3 ). Floral macrofossils from the Beaver Pond site show that a diversity of berries would have been available to P . abstrusus , including Empetrum nigrum (crowberry), Vaccinium sp. (e.g., blueberry, lingonberry), and Rubus idaeus (raspberry) 69 , and their abundance may have been enhanced following forest fires, which are evident at this site 33 . Therefore berries may have constituted a component of the Beaver Pond bear’s diet, particularly during the peak seasons, and their high sugar and acid contents could have resulted in the observed pronounced dental caries. The bear’s habitat may also have included honeybees, but this is speculative. The genus Apis includes the honeybees that are today the basis of the honey industry. The genus appears to have originated in Europe, dispersing into Asia and Africa as well as North America 70 . In North America the fossil record of this lineage is represented by a single species ( Apis nearctica ) from the Miocene (13 Ma) of Nevada 71 . The most likely route that the Apis lineage took to arrive in North America would have been via the Bering Isthmus 70 , which was present throughout the Neogene until ~5–7 Ma 72 . This land connection would have allowed for the existence of expansive high-latitude terrestrial continuity, spanning the northern reaches of the Eurasian and North American continents. Thus Apis in North America may have originally inhabited this Arctic biome before dispersing southward into the mid-latitudes of the continent, in which case the polar P . abstrusus may have had the opportunity to supplement its diet with honey. Aside from the Beaver Pond site fossil bear, all other basal ursines are known from the northern mid-latitudes (30–40° N) of Eurasia and North America (Fig. 1 , and Supplementary Information). The lack of fossil bears in the intervening latitudes reflects the scarcity of northern Neogene vertebrate fossil sites in these regions. Thus, the discovery of P . abstrusus at the 78° N Beaver Pond site fills a substantial geographical gap. The finding also shows that early ursines were adapted to northern forests with snowy winters. Moreover, the Beaver Pond site bear is a small-bodied bear with dental caries, associated with a polar forest rich in seasonal fruits (Fig. 10 ), suggesting that the northern populations of P . abstrusus
likely consumed large amounts of sugar-rich foods in the fall, a pattern consistent with the preparation for hibernation seen in modern bears. If so, the Beaver Pond site bear represents the earliest known, and most primitive, bear to have hibernated. Modern ursid hibernators include high latitude/altitude Asian black bears ( U . thibetanus ), northern American black bears ( U . americanus ), all brown bears ( U . arctos ), and female polar bears ( U . maritimus ) 73 . The fossil cave bears ( U . spelaeus and U . deningeri ) are also inferred to have hibernated 74 . All living bears also employ a reproductive strategy of embryonic diapause (delayed implantation), with the implicit adaptive value of reducing the cost of reproduction by truncating embryonic development and of optimizing the birth season at the most appropriate time 75 . Furthermore, these reproductive cycles may regulate metabolism by facilitating earlier entry of pregnant females into the winter-dormancy state 75 , 76 . In the context of the phylogeny of modern bears, the northern americanus-arctos-spelaeus-maritimus clade appears to have acquired hibernation from a single ancestor. The case for the Asian black bear is ambiguous because its nearest relatives, namely the sloth bear ( Melursus ) of India and the sun bear ( Helarctos ) of Southeast Asia, are not known to hibernate. The early diverging spectacled bear ( Tremarctos ) of South America is also a non-hibernator (Fig. 7 ). If the northern-adapted Beaver Pond bear was a hibernator, then hibernation can be traced to the ancestor of all modern bears. This would imply that the Asian black bear retains the primitive condition, and that the Eurasian ancestor of the spectacled bear, which would have passed through cold Beringian habitat when it first immigrated to North America 77 , also employed hibernation as part of its repertoire for winter survival. In this evolutionary scenario, modern non-hibernating bears are interpreted to have secondarily lost this trait, in association with adaptation to warmer habitats. Figure 10 Reconstruction of the mid-Pliocene Protarctos abstrusus in the Beaver Pond site area during late summer. An extinct beaver, Dipoides , is shown carrying a tree branch in water. Plants include black crowberry ( Empetrum nigrum ) with ripened berries along the path of the bear, dwarf birch ( Betula nana ) in the foreground, sweet gale ( Myrica gale ) carried by the beaver, sedges in the water margins, flowering buckbeans along the mounds behind the beaver, and larch trees in the distant background. Art by Mauricio Antón based on the research of this paper and with input on the plant community from Alice Telka. Full size image Methods Phylogenetic methods Our phylogenetic analysis combined character matrices from Abella et al . 78 and Qiu et al . 5 and added several relevant basal ursines not present in either of those studies (Table S1 ). All seven living ursine bears were included in the analysis. Our use of the term “ursine(s)” refers to the tribe Ursini, which includes all taxa that fall within the clade of living sloth, sun, black, brown, and polar bears plus their fossil relatives, to the exclusion of the tremarctine bears (Tremarctini). Together, ursines and tremarctines constitute the subfamily Ursinae.
Living ursids examined in this study include: Tremarctos ornatus , AMNH CA 2861, LACM 72530; Melursus ursinus , LACM 88916; Helarctos malayanus , LACM 52380; Ursus thibetanus , AMNH CA 1981, LACM 30781; Ursus americanus , AMNH CA 2886, AMNH CA 35005, LACM 92299; Ursus arctos , LACM 31257; Ursus maritimus , LACM 86096. Character coding and manipulation are done on Mesquite program 79 and phylogenetic analysis is performed on TnT (version 1.1, Dec. 2013) 80 . Initial search resulted in a single shortest tree of 139 steps (tree search parameters: Implicit Enumeration and New Technology search; both methods yielded the same result). This tree has a number of nodes for living ursids that contradict molecular phylogeny. We then constrained our search using the topology of extant taxa, fixing the relationship based on nuclear DNA in Kutschera et al . 37 and Kumar et al . 38 . Chronology For estimates of magnetic ages we adopt the ATNTS2012 Geomagnetic Polarity Time Scale (GPTS) of Hilgen et al . 81 . Our usage of the Plio-Pleistocene boundary (Neogene-Quaternary boundary) follows the recent decision by the International Commission on Stratigraphy at the boundary of magnetochrons C2r-C2An (2.581 Ma) 82 . Surface scanning of skull Skull elements of Protarctos abstrusus were scanned using an Arius 3-D laser scanner, digitally reassembled in PointStream software (Version 3.2.0.0) 83 and converted into a triangulated polymesh surface using Paraform 84 . This model was later adjusted in Avizo 9.0 85 by two of us (XW and SCW). Cranial measurements on the digital model were taken by tools provided in the above software. MicroCT scanning of teeth MicroCT examinations were made of m1 and m2 from CMN 52078 A (left dentary), and left M2 from CMN 54380 using a SkyScan 1173 scanner operating at 70 kV, 114 microA and the images were reconstructed with an isotropic voxel size of 12.08941 micrometer for m1 and 10.30843 micrometer for m2. The reconstructed BMP images were imported into Fiji (V. 2.0.0-rc-30/1.49t) where they were reoriented to make the occlusal plane horizontal to the image edge, cropped to the size of the crown, contrast adjusted and converted to 8 bit tif files. The resulting images were imported into Avizo 9.0.0 Lite 85 for analysis. The enamel and dentin were segmented and 3D models constructed. Cross-sectional images (oriented buccolingually) were made through regions of interest. Prevalence of caries in extant U . americanus Data on prevalence of carious lesions in modern populations of American black bear ( U . americanus ) were collected from published data (see Table S3 ). Also the upper teeth of 57 specimens from northern populations from the collections of Canadian Museum of Nature were examined for the presence of caries: CMNMA 15004, 17790, 17791, 17933, 17953, 17958, 17959, 17970, 18038, 1826, 1830, 1831, 1833, 1834, 1836, 1840, 1841, 1842, 1844, 1905, 19598, 19816, 19817, 19818, 21811, 21812, 21813, 21814, 21817, 21880, 22009, 24245, 24247, 26695, 26696, 30874, 30875, 30876, 30877, 31764, 31765, 34109, 34335, 34336, 34337, 34338, 34339, 34340, 34341, 34342, 36926, 37352, 39744, A20682, A20683, A20684, 9577. Age class for the latter was determined from the fusion of cranial element 13 .
Researchers from the Canadian Museum of Nature and the Natural History Museum of Los Angeles County have identified remains of a 3.5-million-year-old bear from a fossil-rich site in Canada's High Arctic. Their study shows not only that the animal is a close relative of the ancestor of modern bears—tracing its ancestry to extinct bears of similar age from East Asia—but that it also had a sweet tooth, as determined by cavities in the teeth. The scientists identify the bear as Protarctos abstrusus, which was previously known only from a tooth found in Idaho. Showing its transitional nature, the animal was slightly smaller than a modern black bear, with a flatter head and a combination of primitive and advanced dental characters. The results are published today in the journal Scientific Reports. "This is evidence of the most northerly record for primitive bears, and provides an idea of what the ancestor of modern bears may have looked like," says Dr. Xiaoming Wang, lead author of the study and Head of Vertebrate Paleontology at the Natural History Museum of Los Angeles County (NHMLA). "Just as interesting is the presence of dental caries, showing that oral infections have a long evolutionary history in the animals, which can tell us about their sugary diet, presumably from berries. This is the first and earliest documented occurrence of high-calorie diet in basal bears, likely related to fat storage in preparation for the harsh Arctic winters." The research team, which included co-author Dr. Natalia Rybczynski, a Research Associate and paleontologist with the Canadian Museum of Nature, was able to study recovered bones from the skull, jaws and teeth, as well as parts of the skeleton from two individuals. A view of the Beaver Pond fossil site, with a number of the animals and plants based on fossils recovered from the site; a bear family appears in the background. When this artwork was commissioned 15 years ago by the Canadian Museum of Nature, the bears' exact identity was not known; they can now be identified as Protarctos. Credit: Art by George "Rinaldinho" Teichmann. The bones were discovered over a 20-year period by Canadian Museum of Nature scientists, including Dr. Rybczynski, at a fossil locality on Ellesmere Island known as the Beaver Pond site. The peat deposits include fossilized plants indicative of a boreal-type wetland forest, and have yielded other fossils, including fish, beaver, small carnivores, deerlets, and a three-toed horse. The findings show that the Ellesmere Protarctos lived in a northern boreal-type forest habitat, where there would have been 24-hour darkness in winter, as well as about six months of ice and snow. "It is a significant find, in part because all other ancient fossil ursine bears, and even some modern bear species like the sloth bear and sun bear, are associated with lower-latitude, milder habitats," says co-author Dr. Rybczynski. "So, the Ellesmere bear is important because it suggests that the capacity to exploit the harshest, most northern forests on the planet is not an innovation of modern grizzlies and black bears, but may have characterized the ursine lineage from its beginning." Dr. Wang analyzed characteristics of fossil bear remains from around the world to identify the Ellesmere remains as Protarctos and to establish its evolutionary lineage in relation to other bears. Modern bears are wide-ranging, found from equatorial to polar regions. Their ancestors, mainly found in Eurasia, date to about 5 million years ago.
Digital reconstruction of the Canadian Arctic fossil bear, Protarctos abstrusus. Credit: Xiaoming Wang Fossil records of ursine bears (all living bears plus their ancestors, except the giant panda, which is an early offshoot) are poor and their early evolution controversial. The new fossil represents one of the early immigrations from Asia to North America, but it is probably not a direct ancestor of the modern American black bear. Of further significance is that the teeth of both Protarctos individuals show signs of well-developed dental cavities, which were identified following CT scans by Stuart White, a retired professor with the UCLA School of Dentistry. The cavities indicate that these ancient bears consumed large amounts of sugary foods such as berries. Indeed, berry plants are found preserved in the same Ellesmere deposits as the bear remains. "We know that modern bears consume sugary fruits in the fall to promote fat accumulation that allows for winter survival via hibernation. The dental cavities in Protarctos suggest that consumption of sugar-rich foods like berries, in preparation for winter hibernation, developed early in the evolution of bears as a survival strategy," explains Rybczynski.
10.1038/s41598-017-17657-8
Earth
Antarctic sea-ice expansion in a warming climate
Eui-Seok Chung et al, Antarctic sea-ice expansion and Southern Ocean cooling linked to tropical variability, Nature Climate Change (2022). DOI: 10.1038/s41558-022-01339-z Journal information: Nature Climate Change
https://dx.doi.org/10.1038/s41558-022-01339-z
https://phys.org/news/2022-04-antarctic-sea-ice-expansion-climate.html
Abstract A variety of hypotheses, involving sub-ice-shelf melting, stratospheric ozone depletion and tropical teleconnections, have been proposed to explain the observed Antarctic sea-ice expansion over the period of continuous satellite monitoring and the corresponding model–observation discrepancy, but the issue remains unresolved. Here, by comparing multiple large ensembles of model simulations with available observations, we show that Antarctic sea ice has expanded due to ocean surface cooling associated with multidecadal variability in the Southern Ocean that temporarily outweighs the opposing forced response. In both observations and model simulations, Southern Ocean multidecadal variability is closely linked to internal variability in the tropics, especially in the Pacific, via atmospheric teleconnections. The linkages are, however, distinctly weaker in simulations than in observations, accompanied by a marked model–observation mismatch in global warming resulting from potential model bias in the forced response and observed tropical variability. Thus, the forced response dominates in simulations, resulting in the apparent model–observation discrepancy. Main Continuous satellite observations since ~1979 indicate a pronounced interhemispheric asymmetry in sea-ice change, with a modest expansion in the Southern Ocean (SO) despite the global warming trend 1 , 2 . Unlike the marked sea-ice decline in the Arctic, Antarctic sea-ice expansion, which is accompanied by an overall cooling of sea surface temperature (SST) in the SO 3 , 4 , 5 , 6 , has generally not been reproduced by climate models over 1979−2014 under historical forcing 7 , 8 , 9 , 10 , 11 , 12 . Considering that Antarctic sea-ice changes affect ocean–atmosphere heat and momentum exchanges, ocean carbon uptake, ecosystems and the thermohaline circulation 13 , this marked discrepancy may have serious implications for the credibility of near-term model-projected climate change. It has been suggested that Antarctic sea-ice expansion has been due to increased freshwater fluxes 14 , 15 , 16 , 17 , 18 , 19 , 20 , 21 and changes in the Southern Annular Mode and associated SO circulation changes 5 , 22 , 23 , 24 , 25 , 26 , 27 , with these changes triggered by increased GHG concentrations and human-induced stratospheric ozone depletion. Although model deficiencies in representing these mechanisms cannot be ruled out 8 , 9 , 28 , 29 , several other studies have suggested that the Antarctic sea-ice expansion may have arisen from internal climate variability 3 , 4 , 7 , 9 , 11 , 30 , 31 , tied in part to climate variability in the Pacific and Atlantic Oceans 12 , 32 , 33 , 34 , 35 , 36 , 37 , 38 , 39 , 40 . The recent multiyear Antarctic sea-ice decline 2 , 8 , 36 , 41 , 42 , 43 seems to fit into this view. However, both the main cause of the satellite-observed sea-ice expansion (whether external forcing or internal variability) and the question of why models fail to reproduce observations under historical forcing remain unresolved 13 , 16 , 17 , 44 , 45 , 46 , 47 . Given that regional patterns of sea-ice trends are governed mainly by wind fields 48 , ref. 26 demonstrated in a given model that applying realistic wind forcing along with realistic SSTs is essential for reproducing the observations over the period 1990−present, during which marked sea-ice expansion occurred. This implies that climate models may have deficiencies in representing teleconnection processes that affect SO wind and SST fields.
One of the major obstacles to resolving these issues is the inherent difficulty in separating the observed changes over the relatively short period (1979−2014) into externally forced changes and internal variability. As the influence of internal variability on long-term trends diminishes with increasing time span 49 (Supplementary Text 1 ), we employ a long-term SST record in the SO (1950−2020) as a proxy for Antarctic sea ice. In this article, using the long-term proxy record and large-ensemble climate model simulations, we attempt to elucidate the main processes responsible for the satellite-observed sea-ice expansion and the causes of the model–observation discrepancy. Sea-ice and SST changes in the SO Before delving into the causes of the observed sea-ice expansion, we examine annual-mean total sea-ice extent (SIE) and SO (south of 50° S) SST trends over 1979–2014, a period for which continuous satellite observations are available and for which each of the models analysed in this study is represented by more than 15 ensemble members ( Methods ). The satellite observations indicate a statistically significant sea-ice expansion at a rate of 0.223 ± 0.087 × 10 6 km 2 decade −1 over this period (Fig. 1a , solid line in red), which is not captured by the model simulations analysed in this study (dark blue boxes tagged as Hist in Fig. 1a ). A marked model–observation discrepancy is also apparent over periods other than 1979–2014, but this discrepancy does not appear to grow further with increases in time span (Fig. 2a ). Fig. 1: Observed and model-simulated changes in annual-mean SIE and SST over the SO (south of 50° S). a , Box plots of model-simulated SIE trends over 29-year (yellow green) and 36-year (dark blue) periods for three cases: Hist, trends over 1950−1978 and 1979−2014 under historical forcing; PI, trends for all possible overlapping 29-year and 36-year segments of pre-industrial control runs; and PI + forced, PI trends with the corresponding ensemble-mean values for 1950−1978 and 1979−2014 added. The box covers the inter-quartile range with the line inside the box representing the median value across multi-ensemble models and whiskers denoting the maximum and minimum values. The red solid line denotes the satellite-observed 1979−2014 SIE trend with the accompanying dashed lines representing the standard error of the trend. b , Same as in a , but for SST trends. The orange solid line denotes the observed 1950−1978 SST trend averaged over four SST datasets: Extended Reconstructed Sea Surface Temperature (ERSST), Hadley Centre Sea Ice and Sea Surface Temperature (HadISST), Centennial in situ Observation-Based Estimates (COBE) and European Centre for Medium-Range Weather Forecasts Reanalysis v.5 (ERA5). The accompanying dashed lines represent minimum and maximum trends. The solid and dashed lines in red denote the corresponding observed SST trends over 1979−2014. c , Time series of SIE anomaly relative to the 1979−2020 means. The red dot denotes the SIE anomaly for September 1964 from the Nimbus-1 satellite. For model simulations, lines denote the ensemble-mean anomaly for individual models. The shading indicates inter-ensemble variability for the Community Earth System Model version 2 (CESM2) Large Ensemble with one and two standard deviations represented, respectively, by dark and light grey. d , Same as in c , but for SST anomaly. Note the reversed y axis direction in b and d . Full size image
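The trend values quoted above (e.g., 0.223 ± 0.087 × 10 6 km 2 decade −1 for SIE) are least-squares slopes reported with their standard errors. A generic Python sketch of that computation follows; it is an ordinary least-squares estimate for illustration only, omits any autocorrelation adjustment the authors may have applied, and the variable name `sie_annual` in the usage comment is hypothetical.

```python
import numpy as np

def trend_with_stderr(years, values):
    """OLS linear trend and its standard error, both in per-decade units."""
    t = np.asarray(years, dtype=float)
    y = np.asarray(values, dtype=float)
    tc = t - t.mean()                            # centered time axis
    slope = np.sum(tc * y) / np.sum(tc ** 2)     # per-year slope
    resid = y - y.mean() - slope * tc
    s2 = np.sum(resid ** 2) / (len(y) - 2)       # residual variance
    se = np.sqrt(s2 / np.sum(tc ** 2))           # standard error of the slope
    return 10.0 * slope, 10.0 * se

# e.g. trend_with_stderr(range(1979, 2015), sie_annual) for SIE in 10^6 km^2
```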
Fig. 2: Comparison of observed and model-simulated trends in Antarctic SIE and global-mean surface temperature. a , Timescale dependence of the model–observation discrepancy in the annual-mean Antarctic SIE trend. The abscissa denotes the end year of a given period starting in 1979. The solid line in red denotes the observed trend, with the accompanying shading representing the standard error of the trend. The solid line in dark blue denotes the median values of model-simulated trends across multi-ensemble models with the corresponding inter-quartile and entire range represented, respectively, by dark and light shading. GCM, global climate models. b , Scatterplot of annual-mean Antarctic SIE trend with the corresponding annual-mean global-mean surface temperature (GMST) trend over 1979−2014. The red dot denotes the observed trend; smaller dots in dark blue represent model-simulated trends. c , Same as in b , but for 1979−2020. Full size image The satellite-observed Antarctic sea-ice expansion over 1979–2014 results from large increases in the Indian and West Pacific sectors, especially in the Ross Sea, despite moderate decreases in the Amundsen and Bellingshausen seas (Fig. 3a ). As noted in previous studies 3 , 4 , 6 , the overall expansion of Antarctic sea ice occurred along with surface cooling in the SO (red lines in Fig. 1b ), particularly in the Pacific sector (Fig. 3b ). By contrast, the model-simulated forced response exhibits spatially coherent sea-ice decline (Fig. 3c ) and ocean surface warming (Fig. 3d ) over the same period, which is consistent with increasing global temperatures, although intermodel spread is substantial. Note that all models analysed in this study fail to capture the observed SIE/SST trends (dark blue boxes tagged as Hist in Extended Data Fig. 1a,d ). Fig. 3: Observed and model-simulated trends in annual-mean sea ice and SST over the period 1979−2014. a , b , Observed trends in sea-ice concentration (SIC, HadISST) ( a ) and SST (ERSST) ( b ). c , d , Same as in a and b , but for multimodel mean of the ensemble-mean trends for a given model. For observations, stippling indicates statistical significance of the computed trends at the 95% confidence level. For the multimodel mean, stippling denotes regions where the multimodel mean exceeds two standard deviations of the trend across the models. Full size image To determine whether the model–observation discrepancy arises from an insufficient number of ensemble members or from external forcing, model-simulated trends under pre-industrial conditions are computed from all possible overlapping 36-year segments of corresponding pre-industrial control runs (dark blue boxes tagged as PI in Fig. 1a,b and Extended Data Fig. 1b,e ). The observed SIE/SST trends over 1979–2014 lie within the range simulated by climate models in the absence of external forcings, in line with previous studies suggesting that the observed sea-ice expansion can be attributed in large part to internal variability 9 , 23 . Next, assuming that internal variability is state independent, the distribution in the PI case is adjusted by adding the ensemble-mean trend (Supplementary Table 2 ), which can be regarded as the externally forced response, for each model over 1979–2014 (dark blue boxes tagged as PI + Forced in Fig. 1a,b and Extended Data Fig. 1c,f ). Note that adding the forced response causes most climate models to fail in capturing the observed trends (Extended Data Fig. 1c,f ), although these models lack a potential forced response from ice-sheet freshwater input, which tends to increase sea-ice trends.
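The PI distributions described above are built from trends over all possible overlapping 36-year segments of each pre-industrial control run. A minimal sketch of that sampling, with an illustrative function name:

```python
import numpy as np

def overlapping_window_trends(annual_series, window=36):
    """Per-decade trends over every overlapping `window`-year segment of a control run."""
    y = np.asarray(annual_series, dtype=float)
    t = np.arange(window, dtype=float)
    return np.array([10.0 * np.polyfit(t, y[i:i + window], 1)[0]
                     for i in range(y.size - window + 1)])
```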
1c,f), although these models lack a potential forced response to ice-sheet freshwater input, which would tend to increase SIE trends. These results imply that the model–observation discrepancy stems from either an overestimated forced response or an underestimated internal variability in the model simulations rather than from an insufficient ensemble size. The potential overestimation of a model-simulated SIE decrease can arise not only from missing freshwater forcing in simulations 21, but also from model biases in the global-mean warming response 9, 28. Scatterplots of the SIE trends against the corresponding global-mean warming trends over 1979−2014 (Fig. 2b) and 1979−2020 (Fig. 2c) suggest that, as noted in ref. 9, the global-mean warming response is distinctly stronger in model simulations and thereby contributes to the model–observation discrepancy in SIE trends. The mismatch in the global-mean warming response appears to stem primarily from biases in model climate sensitivity 9, but part of the mismatch may arise from internal variability. For example, the time evolution of the model-simulated ensemble-mean, annual-mean global-mean surface temperature anomaly over 1950–2020 generally agrees well with observations, although some models appear to overestimate GHG-induced global warming (Extended Data Fig. 2a). The difference between observed and modelled ensemble-mean changes exhibits a strong negative trend over 1979–2014 (Extended Data Fig. 2b), which can be caused by an incorrect forced response to GHG forcing in model simulations. However, the negative trend does not persist into the 2010s despite continued increases in GHGs, implying that part of the model–observation mismatch is attributable to a lack of internal variability in the models. In fact, pacemaker experiments (Methods), in which observed SST anomalies in the eastern equatorial Pacific were assimilated, were able to capture this negative trend, which was driven by SST variability in the eastern equatorial Pacific (Extended Data Fig. 2c, dashed line), in agreement with previous studies 50. These results therefore suggest that the model–observation mismatches in both the SIE and global-mean warming responses can be attributed in part to tropical internal variability. To determine whether the observed SIE expansion over 1979−2014 can be explained by internal variability, the time evolution of the annual-mean SIE anomaly in the observations with respect to the 1979−2020 climatology is compared with the model simulations (Fig. 1c). While the ensemble-mean changes exhibit a largely monotonic decline over time (solid lines in colours other than red), the observations (red lines) show substantial multidecadal variability 3, 4, 7 over 1964−2020. According to the National Snow and Ice Data Center (NSIDC) G02135 data (solid line in red), the observed expansion over 1979−2014 was virtually cancelled out by a precipitous decline over the subsequent years, and SIE has since returned to near the mean values of the satellite record 2, 8, 9, 41, 43. Furthermore, the NSIDC-0192 (dashed line in red) and G00917 (dash-dotted line in red) data indicate that the observed expansion over 1979−2014 was preceded by a marked decline in the 1970s. As noted in ref. 7, the Nimbus-1 SIE anomaly for September 1964 (red dot) further suggests that the observed expansion over 1979−2014 was driven by internal variability.
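To make the trend comparisons above concrete, the following is a minimal sketch, in Python, of how an observed trend (with its standard error) and the PI and PI + forced distributions described above can be computed; the placeholder series and forced-trend value are illustrative assumptions, not the authors' data or code.

import numpy as np
from scipy import stats

def trend_per_decade(series):
    # Least-squares linear trend of an annual-mean series, in units per
    # decade, together with the standard error of the trend.
    years = np.arange(series.size)
    slope, _, _, _, stderr = stats.linregress(years, series)
    return 10.0 * slope, 10.0 * stderr

def overlapping_segment_trends(control_run, window=36):
    # Trends of all possible overlapping `window`-year segments of a
    # pre-industrial control run (the PI distribution).
    return np.array([trend_per_decade(control_run[i:i + window])[0]
                     for i in range(control_run.size - window + 1)])

rng = np.random.default_rng(0)
pi_sie = rng.normal(11.5, 0.3, 500)        # placeholder annual-mean SIE (10^6 km^2)
pi_trends = overlapping_segment_trends(pi_sie)
forced_trend = -0.25                        # placeholder ensemble-mean 1979-2014 trend
pi_plus_forced = pi_trends + forced_trend   # the "PI + forced" distribution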
Although the observed short-term trends are not always in agreement with model simulations, the sign of the observed changes over 1964−2020 is broadly consistent with that of the model's forced response. Due to the absence of continuous satellite observations before 1979, it is not possible to quantify sea-ice changes over the entire 71-year period. However, considering the close relationship between SIE and SST changes, comparisons of SST observations with the model's forced response can shed light on observed sea-ice changes. The sign of the observed long-term SST changes over the entire 71-year period largely agrees with the model's forced response (Fig. 1d). As shown in ref. 3, a strong contrast in the sign of observed SST trends is found between 1950−1978 (orange lines, warming) and 1979−2014 (red lines, cooling) in Fig. 1b, with the positive trends in the former period implying overall Antarctic sea-ice reduction, as in the model simulations (yellow-green boxes tagged as Hist in Fig. 1a and Extended Data Fig. 1a). By contrast, model simulations consistently exhibit SST increases, with the latter period showing substantially enhanced warming (Extended Data Fig. 1d,f). Interestingly, the observed SST warming over 1950−1978 is noticeably stronger than the model's forced response, implying that the influences of internal variability and GHG forcing on SSTs acted in the same direction. These characteristics further support the argument emphasizing a role for internal variability. Connections with tropical internal variability The overall expansion of Antarctic sea ice and the concurrent ocean surface cooling in the SO over 1979−2014 were accompanied by distinct cooling in the central-to-eastern tropical Pacific and pronounced warming in the northwest/southwest Pacific and the North Atlantic (Fig. 3a,b). This implies potential linkages between the observed changes in the SO and the Interdecadal Pacific Oscillation (IPO, Methods) and Atlantic Multidecadal Variability (AMV, Methods), as suggested by previous studies 32, 33, 34, 35, 36, 37, 38, 39, 40 and discussed in depth in ref. 51. We examine whether the unforced components of SO sea-ice and SST changes are linked primarily to the IPO and AMV. Assuming that the linear trends of SST over 1950−2020 represent the forced response, internal variability is estimated at each grid point through linear detrending. The detrended time series of the observed SO-mean SST anomaly indicate the presence of multidecadal variability (red lines in Fig. 4a), along with strong negative trends over 1979−2014 that appear to be closely linked to the markedly reduced global-mean warming response over the same period (Extended Data Fig. 2). The correlation between the detrended SST time series at each grid point and the corresponding SO-mean time series over 1950−2020 confirms the connection of the SO SSTs to both the IPO and AMV (Fig. 4b). The spatial patterns of the regression slope against the AMV (Extended Data Fig. 3a) and IPO (Extended Data Fig. 3b) indices further highlight these connections, which are also evident in other reconstructed/reanalysis datasets (Extended Data Fig. 4). The Pacific sector is linked mainly to the IPO during all seasons, while the Atlantic sector appears to have been more sensitive to the AMV, especially in austral summer and fall (Extended Data Fig. 5). Fig. 4: Multidecadal variability of SO SST and its connection to the IPO and AMV.
a, Time series of detrended annual-mean SO-mean SST changes over 1950−2020 (red lines). Also shown are time series of the AMV index (blue line, with sign reversed), the IPO index (green, multiplied by 0.5) and model-simulated ensemble-mean SO-mean SST changes resulting from observed SST variability in the eastern equatorial Pacific (purple). The model-simulated response to observed SST variability in the eastern equatorial Pacific is estimated by subtracting SST changes in coupled historical experiments with IPSL-CM6A-LR from those obtained in pacemaker experiments in which the observed SST anomalies in the eastern equatorial Pacific were assimilated under the same forcing as in the historical experiments. b, Temporal correlation of detrended ERSST annual-mean SST change at each grid point with the corresponding SO-mean change. c, SST trends over 1979−2014 that are linearly congruent with the observed IPO and AMV trends over 1979−2014. The congruent trends are estimated by multiplying the observed IPO trend by the regression coefficient for the IPO at each grid point (from a multiple linear regression of detrended SST anomalies against the IPO and AMV indices), doing the same for the AMV, and summing the two products. d, Same as in c, but with the regression coefficients derived from multimodel pre-industrial control runs. Stippling indicates statistical significance of the correlation coefficients at the 95% confidence level in b and regions where the multimodel mean trend exceeds two standard deviations of the trend across the models in d. The forced response is unlikely to be linear over time, in particular for SIE 52. Thus, we also identify the internal variability component by subtracting, for each grid point and year, the simulated ensemble mean of annual-mean SST anomalies over 1950−2020. Despite intermodel discrepancies in the forced response, especially in anthropogenic aerosol–cloud interactions 53, the resulting unforced component of SST changes in the SO is highly correlated with both the IPO and AMV (Extended Data Fig. 3c–h). To examine whether similar relationships hold in climate models, regression coefficients of SST and sea-ice concentration changes against the AMV and IPO indices are computed for each model using its pre-industrial control-run output. The regression slopes against the AMV index exhibit a substantial intermodel discrepancy in spatial pattern and sign (Extended Data Figs. 6 and 7, left panels), implying that the AMV signal may not be robust in Antarctic sea ice 10. By contrast, all models reasonably depict the connection of SST and sea-ice concentration in the Pacific sector to the IPO 10, 12, 36, 37 (Extended Data Figs. 6 and 7, right panels). However, unlike in the observations (Extended Data Fig. 3b), the IPO regression coefficients over the Pacific sector of the SO are noticeably smaller than those over the central-to-eastern tropical Pacific in the simulations (Extended Data Fig. 6, right panels). In addition, the IPO tends to exert a weaker influence over the SO, particularly over the eastern Pacific sector, in the simulations. This model–observation mismatch in the IPO signal could be caused by model biases such as an excessive cold tongue in the equatorial Pacific.
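The grid-point detrending and the congruent-trend construction described in the Fig. 4 caption above can be sketched as follows in Python; the array names, shapes and regression setup are illustrative assumptions rather than the authors' code.

import numpy as np

def detrend(y):
    # Remove the least-squares linear trend from a 1-D annual series,
    # leaving an estimate of the internal-variability component.
    t = np.arange(y.size)
    slope, intercept = np.polyfit(t, y, 1)
    return y - (slope * t + intercept)

def congruent_trend(sst, ipo, amv, ipo_trend_obs, amv_trend_obs):
    # SST trend at one grid point that is linearly congruent with the
    # observed IPO and AMV trends: regress the detrended SST anomalies on
    # both indices, then scale each coefficient by the observed index trend.
    X = np.column_stack([ipo, amv, np.ones_like(ipo)])
    beta_ipo, beta_amv, _ = np.linalg.lstsq(X, sst, rcond=None)[0]
    return beta_ipo * ipo_trend_obs + beta_amv * amv_trend_obs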
To assess these discrepancies more quantitatively, SST trends congruent with the observed IPO and AMV trends over 1979−2014 (Methods) are computed using the multiple linear regression coefficients from both observations and model simulations under pre-industrial conditions (Fig. 4c,d). Cooling is pronounced in the Pacific sector of the SO for both observations and simulations, but the IPO- and AMV-induced SO-mean cooling is distinctly weaker (~70%) in the simulations, with large intermodel spread (Extended Data Fig. 8). A similar model–observation discrepancy is found in the congruent sea-ice concentration trends (Extended Data Fig. 9). These discrepancies imply potential model deficiencies in representing IPO/AMV-linked teleconnection processes. Given that this period coincides with AMV and IPO phase transitions (Fig. 4a), we further investigate, despite these potential model deficiencies, whether the AMV and IPO phase transitions can explain part of the observed changes using output from idealized coupled-model SST forcing experiments (Methods). In agreement with previous work on how the AMV and IPO are linked to Antarctic sea-ice changes 36, 37, 39, 40, a negative-to-positive phase transition of the AMV leads to modest sea-ice expansion and surface cooling in the West Pacific sector, while sea-ice decline and surface ocean warming in the Amundsen and Bellingshausen seas are associated with an intensification of the Amundsen Sea Low (Fig. 5). We note, however, that these AMV-induced changes are not robust and are not large enough to fully explain the observed 1979–2014 trend. Although the spatial pattern is noticeably different from observations, a positive-to-negative phase transition of the IPO results in an overall sea-ice expansion and surface cooling and a marked deepening of the Amundsen Sea Low (Fig. 5d–f), which has been linked to atmospheric Rossby waves emanating from the tropical Pacific 32, 35, 36, 37, 38. In addition, a strengthening of the westerlies at ~60° S may lead to short-term cooling and sea-ice expansion due to the equatorward Ekman transport of cold surface water (Fig. 5f) during cold seasons 46, 54, while the long-term response would be warming and sea-ice reduction due to the upwelling of warm and salty Circumpolar Deep Water 46. Fig. 5: Influence of internal variability in the Atlantic and Pacific on sea ice, SST and circulation in coupled-model simulations. a, Response of sea-ice concentration to a negative-to-positive phase transition of the AMV in idealized SST restoring experiments with IPSL-CM6A-LR. The response is computed as the difference between the positive- and negative-phase experiments, with each experiment's annual-mean sea-ice concentration change averaged over a 10-year period and taken relative to the corresponding control experiment; in the negative-phase experiment, North Atlantic SSTs are restored to the negative AMV anomaly superimposed on the model control-run climatology, whereas in the control experiment they are restored to the model control-run climatology alone. b, Same as in a, but for the SST response. c, Same as in a, but for sea-level pressure (SLP, shading), surface winds (vectors) and 300 hPa geopotential height (contours) responses. d–f, Same as in a–c, but for responses to a positive-to-negative phase transition of the IPO. In a–f, stippling denotes regions where the change is statistically significant at the 95% confidence level.
In c and f, surface wind and 300 hPa geopotential height changes are shown only over regions where the change is statistically significant at the 95% confidence level. To further illustrate that internal climate variability is responsible, in part, for the opposing SST/SIE trends between 1950−1978 and 1979−2014, we also analysed coupled-model pacemaker experiments. Although year-to-year fluctuations are substantial and the long-term trend is weak (and the impact of equatorial Pacific SSTs on Antarctic sea-ice trends could be model dependent 27), these model simulations broadly reproduce the opposing SST/SIE trends between the two periods (Fig. 4a and Extended Data Fig. 10). Summary and discussion In this study, we provide compelling evidence that the observed Antarctic sea-ice expansion over 1979−2014 occurred, in large part, as a result of internal variability, linked to the IPO and/or AMV, temporarily overpowering the forced response. By contrast, tropical teleconnections to the SO are, in general, distinctly weaker in model simulations. This implies that skilful near-term decadal prediction of Antarctic sea-ice change should, to some degree, be contingent on improving the representation of processes controlling internal variability 26. Since model biases in ocean stratification and the westerlies, among many others, can affect both internal variability and the forced response, correcting such biases is a prerequisite for improving internal variability processes. The phase of the IPO shifted from negative to positive around 2015. If the IPO were truly one of the major factors governing Antarctic sea-ice variability, one would expect the IPO-induced SIE change over recent years to be opposite in direction to that over 1979−2014. Consistent with this conjecture, recent years have witnessed a rapid decline of Antarctic sea ice (Fig. 1c), and modelling studies have demonstrated that the recent turnaround is linked in part to the IPO phase shift 36, 42. The failure of models to reproduce the observed Antarctic sea-ice expansion is attributed, in large part, to their weak representation of multidecadal internal variability in the SO. However, considering the non-negligible discrepancy with observations in wind- and SST-nudged simulations 26, as well as model biases (including stronger global warming 9) and model deficiencies, one should not infer that the observed sea-ice expansion can be explained exclusively by IPO/AMV-linked teleconnections, as other factors such as increased freshwater fluxes 14, 15, 16, 17, 18, 19, 20, 21 and stratospheric ozone depletion 22 might also play important roles. In particular, the model–observation discrepancy could be caused entirely by missing freshwater fluxes in model simulations. However, the associated uncertainties are enormous due to the absence of continuous long-term in situ observations of sub-ice-shelf melting and of the phasing of freshwater fluxes, although remote-sensing products can provide estimates of these quantities 55. Moreover, the Antarctic sea-ice response to freshwater forcing is highly sensitive to the model and implementation method used, as reflected in the pronounced inconsistency among previous modelling studies 14, 15, 16, 17, 18, 19, 20, 21.
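Returning to the idealized AMV/IPO restoring experiments (Fig. 5), the phase-transition response reduces to a difference of control-relative ensemble means; the sketch below shows one plausible reading of that computation and the accompanying significance test, with all arrays as placeholders rather than the authors' data.

import numpy as np
from scipy import stats

def phase_transition_response(pos, neg, ctrl):
    # pos, neg, ctrl: one value per ensemble member, each a 10-year-mean
    # anomaly (e.g., sea-ice concentration at one grid point).
    pos_anom = pos - ctrl.mean()              # positive-phase change vs control
    neg_anom = neg - ctrl.mean()              # negative-phase change vs control
    response = pos_anom.mean() - neg_anom.mean()
    # Two-sided Student's t test on the two anomaly ensembles (95% level);
    # this is one plausible implementation of the test described in Methods.
    _, pval = stats.ttest_ind(pos_anom, neg_anom)
    return response, pval < 0.05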
Given the strong seasonality and regional differences in the Antarctic sea-ice trend 36, 54, further investigation that accounts for all these processes together is required to fully understand Antarctic sea-ice changes and variability; this will be contingent on sustaining a multidecadal, multiplatform observing system and resolving existing issues in model simulations. Methods Observational datasets and model simulation output The NSIDC Sea Ice Index data (version 3, dataset ID G02135), derived from the Nimbus-7 Scanning Multichannel Microwave Radiometer (SMMR) and Defense Meteorological Satellite Program Special Sensor Microwave/Imager (SSM/I) and Special Sensor Microwave Imager/Sounder (SSMIS) passive microwave brightness temperatures 56, are used to analyse observed changes in Antarctic SIE, defined as the total area of pixels with sea-ice concentration greater than 15%. The NSIDC G02135 SIE data are available beginning in November 1978. To examine SIE changes before the period of continuous satellite monitoring, we also used the NSIDC Electrically Scanning Microwave Radiometer–SMMR–SSMI merged SIE dataset (dataset ID NSIDC-0192) 57 over the period 1973–2002, the National Oceanic and Atmospheric Administration (NOAA)/National Meteorological Center/Climate Analyses Center Arctic and Antarctic Monthly Sea Ice Extent digitized from weekly operational sea-ice charts (version 1, dataset ID G00917) 58 over the period 1973–1990 and the SIE for September 1964 from the Nimbus-1 satellite (19.7 × 10⁶ km²) (ref. 59). According to ref. 59, the uncertainty range of the Nimbus-1 SIE for September 1964 is 18.9 to 20.4 × 10⁶ km², implying that the SIE in 1964 was larger than in any September measurement since 1979 except for 2013 and 2014. As these products are not intercalibrated, for NSIDC-0192 and G00917 the mean bias relative to NSIDC G02135 is computed for each calendar month over the respective overlapping period and then removed. In the case of the Nimbus-1 SIE for September 1964, potential biases were not adjusted because there is no overlapping period with other products. Although the mean bias may not be constant in time over the overlapping period, the pre-1979 SIE variability shown in Fig. 1c is consistent with previous studies 60, 61, 62. In addition to these NSIDC products, sea-ice concentrations from the Hadley Centre Sea Ice and Sea Surface Temperature (HadISST) dataset 63 are used to determine the spatial pattern of the sea-ice concentration trend. Because of the close relationship between SIE and SST changes in the SO 3, 4, 6, we also examined multidecadal variability in SO (south of 50° S) SSTs. To account for observational uncertainties due to insufficient in situ measurements in the SO before the 1980s 3, we use multiple reconstructed/reanalysis datasets: NOAA's Extended Reconstructed Sea Surface Temperature version 5 (ERSST v.5) 64 over the period 1950–2020, HadISST 63 over the period 1950–2020, Centennial in situ Observation-Based Estimates (COBE) SST2 65 over the period 1950–2019 and the European Centre for Medium-Range Weather Forecasts Reanalysis v.5 (ERA5) 66 over the period 1950–2020. Despite potential SST uncertainties over the pre-satellite period, the SST variability is broadly consistent with other independent in situ observations 3. In addition, although there is large spread in SO SSTs among the reconstructed/reanalysis datasets (Figs. 1d and 4a), as shown in Supplementary Fig.
1, the spread decreases substantially if the SST variability is determined using only the datasets (that is, ERSST and ERA5) that are consistent with a quality-checked, bias-adjusted, non-interpolated dataset (HadSST4 67). To represent the time evolution of the global-mean, annual-mean surface temperature, we use the average of four datasets: HadCRUT5.0.1.0 68, GISTEMPv.4 69, 70, Berkeley Earth 71 and NOAA globaltemp v.5.0.0 72. These observational datasets are listed in Supplementary Table 1. The observation-based changes in SIE, SSTs and global-mean temperature are compared with simulated changes from multiple initial-condition large ensembles conducted with Earth system models under historical forcing (and representative concentration pathway (RCP) 8.5 forcing over the period 2006−2014 for some models), in which ensemble members are forced by the same external forcing but start from slightly different initial conditions. Since the imposed external forcing is identical across ensemble members of a given model, the ensemble-mean change can be regarded as the forced response to the imposed external forcing. To reduce uncertainties in the estimated forced response, CMIP phase 5 (CMIP5)- and CMIP6-class models that have more than 15 ensemble members over the period 1979–2014 are analysed in this study: two CMIP5-class 73 models (the CanESM2 Large Ensemble 74 and the Community Earth System Model (CESM) version 1 (CESM1) Large Ensemble 75) and seven CMIP6-class 76 models (ACCESS-ESM1-5, CanESM5 (with two physics options available), the CESM2 Large Ensemble 77, EC-Earth3, IPSL-CM6A-LR, NorCPM1 and UKESM1-0-LL). The number of ensemble members and forcing information are given in Supplementary Table 2. As shown in Supplementary Fig. 2, the mean seasonal cycle of Antarctic total SIE over the period 1979–2014, characterized by a maximum around September and a minimum around February, is broadly consistent with that from NSIDC G02135, although some models, such as EC-Earth3, exhibit noticeable discrepancies in amplitude. We also examined the characteristics of unforced variability of sea ice and SSTs using pre-industrial control simulation output (Supplementary Table 2). Previous studies have suggested that the Antarctic climate can be affected by climate variability in the Atlantic and Pacific via atmospheric teleconnections 32, 33, 34, 35, 36, 37, 38. To further enhance our understanding of the potential linkage of sea-ice and SST changes in the SO to Atlantic and Pacific climate variability, we analysed coupled-model simulation output from the idealized SST forcing experiments conducted as part of the CMIP6 Decadal Climate Prediction Project (DCPP) 78. The DCPP SST forcing experiments analysed in this study are designed to investigate the response of coupled models to the patterns of the AMV and IPO by restoring North Atlantic and Pacific SSTs, respectively, to both positive and negative anomaly patterns of the AMV and IPO superimposed on the model control-run climatology over a 10-year period. In addition to the AMV and IPO experiments, we analysed output from pacemaker experiments in which the observed SST anomalies in the eastern equatorial Pacific were assimilated over the period 1950−2014. As the pacemaker experiments were forced with the same external forcings as the historical experiments, the deviations from the corresponding historical experiments largely represent changes due to unforced variability of the eastern equatorial Pacific SSTs.
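The SIE definition and the per-calendar-month bias adjustment described above translate directly into code; a minimal sketch follows, with the grid and variable names as illustrative assumptions rather than the authors' implementation.

import numpy as np

def sea_ice_extent(sic, cell_area, threshold=0.15):
    # SIE: total area of grid cells whose sea-ice concentration exceeds 15%.
    return np.where(sic > threshold, cell_area, 0.0).sum()

def monthly_bias(product, reference, months):
    # Mean product-minus-reference offset for each calendar month, computed
    # over the overlapping period (all arrays restricted to the overlap).
    return {m: (product[months == m] - reference[months == m]).mean()
            for m in range(1, 13)}

def remove_monthly_bias(series, months, bias):
    # Subtract the per-month bias from the full (not just overlapping) record.
    return series - np.array([bias[m] for m in months])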
More detailed information on these DCPP experiments can be found in ref. 78. We focus on simulation output from IPSL-CM6A-LR, as sea-ice fields are available for all of these DCPP experiments. The number of ensemble members is 25 for the AMV experiments, 10 for the IPO experiments and 10 for the pacemaker experiments. AMV and IPO An SST-based AMV (also referred to as AMO) index, defined as the low-pass filtered, area-averaged North Atlantic (0–60° N, 80° W–0°) SST anomaly, is computed using the ERSST version 5 dataset. A positive AMV phase is characterized by positive SST anomalies over most of the North Atlantic Ocean. Instead of detrending the SST anomalies to remove the climate change signal, following ref. 79, we subtracted the global-mean values from the corresponding SST anomalies at each grid point over the North Atlantic. This method is also applied to the pre-industrial control simulation SST fields from CMIP5 and CMIP6 models. Although ref. 79 devised this method to avoid errors inherent to the detrending method, the AMV index computed in this way is also likely to include errors, as the externally forced change is not spatially uniform over the globe. On the basis of the close connection of the AMV to the Atlantic Meridional Overturning Circulation, the AMV has been regarded as internally generated, unforced climate variability 80, 81. However, North Atlantic climate variability might be driven in part by changes in external forcing agents such as sulfate aerosols 82. Following ref. 83, the IPO index is computed as the low-pass filtered difference between the SST anomaly averaged over the central-to-eastern equatorial Pacific (10° S–10° N, 170° E–90° W) and the average of the SST anomalies over the northwest Pacific (25–45° N, 140° E–145° W) and the southwest Pacific (50–15° S, 150° E–160° W). This method is applied to both the ERSST version 5 dataset and the CMIP5/CMIP6 pre-industrial control simulation SST fields. A positive IPO phase is characterized by positive SST anomalies over the central-to-eastern equatorial Pacific and negative SST anomalies over the northwest and southwest Pacific. It is noted that the multidecadal variability of the IPO is not independent of the cumulative impact of the El Niño/Southern Oscillation (ENSO) 37 or of the Southern Annular Mode (SAM), owing to the rectification effect of ENSO together with the apparent linkage of La Niña events with a positive SAM. The time evolution of both the unfiltered and low-pass filtered AMV and IPO indices over 1950–2020 is presented in Supplementary Fig. 3. To determine whether the observed AMV and IPO trends over the period 1979–2014 fall within the range simulated by climate models, histograms of model-simulated AMV and IPO trends over overlapping 36-year periods are computed using pre-industrial control runs. A comparison indicates that although the observed trends over the period 1979–2014 lie within the range simulated by climate models, such trends occur only rarely (Supplementary Fig. 4). This implies that even if multidecadal variability linked to the IPO and/or AMV were accurately represented in climate models, the observed sea-ice and SST changes in the SO might not be captured by model simulations under historical forcing. SST trends congruent with observed trends in the IPO and AMV SST trends that are congruent with the observed trends in the IPO and AMV are computed over the period 1979–2014.
First, multiple linear regressions are conducted at each grid point against both the IPO and AMV, with SST anomalies as the dependent variable over the period 1950–2020. The resulting regression coefficients for IPO and AMV are then, respectively, multiplied by the observed IPO and AMV trends over the period 1979–2014, with the sum of the multiplicative products representing the congruent trends. The regression coefficients derived from climate model simulations under pre-industrial control conditions are also used to compute the SST trends congruent to the observed IPO and AMV trends. Statistical information We used the standard least-squares linear regression approach to compute correlation coefficients, regression coefficients and trends. Statistical significance of the computed correlation coefficients, regression coefficients and trends is determined using a two-sided Student’s t test at the 95% confidence level with reduced degrees of freedom to account for autocorrelation in a given time series. In the case of multimodel mean trends or ensemble-mean trends, the significance is determined by checking whether the multimodel mean or ensemble-mean trend exceeds two standard deviations of the trend across the models or the ensemble members. In Fig. 5 , the significance of the response to a phase transition of AMV or IPO is determined using the Student’s t test at the 95% confidence level by comparing the ensemble-mean response in the positive phase SST pattern experiment relative to the corresponding control experiment with that for the counterpart experiment relative to the control experiment. Data Availability The NSIDC data are available at , the ERSST version 5 dataset at , the HadISST dataset at , the COBE SST2 dataset at , the ERA5 dataset at , the HadSST4 dataset at , the HadCRUT5.0.1.0 dataset at , GISTEMPv.4 at , NOAA globaltemp v.5.0.0 at , the Berkeley Earth dataset at , the CanESM2 Large Ensemble output at , the CESM1 Large Ensemble output at , the CESM2 Large Ensemble output at and the CMIP6 simulation output at . Code Availability The code used to generate the figures in this study is freely available at (ref. 84 ).
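As a supplement to the Methods above, the AMV and IPO index definitions can be sketched as follows, assuming the regional SST anomaly series have already been area-averaged; the running-mean low-pass filter is an illustrative choice, since the text does not specify the filter used.

import numpy as np
from scipy.ndimage import uniform_filter1d

def lowpass(x, window=13):
    # Simple running-mean low-pass filter (window in years; illustrative).
    return uniform_filter1d(x, size=window, mode="nearest")

def amv_index(sst_north_atlantic, sst_global):
    # AMV: North Atlantic (0-60 N, 80 W-0) mean SST anomaly with the
    # global-mean anomaly subtracted (following ref. 79), low-pass filtered.
    return lowpass(sst_north_atlantic - sst_global)

def ipo_index(sst_ceq_pacific, sst_nw_pacific, sst_sw_pacific):
    # IPO tripole (following ref. 83): central-to-eastern equatorial Pacific
    # anomaly minus the average of the northwest- and southwest-Pacific boxes.
    return lowpass(sst_ceq_pacific - 0.5 * (sst_nw_pacific + sst_sw_pacific))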
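Similarly, the trend-significance test described under "Statistical information" (a two-sided Student's t test with degrees of freedom reduced for autocorrelation) can be sketched as below; the lag-1 effective-sample-size adjustment is a standard choice and an assumption here, as the paper does not spell out the exact formula.

import numpy as np
from scipy import stats

def trend_significance(y, alpha=0.05):
    # Least-squares trend with a two-sided t test at the (1 - alpha) level,
    # using an effective sample size reduced for lag-1 autocorrelation.
    n = y.size
    t = np.arange(n)
    slope, intercept, _, _, stderr = stats.linregress(t, y)
    resid = y - (slope * t + intercept)
    r1 = np.corrcoef(resid[:-1], resid[1:])[0, 1]     # lag-1 autocorrelation
    n_eff = n * (1.0 - r1) / (1.0 + r1)               # effective sample size
    df = max(n_eff - 2.0, 1.0)
    adj_stderr = stderr * np.sqrt((n - 2.0) / df)     # inflate SE for reduced df
    pval = 2.0 * stats.t.sf(abs(slope / adj_stderr), df)
    return slope, pval < alpha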
Antarctic sea ice has expanded over the period of continuous satellite monitoring, which seemingly contradicts ongoing global warming resulting from increasing concentrations of greenhouse gases. In a study published in Nature Climate Change, an international team of scientists from the University of Hawaiʻi at Mānoa, the National Oceanic and Atmospheric Administration (NOAA) and South Korea shows that a multi-decadal swing of tropical sea surface temperatures, and its ability to change the atmospheric circulation across large distances, is in large part responsible for the observed sea-ice expansion since the late 1970s. Sea ice, which covers a substantial portion of the ocean surface in the polar regions, plays an important role in controlling global temperatures by reflecting incoming solar radiation. Decreases in sea-ice coverage, therefore, are expected to amplify greenhouse gas-induced global warming. Changes in sea ice also affect energy exchanges between the ocean and atmosphere, carbon uptake by the ocean, ecosystems and the thermohaline oceanic circulation. It is therefore of great importance to monitor long-term changes in global sea ice and to ensure that the physical processes that lead to those changes are accurately depicted in climate prediction models. Difference between computer model simulations and observations Continuous satellite observations, which started at the end of the 1970s, indicate marked decreases in Arctic sea ice over the satellite era, consistent with the global warming trend. In contrast, small but statistically significant positive trends have been observed in the Southern Hemisphere, especially over the period 1979–2014. Furthermore, while climate models are able to broadly reproduce the observed Arctic sea-ice decreases, the majority of them are not able to capture the Antarctic sea-ice expansion over the period 1979–2014. "The observed Antarctic sea-ice expansion and model-observation discrepancy have perplexed climate scientists for more than a decade," said lead author Eui-Seok Chung, from the Korea Polar Research Institute. "Various hypotheses, such as increased freshwater fluxes due to sub-ice shelf melting, atmospheric and oceanic circulation changes associated with human-induced stratospheric ozone depletion, and tropical teleconnections, have been proposed to explain the observed Antarctic sea-ice expansion, but the issue has remained one of the biggest challenges in climate science," said professor Axel Timmermann, director of the IBS Center for Climate Physics at Pusan National University and co-author of this study. The observed Antarctic sea-ice changes are not only driven by increasing concentrations of greenhouse gases and/or stratospheric ozone depletion, but are also linked to natural variability of the climate system, which occurs without any direct connection to human activities. To determine the main causes of the observed Antarctic sea-ice expansion and the model-observation discrepancy, the scientists turned their attention to a longer record of Southern Ocean sea surface temperatures as a proxy for Antarctic sea ice and conducted comprehensive analyses of multi-model large-ensemble climate simulations. Mismatch due to natural variability and regional model biases Over a certain period of time, Southern Ocean cooling and associated atmospheric and oceanic circulation changes linked to natural variability in the tropics may temporarily outweigh the opposing human-induced changes, resulting in temporary sea-ice expansion.
However, natural variability alone does not explain the model-observation discrepancy. Malte Stuecker, co-author and assistant professor of oceanography in the UH Mānoa School of Ocean and Earth Science and Technology (SOEST), explained, "Southern Ocean multi-decadal variability is also closely linked to tropical natural variability in climate model simulations, but the linkages are substantially weaker than in the observations. Thus, human-induced ocean surface warming dominates in the Southern Ocean in model simulations."
10.1038/s41558-022-01339-z
Medicine
Advancing the search for antibodies to treat Alzheimer's disease
Ming Jin et al. An in vitro paradigm to assess potential anti-Aβ antibodies for Alzheimer's disease, Nature Communications (2018). DOI: 10.1038/s41467-018-05068-w Journal information: Nature Communications
http://dx.doi.org/10.1038/s41467-018-05068-w
https://medicalxpress.com/news/2018-07-advancing-antibodies-alzheimer-disease.html
Abstract Although the amyloid β-protein (Aβ) is believed to play an initiating role in Alzheimer's disease (AD), the molecular characteristics of the key pathogenic Aβ forms are not well understood. As a result, it has proved difficult to identify optimal agents that target disease-relevant forms of Aβ. Here, we combined the use of Aβ-rich aqueous extracts of brain samples from AD patients as a source of human Aβ and live-cell imaging of iPSC-derived human neurons to develop a bioassay capable of quantifying the relative protective effects of multiple anti-Aβ antibodies. We report the characterization of 1C22, an aggregate-preferring murine anti-Aβ antibody, which better protects against forms of Aβ oligomers that are toxic to neurites than do the murine precursors of the clinical immunotherapeutics bapineuzumab and solanezumab. These results suggest that further examination of 1C22 is warranted, and that this bioassay may be useful as a primary screen to identify yet more potent anti-Aβ therapeutics. Introduction Approaches using monoclonal antibodies to target the amyloid β-protein (Aβ) constitute the largest and most advanced therapeutic effort to treat Alzheimer's disease (AD) 1, 2, 3. Despite generally good outcomes in preclinical mouse models, anti-Aβ immunotherapy has yielded limited success in humans 2, 3. Explanations offered to account for the poor translation of pre-clinical lead antibodies into human therapies include imperfect trial design, intervention at a disease stage when there is already significant neural loss, and inappropriate target selectivity of the antibodies used 2, 4, 5. When assessing the efficacy of any therapeutic, there are several issues to consider besides target engagement, yet the specific targeting of the most cytotoxic forms of Aβ is by far the most critical requirement. Synthetic Aβ can exist in vitro in a bewildering array of assemblies that differ in structure and size 6, but it remains unclear whether the assemblies that can be formed in vitro ever exist in the human brain. In striking contrast to the hundreds of studies that have investigated the aggregation and toxicity of synthetic Aβ, only ~20 studies have focused on aqueously soluble Aβ species extracted directly from human brain. These studies can be divided into three categories: efforts to identify the primary sequence and/or assembly forms that constitute water-soluble Aβ, whether bioactive or not 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21; attempts to investigate the cytotoxic activity 22, 23, 24, 25, 26, 27 or seeding activity 28, 29 of crude Aβ-containing extracts; and efforts to study the assembly size of the neurotoxic components of Aβ-rich brain extracts 30, 31, 32, 33, 34. Collectively, these studies suggest that Aβ in aqueous extracts of AD brain exists as a mixture of different-sized assemblies 10, 12, 13, 14, 21, 30 and that one or more of these are extremely potent toxins 22, 23, 24, 25, 26, 30, 31, 32, 33. Indeed, in some experiments, human brain-derived Aβ assemblies were found to be many orders of magnitude more potent than synthetic Aβ peptides 24, 32. Recently, we have shown that only a fraction of AD brain-derived Aβ has disease-relevant bioactivity 34. There are now at least 9 anti-Aβ monoclonal antibodies (mAbs) at various stages of clinical investigation 35, five of which are believed to preferentially target Aβ oligomers 25, 36, 37, 38, 39.
Three of these advanced mAbs—crenezumab 25, BAN2401 14, 40 and SAR228810 38—were selected against synthetic Aβ, whereas aducanumab was selected based on immunohistochemical detection of AD amyloid plaques 37, 41. The rationale underlying the use of putatively oligomer-specific mAbs is based on the hypothesis that both Aβ monomers and insoluble fibrillar plaques are relatively innocuous; therefore, an ideal antibody would react weakly with monomers and mature fibrils, but strongly with diffusible oligomers. A key requirement for all CNS immunotherapies is delivering sufficient mAb to the brain. Normally, only ~0.1% of circulating antibody arrives in the brain at steady state 42, so it is essential that the antibody that does enter the brain is not lost on superfluous targets. One explanation for the disappointing clinical efficacy of anti-Aβ antibodies in human trials is that they target a broad range of Aβ species, including many relatively inactive forms 34, and thus cannot attain the necessary therapeutic concentration against bioactive forms. Similarly, in certain studies sub-optimal antibody levels were used to avoid side-effects such as amyloid-related imaging abnormalities (ARIA) 43. Currently, there is no information in the public domain about the relative ability of candidate therapeutic antibodies to recognize toxic forms of Aβ in human brain, and the properties of optimal therapeutic Aβ antibodies remain ill-defined. To address this central problem, we generated an aggregate-preferring mAb, called 1C22, which shares many of the characteristics of the anti-oligomer mAbs in clinical development 25, 36, 40, 44, 45, and we compared its binding properties to those of the murine precursors of solanezumab (mAb 266) and bapineuzumab (mAb 3D6). Solanezumab continues to be tested in two secondary prevention trials 46, 47, and an Fc-modified form of bapineuzumab, called AAB-003, is being assessed for treating mild AD 48. We found that both 3D6 and 266 bound tightly to monomers, whereas 1C22 bound monomers only weakly, and that 1C22 preferentially bound protofibrils (PFs) of Aβ. PFs comprise a heterogeneous mixture of prefibrillar assemblies which by EM appear as short flexible rods with an average width of 5.8 ± 0.2 nm and lengths <300 nm 49, 50. Having established that 1C22, 3D6 and 266 possess distinct binding preferences, we examined the most important property of any potential anti-Aβ immunotherapeutic: its ability to neutralize neurotoxic Aβ. For this purpose, we developed a sensitive medium-throughput assay based on the application of Aβ-containing extracts from AD brain to iPSC-derived human neurons. The Aβ-containing extracts induced a concentration- and time-dependent degeneration of neurites that was attenuated by each of the 3 anti-Aβ mAbs. 1C22 and 3D6 produced effective dose-dependent protection against bioactive human Aβ, with apparent IC50s of ~0.8 and 1.1 ng/ml, respectively. However, the protection afforded by 266 was so modest that it was not possible to estimate an IC50. Thus, the paradigm described here can quantitatively differentiate mAbs based on their ability to neutralize human neurotoxic Aβ. These results recommend this paradigm as a primary screen to identify even more potent anti-Aβ therapeutics, and they suggest that further examination of 1C22 is warranted. Results The study of Aβ aggregation and antibodies that bind to Aβ aggregates is complicated by the fact that multiple Aβ species exist in a dynamic equilibrium 6, 51.
In order to produce soluble aggregates of Aβ free of both Aβ monomer and fibrils, we used a covalently stabilized synthetic Aβ dimer, [Aβ1-40S26C]2, that readily assembles to form kinetically trapped, soluble protofibrils (PFs) (Supplementary Figure 1) 49, 50. Aggregate-free wild-type monomers were isolated by size exclusion chromatography (Supplementary Figure 1). 1C22 was generated by immunizing mice with [Aβ1-40S26C]2 and using a four-step screen to identify antibodies that preferentially recognize Aβ aggregates (Supplementary Figure 2). From an initial pool of ~7000 hybridomas, we selected 1C22. Thereafter, we compared the ability of 1C22, 3D6 and 266 to bind to synthetic Aβ monomer and kinetically trapped Aβ PFs. 1C22 preferentially binds to Aβ aggregates Initial experiments focused on the binding of mAbs to plate-immobilized synthetic Aβ. Monomers and PFs were immobilized at a constant concentration (200 ng/well), and 1C22, 3D6 and 266 were diluted across the plates. Each mAb produced a sigmoidal titer curve for both Aβ monomers and PFs (Fig. 1a). 3D6 exhibited comparable binding to both Aβ monomers and PFs, with half-maximal binding (EC50) achieved at antibody concentrations of ~40 and ~20 pM, respectively. In contrast, 266 exhibited significantly stronger binding to monomers (EC50 ~30 pM) than to PFs (EC50 ~420 pM). 1C22 showed the reverse preference, binding more tightly to PFs (EC50 ~6 pM) than to monomers (EC50 ~20 pM). Thus, with regard to their ability to bind surface-immobilized Aβ, there was a clear difference between the 3 mAbs, with 1C22 showing tighter binding to PFs, 266 binding better to monomers, and 3D6 exhibiting only a marginal preference for PFs. Fig. 1 MAb 1C22 binds to PFs better than monomer. a Aβ1-40 monomer (Mon) and [Aβ1-40S26C]2 protofibrils (PFs) were immobilized at 200 ng/well on microtiter plates and mAbs 1C22, 3D6 and 266 diluted across the plates. Antibody binding curves were sigmoidally fit and used to determine the concentration of antibody that gave half-maximal binding (EC50). Values in the table are in pM. b Aβ competition curves for mAbs binding to plate-immobilized Aβ monomers in the presence or absence of solution-phase Mon or PF competitors were sigmoidally fit and used to determine the concentration of competitor that produced half-maximal inhibition of mAb binding (IC50). Values in the table are in μg/ml. In both a and b, values are the average ± SD of each condition analyzed in triplicate. When error bars are not visible, they are smaller than the size of the symbol. Because we do not know the molecular weight of protofibrils, the concentration of Aβ is given in μg/ml. In contrast, since the molecular weight of IgG is known, we provide mAb concentrations in molar amounts. All Aβ concentrations are based on monomer molar equivalents and results are representative of at least 3 independent experiments. To explore the relative reactivity of mAbs with Aβ in solution, we modified our direct ELISA to measure binding to plate-immobilized monomer in the presence or absence of solution-phase Aβ conformers. By holding the concentration of mAb and plate-immobilized Aβ monomer constant and adding increasing amounts of competing soluble monomer or soluble PFs, we estimated the relative preference of the mAbs for binding solution-phase Aβ conformers. For all mAbs, addition of solution-phase Aβ caused a concentration-dependent inhibition of binding to plate-immobilized monomer (Fig. 1b).
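The sigmoidal fits used throughout to extract EC50 and IC50 values are, in effect, four-parameter logistic fits; the sketch below shows the idea with placeholder titer data, since the exact fitting routine used by the authors is not specified in the text.

import numpy as np
from scipy.optimize import curve_fit

def four_pl(conc, bottom, top, ec50, hill):
    # Four-parameter logistic dose-response curve.
    return bottom + (top - bottom) / (1.0 + (ec50 / conc) ** hill)

# Placeholder titer data: antibody concentration (pM) vs ELISA signal (OD).
conc = np.array([0.4, 1.2, 3.7, 11.0, 33.0, 100.0, 300.0, 900.0])
signal = np.array([0.05, 0.12, 0.35, 0.78, 1.35, 1.72, 1.88, 1.92])

params, _ = curve_fit(four_pl, conc, signal,
                      p0=[signal.min(), signal.max(), 20.0, 1.0])
print(f"EC50 ~ {params[2]:.1f} pM")   # concentration giving half-maximal binding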
When 266 was tested, both monomer and PFs caused a similar level of inhibition, with half-maximal inhibition (IC50) achieved at monomer and PF concentrations of ~0.06 and ~0.14 μg/ml, respectively. In contrast, binding of both 1C22 and 3D6 to plate-immobilized Aβ monomer was more effectively competed by solution-phase PFs than by solution-phase monomer. The competition ELISA (Fig. 1b) and direct ELISA (Fig. 1a) results for 1C22 indicate that this antibody has a clear preference for both immobilized and solution-phase PFs. However, for 3D6 and 266 the competition ELISA and direct ELISA yielded somewhat divergent results. 3D6 showed a modest preference for immobilized monomers, but a significant preference for solution-phase PFs. Similarly, 266 showed a strong preference for immobilized monomer, but only a slight preference for solution-phase monomer. Since immobilization of monomer could lead to surface-induced conformational changes, molecular crowding and/or aggregation, we also investigated mAb binding to monomers and PFs when the mAbs were immobilized and the Aβ conformers were in solution. In these experiments, all 3 antibodies exhibited similarly strong binding to soluble PFs, but differed in how well and how much soluble monomer they bound (Fig. 2a). 266 exhibited comparable binding to soluble PFs and monomer, with EC50s of ~4.2 and ~5.1 ng/ml, respectively (Fig. 2a, right panel). 3D6 also exhibited comparable binding to soluble PFs (EC50 ~3.7 ng/ml) and monomer (EC50 ~4.8 ng/ml), although the maximal signal was much greater for PFs than monomer (Fig. 2a, middle panel). Notably, 1C22 again showed by far the weakest interaction with monomer (EC50 ~3082 ng/ml; Fig. 2a, left panel). Based on these findings, 1C22 stands out as the only mAb tested that exhibits strong preferential binding to PFs—whether immobilized or in solution. In contrast, 266 bound immobilized PFs weakly, but bound both PFs and monomer in solution similarly well (Fig. 2a, right panel). We hypothesize that the results in Figs. 1b, 2a are more comparable than those in Fig. 1a because the former measure relative binding to conformers in solution, whereas surface immobilization of Aβ conformers leads to loss of relevant epitopes through conformational changes and/or molecular crowding. Fig. 2 Surface-immobilized 1C22 mAb preferentially binds to PFs in solution. a mAbs 1C22, 3D6 and 266 were immobilized on the wells of microtiter plates and allowed to bind solution-phase protofibrils (PFs) and Aβ monomers (Mon). Antibody binding curves were sigmoidally fit and used to determine the concentration of antibody that gave half-maximal binding, EC50. When error bars are not visible, they are smaller than the size of the symbol. Values in the table are in μg/ml and are the average ± SD of each condition analyzed in triplicate. Antibodies were immobilized on CM5 chips and solution-phase PFs (b) or Mon (c) added. The molar concentration of Aβ monomers and PFs (with respect to Aβ monomer content) used is indicated on each sensorgram. Except for Aβ monomer binding by 1C22, sensorgrams for mAb binding to Aβ monomers were fit to a 1:1 Langmuir binding model. Sensorgrams for 1C22 binding to Aβ monomers were fit by steady-state analysis. The inset in c, panel 1, is a plot of response units (RU) at steady state for Aβ monomers binding to chip-immobilized 1C22.
The apparent binding constants (Kapp) of the mAbs for PFs are: 1C22 <1 nM; 3D6 <1 nM; 266 <1 nM, and the binding constants (KD) for Mon are: 1C22 = 1100 ± 500 nM; 3D6 = 7.9 ± 0.16 nM; and 266 = 2.1 ± 1.8 nM. Like the direct and competition ELISAs, the capture ELISA (with mAbs immobilized) has certain limitations. For instance, epitope access by the detector anti-Aβ antiserum (AW7) may be differentially influenced by the sites at which immobilized 1C22, 266 and 3D6 bind solution-phase conformers. Thus, we also used surface plasmon resonance (SPR), a label-free, real-time technique ideal for directly measuring antibody–antigen binding. When the mAbs were conjugated to SPR chips, all 3 mAbs irreversibly bound PFs presented in solution, with KD(app) values <1 nM (Fig. 2b). Similarly, 266 and 3D6 bound soluble monomer with very high affinity and barely measurable off-rates (Fig. 2c). Since mAbs 266 and 3D6 form very tight complexes with Aβ monomers, it is only possible to determine apparent KD values (2.1 ± 1.8 nM for 266; 7.9 ± 0.16 nM for 3D6; values are means ± SD, with n ≥ 3). Moreover, there is currently no appropriate model to determine the binding avidity of mAbs for PFs. Nevertheless, it is clear from inspection of the SPR sensorgrams for soluble PF and monomer binding that 266 and 3D6 bound far more strongly to PFs because of the relatively slow rate of dissociation of the mAb–PF complexes (Fig. 2c). In contrast, 1C22 binding of monomer had measurable on- and off-rates, with a calculated KD of 1.1 ± 0.5 μM (Fig. 2c, left graph). Overall, these SPR results are in good agreement with those obtained above using ELISA-based methods and indicate that 1C22 and 266 have very different antigen preferences. 1C22, which is reminiscent of BAN2401 45, has only weak affinity for monomer but binds strongly to PFs, whether in solution or immobilized. 266 has very high affinity for monomer, but binds PFs in solution to a similar level as 1C22 (Figs. 1, 2). 3D6 seems intermediate between 1C22 and 266, exhibiting tight binding to monomer in all assays but showing a slight preference for solution-phase PFs (Figs. 1, 2). Avidity drives the preferential binding of 1C22 to PFs A possible explanation for why an antibody may weakly bind Aβ monomer but tightly bind Aβ aggregates (e.g., PFs) derives from the fact that IgGs have 2 identical antigen binding sites, and Aβ aggregates contain multiple identical subunits (monomers). Thus, even though the affinity of a given antigen binding site is the same for both a monomer and an aggregate, because an aggregate contains multiple binding sites in close proximity there is a high probability that the bivalent IgG will be bound at two antigen sites. In this case, when an antibody dissociates from one site on an aggregate, it can more rapidly bind to a nearby site in a manner not possible with individual Aβ monomers. A common way to test for such enhanced binding due to avidity is to compare the binding of a monovalent form of the antibody to that of the intact bivalent IgG 52. Here, we compared binding to plate-immobilized PFs by the intact mAb vs. the Fab of the same mAb. The binding of intact mAbs to PFs was highly similar to that seen in our previous direct ELISA experiments (compare Fig. 3a vs. Fig. 1a). When the Fab of 3D6 or 266 was tested for binding to plate-immobilized PFs, the results were similar to those obtained for the intact IgGs (Fig. 3a, middle and right panels).
In striking contrast, the Fab of 1C22 showed dramatically reduced binding to PFs compared to the intact 1C22 IgG (Fig. 3a, left panel and table). Fig. 3 Bivalency drives 1C22 binding to PFs. a IgG and Fab binding curves for 1C22, 266 and 3D6 against plate-immobilized protofibrils (PFs). EC50 values determined from the sigmoidally fit curves demonstrated that 1C22 IgG had ~130-fold stronger reactivity against PFs than the 1C22 Fab fragment. In contrast, Fab fragments of 3D6 and 266 bound to PFs as strongly as the intact molecules. When error bars are not visible, they are smaller than the size of the symbol. Values in the table are in pM and are the average ± SD of each condition analyzed in triplicate. b Representative sensorgrams for 1C22 IgG (upper panels) and Fab (lower panels) binding to CM5 chip-immobilized PFs (right panels) and monomer (Mon) (left panels) confirm that intact 1C22 binds more tightly to immobilized PFs than 1C22 Fab, whereas intact 1C22 and Fab bind similarly to Aβ monomer. Insets show plots of RU values at steady state for intact 1C22 and 1C22 Fab binding to PFs or Mon. The apparent binding constant of 1C22 IgG for PFs = 0.48 ± 0.002 nM, whereas the binding constant (KD) of 1C22 IgG for Mon = 1.39 ± 0.46 μM, and the KD values for 1C22 Fab binding to PFs and Mon are 0.80 ± 0.88 μM and 1.14 ± 0.49 μM, respectively. For reasons detailed above, SPR has certain advantages over indirect ELISA-based modalities, so we used SPR to further compare binding of intact 1C22 and 1C22 Fab to chip-immobilized Aβ monomers or PFs. Actual and apparent KD values determined from the fitted sensorgrams showed that intact 1C22 IgG bound to PFs ~1000-fold more strongly (Kapp = 0.48 ± 0.002 nM) than its Fab (KD = 800 ± 88 nM) (Fig. 3b, left panels). In contrast, the intact antibody and Fab fragment showed highly similar binding to chip-immobilized monomer, with KD values of 1.39 ± 0.46 μM and 1.14 ± 0.49 μM, respectively (Fig. 3b, right panels). These results indicate that the tight binding 1C22 displays for PFs is largely driven by avidity effects. However, this appears not to be due to repetitive display of a simple short linear epitope. Specifically, when tested for binding to a nested set of short overlapping Aβ peptide fragments, 1C22 showed only marginal binding to the fragments and much greater binding to the intact monomer (Supplementary Figure 3A, left panel, and Supplementary Figure 3B). In contrast, the well-characterized N-terminally directed mAb, 6E10 53, showed excellent binding to N-terminal fragments that contained its known epitope (Supplementary Figure 3A, right panel, and Supplementary Figure 3B). These results suggest that the relative preference of 1C22 for soluble aggregates is in part due to a requirement for an extended or conformational epitope. A new paradigm to assess the potential of anti-Aβ antibodies The studies described above demonstrate that 1C22, 3D6 and 266 differ significantly with regard to their ability to bind synthetic Aβ conformers. However, given that the nature of cytotoxic Aβ in the AD brain is poorly understood, binding studies using synthetic Aβ may not accurately predict the optimal properties of antibodies intended for use in humans. Moreover, the artificial surface-immobilization of antibodies or Aβ species may give rise to avidity effects not replicated in brain, where both Aβ and antibody are in solution.
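For readers unfamiliar with SPR kinetics, the 1:1 Langmuir model referred to above has a simple closed form; the sketch below illustrates it with placeholder rate constants chosen to mimic a high-affinity monomer interaction of the kind reported for 266 (KD = koff/kon). This is an illustration of the model, not the authors' fitting code.

import numpy as np

def langmuir_association(t, conc, rmax, kon, koff):
    # Response during analyte injection for a 1:1 interaction.
    kobs = kon * conc + koff
    req = rmax * kon * conc / kobs            # steady-state response level
    return req * (1.0 - np.exp(-kobs * t))

def langmuir_dissociation(t, r0, koff):
    # Response after the injection ends (buffer flow only); a very small
    # koff yields the nearly flat dissociation phase described in the text.
    return r0 * np.exp(-koff * t)

kon, koff = 1.0e5, 2.1e-4                     # 1/(M*s) and 1/s; placeholders
print(f"KD = {koff / kon * 1e9:.1f} nM")      # 2.1 nM, of the order reported for 266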
Thus, we sought to develop a bioactivity assay using the most disease-relevant form of Aβ, namely Aβ extracted from AD brain tissue, and to apply this material to human neurons in the presence and absence of antibodies. Neuritic dystrophy is a well-accepted feature of AD 54, 55, and previously we showed that Aβ extracted from human AD cerebral cortex can disrupt the microtubule cytoskeleton of primary rat hippocampal neurons and cause time-dependent neuritic degeneration and tau phosphorylation 32. However, the methodology we used was laborious and not suitable for testing large numbers of samples and conditions. Moreover, since we had observed that rat neurons transduced to express human tau were more susceptible to the effects of AD brain-derived Aβ 32, and it has recently been reported that human neurons are uniquely sensitive to Aβ 56, we thought it was important to use human rather than rodent neurons. These considerations encouraged us to develop a medium-throughput assay to routinely measure the neuritotoxicity of AD brain extracts on human neurons, i.e., an all human-derived bioassay. We took advantage of recent advances in iPSC biology to generate highly differentiated human neurons (Fig. 4) that can be prepared just as rapidly as mature rodent primary neurons. The method employed is a modified version of the Neurogenin 2 (Ngn2) differentiation protocol pioneered by the Südhof group 57 and is described in Supplementary Methods and illustrated in Fig. 4a. The Ngn2 method incorporates a GFP expression cassette, so all successfully transduced cells are GFP fluorescent (Fig. 4b). To assess neuritic maturation and the effects of AD brain-derived Aβ on neurites, we used the IncuCyte Zoom live-cell video microscopy system from Essen Bioscience (Fig. 4b, c). Beginning 7 days after induction of Ngn2 expression (a time point we designate as iN day 7), cells were imaged every 12 h for a total of 14 days, and neurite length and branch points were determined. From iN days 7–14, neurite length and branch points increased rapidly but thereafter remained constant (Fig. 4b, c). The levels of GluA1, PSD-95, synaptophysin, synapsin 1 and tau increased between iN days 7 and 14 and then remained constant (Supplementary Figure 5B). Neurons stained at iN day 21 were positive for the neuronal markers MAP2, NeuN and tau (Supplementary Figure 5C). Fig. 4 Time-lapse imaging of differentiated human induced neurons (iNs). Human induced neurons (iNs) were prepared as described in the Supplementary Methods and used for live-cell imaging from iN day 0 to iN day 21. a Schematic depicting the process used to generate and mature iNs, indicating the nomenclature used to designate the different stages of the process. b Phase contrast and fluorescence images at iN days 7, 14 and 21 are shown in the upper panels. These images were then analyzed using the IncuCyte NeuroTrack algorithm to identify neurites (pink) and cell bodies (brown). NeuroTrack-identified neurites (pink) and cell bodies (brown) are shown superimposed on the phase contrast image in the lower panels. The scale bar is 100 μm. c Images were collected at 12 h intervals from iN days 7–21 and analyzed using the IncuCyte NeuroTrack algorithm to determine neurite length (left) and the number of neurite branch points (right). Each data point is the average of measurements from 12 wells of iN cells cultured in the same 96-well plate.
Error bars are SEM. Once this consistent maturation was established, we exposed the neurons to Aβ-rich soluble AD brain extracts at iN day 21 and imaged them every 2 h for a total of 72 h of exposure. Application of AD1 brain extract (Supplementary Figure 6A–C) caused a time- and dose-dependent decrease in both neurite length and branch points relative to the same neurons measured between −6 and 0 h prior to treatment, and to sister wells of untreated neurons (neurite length, p < 0.0001; branch points, p < 0.0001, two-way ANOVA) (Fig. 5a, b). Importantly, AD1 extract that had been immunodepleted of Aβ (called ID-AD1; Supplementary Figure 6A–C) had no significant effect on either neurite length or branch points (Fig. 5b, c) (neurite length, p = 0.7195; branch points, p = 1.0000, two-way ANOVA). The effects of AD extracts were clearly dose- and Aβ-dependent irrespective of whether normalized means of triplicate wells (Fig. 5), individual wells (Supplementary Figure 7A), or non-normalized means were used (Supplementary Figure 7B). To examine the generalizability of this effect, we tested a soluble Aβ-rich extract from a second AD brain, AD2 (Supplementary Figure 6D–F). As with AD1, AD2 caused a time- and dose-dependent decrease in both neurite length and branch points (neurite length, p < 0.0001; branch points, p < 0.0001, two-way ANOVA), whereas ID-AD2 had no effect (Supplementary Figure 8A, B and Fig. 5c) (neurite length, p = 1.0000; branch points, p = 0.9973, two-way ANOVA). Importantly, neither AD1 nor AD2 caused any sign of overt perikaryal loss, and the number of cell body clusters remained constant throughout the course of the experiments and did not differ from the corresponding ID-AD or media controls (Supplementary Figure 8C) (AD1, AD vs ID, p = 1.0000; AD2, AD vs ID, p = 0.0745, two-way ANOVA). In separate studies, Aβ-rich brain extracts from three other AD patients also caused neuritotoxicity, albeit to different extents, and in each case neuritotoxicity was prevented by specific immunodepletion of Aβ 34 . Importantly, extracts from 2 control brains (Supplementary Figure 9) had no measurable adverse effects on iNs. Fig. 5 Treatment of iNs with AD brain-derived soluble Aβ induces neuritic dystrophy. Live-cell imaging was used to monitor the effect of Aβ-containing AD brain extracts on iNs. a iN day 21 cultures were treated with mock-immunodepleted AD1 extract (Mock ID) or AD1 extract immunodepleted with the pan anti-Aβ antiserum AW7 (AW7 ID) and cells imaged for 72 h. Phase contrast images (left panels) at 0, 24, 48, and 72 h were analyzed using the IncuCyte NeuroTrack algorithm to identify neurites (middle panels), and the NeuroTrack-identified neurites (pink) are shown superimposed on the phase contrast image (right panels). Scale bars are 100 μm. b Each well of iNs was imaged for 6 h prior to addition of sample, and the NeuroTrack-identified neurite length and branch points were used to normalize the neurite length and branch points measured at each interval after addition of sample. Mock-ID AD1 extract was tested at 3 dilutions, 1:4, 1:8, and 1:16. Immunodepleted AD1 was tested at 1:4, and cells treated with medium alone were used to monitor the integrity of untreated cells. The values shown in graphs are the average of triplicate wells for each treatment ± SEM.
c Plots of normalized neurite length (left panel) and neurite branch points (right panel) are derived from the last 6 h of the traces shown in b and in Supplementary Figure 8B and are presented as mean values ± SEM. Application of AD1 brain extract caused a decrease in both neurite length and branch points relative to: (i) the same neurons prior to treatment, and (ii) sister wells of untreated neurons (neurite length, p < 0.0001; branch points, p < 0.0001, two-way ANOVA). Importantly, AD1 extract that had been immunodepleted of Aβ had no significant effect on either neurite length or branch points (neurite length, p = 0.7195; branch points, p = 1.0000, two-way ANOVA). The results shown are representative of at least three independent experiments. 1C22 protects against Aβ toxicity better than 3D6 or 266 Having established a quantitative paradigm to monitor neuritotoxicity, we next assessed whether the 3 mAbs we had characterized for Aβ binding could attenuate neuritotoxicity induced by AD brain-derived Aβ. To control for non-specific antibody effects, we used an isotype control IgG1 antibody, 46–4, which was raised against HIV glycoprotein 120 58 . Half of the medium on iNs was removed (leaving ~100 μl) and then 50 μl of mAb stock solution (0.4 to 12 μg/ml) was added plus either 50 μl AD extract or fresh medium. The mAb concentrations tested were 0.1, 0.5, 1.0, 1.5, 2.0, 2.5, and 3 μg/ml and were applied in the presence or absence of a 1:4 diluted AD brain extract (which was itself a 20% (w/v) brain extract). As before, ID-AD1 had no effect on either neurite length (Fig. 6) or branch points (Supplementary Figure 10) compared to medium alone (p = 1.0000, two-way ANOVA). In contrast, the 1:4 diluted AD1 extract caused a profound reduction over 72 h in both neurite length (Fig. 6) and branch points (Supplementary Figure 10) compared to the 6 h pre-treatment interval and to the medium control (p < 0.0001, two-way ANOVA). Co-administering 46–4 did not attenuate the neuritotoxicity induced by AD1 (Fig. 6a and Supplementary Figure 10A) (p < 0.0001, AD1/46–4 3 μg/ml vs. medium control, two-way ANOVA), whereas addition of 1C22 caused a dose-dependent rescue of neurite length (Fig. 6d) and complexity (Supplementary Figure 10D). Notably, at 3 μg/ml 1C22 conferred near complete protection against the effects of AD1 (Fig. 6d and Supplementary Figure 10D) (p = 0.9840, AD1/1C22 3 μg/ml vs. medium control, two-way ANOVA). Both 266 (Fig. 6b and Supplementary Figure 10B) and 3D6 (Fig. 6c and Supplementary Figure 10C) partially protected against the disruptive effects of AD1 extract. 3D6 was more effective than 266, and 1C22 yielded the best protection. The mAbs exerted similar protective effects when co-administered with AD2 extract, i.e., 1C22 afforded the greatest protection and 266 the least (Supplementary Figure 11). Fig. 6 Anti-Aβ antibodies dose-dependently attenuate the neuritotoxic effects of AD brain extracts. To determine whether anti-Aβ antibodies could protect against the neuritotoxicity induced by Aβ-containing AD brain extracts, iNs were treated with AD1 extract at a dilution of 1:4 in the presence or absence of increasing amounts of antibody. Graphs show time-course measurements of NeuroTrack-defined neurite length of iNs treated ± AD1 extract and a 46–4, b 266, c 3D6, and d 1C22. Each data point is the average of 3 wells ± SEM. Importantly, immunocytochemical analysis of end-stage cultures used in Fig.
6 confirmed the neuritic loss seen by live-cell imaging and revealed an increase in phospho-tau in neurons treated with AD1 extract vs. vehicle- or ID-AD1-treated cells (Supplementary Figure 12). Addition of 1C22 at 3 μg/ml completely rescued the increase in phospho-tau, whereas the same concentration of 266 only modestly attenuated tau phosphorylation. To test the reproducibility of the effects of AD extract on neurite length and complexity, we conducted 2 replicate experiments, each time using a different iN culture and different aliquots of the AD1 extract and the mAbs, and with the experimenter always blind to the identity of the mAb. Figure 7 shows the results from the final 6 h of automated imaging for 3 separate experiments testing the effects of AD1 extract on neurite length and branch points in the presence or absence of 3 μg/ml mAb. In neurons treated with ID-AD1 or medium alone, there was a slight (but statistically non-significant) decrease in neurite length (1.1 to 13.8%), with branch point numbers sometimes slightly increased or decreased (−7.9 to 5.7%, compared to the first 6 h interval prior to treatment). Fig. 7 mAb 1C22 more effectively protects against AD brain extract-induced neuritotoxicity than either 3D6 or 266. Three independent experiments were conducted as in Fig. 6. a Graphs show the normalized change in NeuroTrack-defined neurite length or branch points over the last 6 h of imaging of iNs treated with AD1 extract ± 3 μg/ml mAbs 46–4, 266, 3D6, or 1C22. In each individual experiment, 1C22 almost completely protected against the neuritotoxicity of AD1 extract and always exerted a stronger effect than 3D6, and 3D6 always provided better protection than 266 (left panel, p = 0.9991, 1C22 vs 3D6; p < 0.0001, 1C22 vs 266, two-way ANOVA). Highly similar results were obtained for neurite branch points (right panel, p = 0.9464, 1C22 vs 3D6; p < 0.0001, 1C22 vs 266, two-way ANOVA). b To investigate the effect of mAb concentration, NeuroTrack-defined neurite length (left panel) was averaged over the last 6 h of imaging for each treatment, the values were normalized to the immunodepleted AD1 treatment, and neurite length was plotted vs. antibody concentration. The ability of mAbs to protect neurite branch points (right panel) was determined in a similar fashion as described for neurite length. Values are the average ± SD of each condition analyzed in triplicate. When error bars are not visible they are smaller than the size of the symbol. Addition of AD1 (dark red diamonds) caused a 54.6% decrease in neurite length in experiment 1, a 60.2% decrease in experiment 2, and a 46.4% decrease in experiment 3 (Fig. 7a). Co-administration of 46–4 (yellow diamonds) did not alter neurite length relative to AD1 treatment alone (p = 1.0000, two-way ANOVA). In the 3 experiments, 266 (orange diamonds), 3D6 (light blue diamonds) and 1C22 (gray diamonds) protected against AD1. In each individual experiment, 1C22 always exerted a stronger effect than 3D6, and 3D6 always provided better protection than 266 (Fig. 7a left panel, p = 0.9991, 1C22 vs 3D6; p < 0.0001, 1C22 vs 266, two-way ANOVA). Highly similar results were obtained for neurite branch points (Fig. 7a right panel, p = 0.9464, 1C22 vs 3D6; p < 0.0001, 1C22 vs 266, two-way ANOVA). To compare the dose-dependent effects of the mAbs, neurite length (Fig. 7b, left panel) was averaged over the final 6 h of imaging for each treatment and the resultant values normalized to the ID-AD1 control treatment.
The results from 3 separate experiments were then averaged, plotted vs. mAb concentration, and used to determine IC 50 s. 1C22 and 3D6 produced effective dose-dependent protection against AD1, with apparent IC 50 s of 0.795 μg/ml (neurite length) and 0.99 μg/ml (branch points) for 1C22, and 1.12 μg/ml (neurite length) and 1.52 μg/ml (branch points) for 3D6. The protection afforded by 266 was so modest that it was not possible to estimate an IC 50 . Discussion Anti-Aβ immunotherapeutics are the furthest advanced among disease-modifying agents being tested in AD patients, with multiple trials underway worldwide at this writing. Until now, preclinical assessment of candidate antibodies has relied largely on in vitro binding experiments with synthetic Aβ 25 , 38 , 59 and passive immunization of APP transgenic mice 60 , 61 , 62 . However, these approaches have not translated well to humans 2 , 3 , 4 , and it is uncertain whether synthetic Aβ peptides or APP transgenic mice can yield the type of neurotoxic Aβ assemblies that accumulate in the brains of humans with AD. In regard to behavioral deficits observed in some APP transgenic mice, it is unclear whether these are due to Aβ and/or a result of over-expression of APP 63 . Indeed, mice which produce and deposit human Aβ in the absence of over-expression of APP (i.e., APP knock-in mice or BRI2-Aβ mice) show no deficits in synaptic plasticity or memory 64 . Here we report the development of an unbiased, in vitro assay that combines the use of Aβ-rich human (AD) brain extracts and human neurons. The use of only human material in our new testing paradigm ensures that the Aβ species applied and the bioactivity readout are directly relevant to the human disease. Moreover, quantitative competition for binding to active vs. inactive forms of human Aβ is built into our system, since a large portion of the Aβ species in aqueous extracts of AD brain are inactive 33 , 34 . While no single assay can be expected to predict the absolute utility of an anti-Aβ antibody when administered to humans, the novel paradigm described here would enable important objective comparisons of new anti-Aβ antibodies and current lead antibodies in human trials. In addition to being based solely on human brain tissue and human neurons, our approach offers a number of other advantages over current in vivo therapeutic antibody screens. First and most obviously, our procedure is relatively rapid and should allow for the testing of large numbers of antibodies; it could thus serve as a primary screen to identify novel antibodies of interest. Second, the measurement of neuritotoxicity and its attenuation is quantitative, making it possible to estimate the amount of antibody that would be required to neutralize neuritotoxic Aβ in the brains of patients with AD. For instance, the amount of Aβx-42 in AD1 cortical extract diluted 1:4 was ~1.55 nM and the approximate IC 50 for mAb 1C22 was ~5.33 nM (0.8 μg/ml). Therefore, on a molar basis, a ~4-fold excess of 1C22 was required to achieve a 50% attenuation of neuritotoxicity from human Aβ. These calculations are based on measurement of Aβx-42 and analysis of a single brain extract (AD1), but it is eminently feasible to assay additional brain extracts and to measure multiple Aβ alloforms. Thus, one can readily estimate the minimal dose of any mAb required to achieve maximal binding to neurotoxic Aβ species present across a range of AD brains.
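The molar comparison above is straightforward to reproduce. A minimal sketch follows; the ~150 kDa molecular weight for intact IgG is a standard assumption, not a figure stated in the text.

```python
# Minimal sketch of the molar comparison described above: converting an
# IgG concentration in ug/ml to nM (assuming ~150 kDa for intact IgG)
# and computing the molar excess over Abeta(x-42) in the diluted extract.
IGG_MW_G_PER_MOL = 150_000.0   # ~150 kDa; typical intact IgG (assumption)

def ug_per_ml_to_nM(conc_ug_ml: float, mw_g_per_mol: float) -> float:
    """Convert a mass concentration (ug/ml) to a molar one (nM)."""
    # ug/ml equals mg/l; (mg/l) / (g/mol) gives mmol/m^3 == umol/l * 1e-3
    return conc_ug_ml / mw_g_per_mol * 1e6

ic50_1c22_nM = ug_per_ml_to_nM(0.8, IGG_MW_G_PER_MOL)   # ~5.33 nM
abeta_x42_nM = 1.55                                      # in 1:4 diluted AD1

# prints ~3.4-fold, which the text rounds to a ~4-fold molar excess
print(f"IC50(1C22) ~ {ic50_1c22_nM:.2f} nM")
print(f"molar excess ~ {ic50_1c22_nM / abeta_x42_nM:.1f}-fold")
```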
Other advantages of our neuritotoxicity assay include the use of genetically identical and consistent human cultures, the supply of which is essentially limitless and could be adapted to use cells from various donors susceptible to AD. Of course, for assessing certain effects of mAbs such as ARIA 65 , in vivo animal experiments will be necessary, but this need only be done with the most promising leads identified using our screen. Such an approach would both markedly expedite the discovery process and minimize the unnecessary use of laboratory animals. Of the three antibodies tested, our novel aggregate-preferring mAb, 1C22, was the most efficacious. Given the results of our binding studies (summarized in Supplementary Table 1) on synthetic Aβ and the prevailing belief that soluble Aβ aggregates (aka oligomers) may be the principal initiators of the AD pathogenic process 5 , 66 , 67 , 68 , the effectiveness of 1C22 vs. 3D6 and 266 may seem predictable. However, it is important to emphasize that not all soluble Aβ oligomers are bioactive 33 , 34 , and it is not clear if there is a specific form of Aβ that is toxic or if toxicity is conferred by a pool of soluble aggregated Aβ species. Different aggregate-preferring mAbs could exhibit distinct recognition of active vs. inactive oligomers and therefore may allow different degrees of protection against cytotoxic Aβ. Typical of the pattern we have seen in extracts from many AD brains 21 , 33 , 34 , most of the Aβ in the AD1 and AD2 water-soluble extracts was aggregated and only a small fraction existed as unaggregated monomers (Supplementary Figure 6). In vitro binding experiments indicate that 3D6 and 266 bind solution-phase synthetic PFs as well as or better than 1C22 (Figs. 1–3), yet 1C22 offers the best protection against AD brain-derived neuritotoxic Aβ (Figs. 6, 7). With regard to binding to synthetic Aβ, the biggest difference between 1C22 and either 3D6 or 266 relates to monomer: 1C22 evinces much weaker binding to monomer than either 3D6 or 266. Since monomer contributed <7% of the total Aβ42 in the AD1 extract (Supplementary Figure 6), it seems unlikely that the differential recognition of monomer could account for the vastly superior performance of 1C22 relative to 266 in neutralizing human Aβ oligomers in our IncuCyte assay. Although the form or forms of Aβ which mediate neuritotoxicity are as yet undefined, it is reasonable to assume that the greater protection afforded by 1C22 relative to 3D6 and 266 results from differential binding to toxic Aβ. A concern about an avid, oligomer-preferring antibody such as 1C22 is the possibility that it will bind tightly to amyloid plaques, and that in AD brains with abundant plaques, the concentration of 1C22 available to target soluble, neurotoxic Aβ would accordingly be reduced. An ideal anti-Aβ mAb would therefore exhibit minimal reactivity with plaques 69 . Interestingly, in studies using fresh-frozen human brain (Supplementary Figure 4) and in in vivo mouse studies 70 , we have shown that 1C22 exhibits only modest binding to plaques. The relatively low binding of this highly avid antibody to plaques suggests that there is more to 1C22's binding than avidity alone. This conclusion is consistent with our observation that 1C22 prefers an extended or conformational epitope (Supplementary Figure 3) and suggests that this epitope is present and accessible on diffusible neuritotoxic Aβ but is not readily accessible on fibrillar plaques.
These results also suggest that soluble neuritotoxic Aβ oligomers may have structural properties distinct from much of the oligomeric Aβ that ends up deposited in plaques. Currently, the structural differences between naturally occurring active and inactive Aβ oligomers are not understood, and it is only recently that the field has begun to appreciate that not all Aβ oligomers in the AD brain are equally toxic. While it will be challenging to identify the molecular bases of these differences, our new screen may aid this process. The approach described here should allow identification of monoclonal antibodies that best target brain-derived neuritotoxic Aβ, and in the future, we and others will screen libraries of small molecules to identify compounds with similarly distinct properties. In turn, the identification of such small molecules and antibodies may enable the full purification and detailed biochemical analysis of the most noxious forms of human Aβ. Focusing on relevant bioactivity assays and sources of natural (AD) Aβ, as done here, should enable the discovery of more selective and efficacious anti-Aβ immunotherapeutics as well as imaging agents and other diagnostic tools. Methods Peptides and reagents Human Aβ(1–40) and Aβ(1–42), and Aβ(1–40) in which serine 26 was substituted with cysteine, were synthesized and purified by Dr. James I. Elliott at Yale University (New Haven, CT). Peptide masses and purities (>95%) were confirmed by electrospray ionization/ion trap mass spectrometry and reverse-phase HPLC. Overlapping Aβ peptide fragments were synthesized and purified at the Biopolymer Laboratory in the Department of Neurology at UCLA. All other chemicals were of the highest purity available and, unless indicated otherwise, were obtained from Sigma-Aldrich (St. Louis, MO). Antibodies The antibodies used in this study and their sources are described in Supplementary Table 2. Assays used to assess antibody binding to Aβ conformers The preparation of Aβ conformers and the generation and characterization of 1C22 are described in Supplementary Methods. Three distinct immunoassay formats were employed to investigate binding of mAbs to different Aβ conformers. Each assay used the same microtiter plates (#3369, COSTAR, Corning, NY), blocking buffer, and assay buffer. The blocking buffer was 1% (w/v) BSA in PBS, pH 7.4, and the assay buffer consisted of blocking buffer supplemented with 0.05% (v/v) Tween 20. Direct ELISA was performed using microtiter plate wells coated with 200 ng of Aβ conformer and subsequently blocked. Antibody binding curves against plate-immobilized Aβ conformers were determined by diluting mAbs with assay buffer into duplicate wells. Antibody binding to blocked wells that had no Aβ, and wells that had Aβ but no primary antibody, served as background controls. Biotinylated goat anti-mouse IgG (γ-chain specific, Sigma-Aldrich) was then applied and detected using streptavidin–horseradish peroxidase (Jackson ImmunoResearch Laboratories, Inc.) and 3,3′,5,5′-tetramethylbenzidine (TMB) substrate (SureBlue Reserve™; KPL, Gaithersburg, MD, USA). Antibody binding curves were generated by subtracting background from assay signal, and the resultant graphs were fitted using a standard 3-parameter sigmoid (logistic) function (SigmaPlot 2000, version 6; Systat Software, Chicago, IL). The concentration of antibody that gave half-maximal binding, EC 50 , and the maximum signal amplitude were determined from the fitted curves.
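As an illustration of the EC 50 determination just described, the sketch below fits a 3-parameter logistic function to a hypothetical background-subtracted binding curve; scipy's curve_fit stands in for the SigmaPlot fitting used in the paper, and the dilution series is a placeholder.

```python
# Minimal sketch of EC50 estimation: fitting a 3-parameter logistic
# (top amplitude, EC50, Hill slope) to a background-subtracted antibody
# dilution series. Concentrations and OD values are illustrative.
import numpy as np
from scipy.optimize import curve_fit

def logistic3(conc, top, ec50, hill):
    """3-parameter logistic rising from 0 to `top`."""
    return top / (1.0 + (ec50 / conc) ** hill)

conc = np.array([0.01, 0.03, 0.1, 0.3, 1, 3, 10, 30])          # nM antibody
od = np.array([0.02, 0.06, 0.18, 0.45, 0.95, 1.45, 1.72, 1.80])  # OD450

popt, _ = curve_fit(logistic3, conc, od, p0=[od.max(), 1.0, 1.0])
top, ec50, hill = popt
print(f"EC50 = {ec50:.2f} nM (max amplitude {top:.2f}, Hill slope {hill:.2f})")
```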
Competition ELISA was performed using microtiter plate wells coated with 200 ng of Aβ monomers and subsequently blocked. Solution-based Aβ conformers were serially diluted (0–0.1 mg/ml) into the coated wells. Then, antibody was immediately added to each well at a concentration equal to twice its EC 50 value determined by direct ELISA, and the ability of each Aβ conformer to inhibit antibody binding to plate-immobilized Aβ was determined. Background control wells included blocked wells that had no Aβ, and wells that had Aβ but no primary antibody added. The competitor concentration that produced half-maximal inhibition of antibody binding, IC 50 , was determined from sigmoidally fitted curves. Capture ELISA was performed to assess an mAb's ability to capture Aβ conformers in solution using plate-immobilized antibodies (200 ng per well). Briefly, synthetic Aβ conformers were serially diluted (0–10 μg/ml) with assay buffer into appropriate microtiter wells. The amount of antibody-bound Aβ was determined using a polyclonal rabbit anti-Aβ antibody, AW7 71 , an HRP-conjugated donkey anti-rabbit IgG (whole molecule, GE Healthcare, Buckinghamshire, UK) and TMB substrate (SureBlue Reserve™; KPL). EC 50 values and maximum signal amplitudes were determined from sigmoidally fitted Aβ binding curves. Surface plasmon resonance (SPR) was used to perform antibody–Aβ binding studies using a Biacore 3000 optical biosensor (Piscataway, NJ) at room temperature, with a running buffer consisting of PBS containing 0.05% Tween 20, pH 7.4. CM5 chips (Biacore, Uppsala, Sweden) were activated using N-ethyl-N′-(dimethylaminopropyl)carbodiimide (EDC) and N-hydroxysuccinimide (NHS) (GE Healthcare), and IgG (3648 ± 658 response units (RU)) or PFs (3344 ± 290 RU) were conjugated to chips via primary amines in optimized immobilization buffer (10 mM sodium acetate, pH 4.0–5.5). Reference flow cells consisted of activated chip surfaces that were blocked with ethanolamine. Each binding experiment consisted of analyte mAb (0–1 μM), Fab (0–1 μM), or an Aβ conformer (0–1 μM) flowed over a chip at 30 μL/min. Sensorgrams were recorded with association and dissociation phases monitored for 300 s and 600 s, respectively. Control studies confirmed that chip regeneration with 10 mM glycine HCl, pH 2.0–3.0, did not modulate analyte binding. Except for PFs, equilibrium and/or kinetic constants for analyte binding were determined by fitting the sensorgrams, corrected for reference cell signal, to a simple 1:1 Langmuir binding model, or by steady-state analysis, using BIAevaluation software (version 3.2, Biacore Inc.). IgG binding parameters were not determined for experiments involving the PF analyte since binding was essentially irreversible, these assemblies are heterogeneous in size, and, presumably, each assembly has a unique propensity for multivalent antibody binding. Addition of AD brain extract to iNs and live-cell imaging Production and characterization of human brain extracts and induced neurons (iNs) from human induced pluripotent stem cells (iPSCs) are described in the Supplementary Methods. Two 0.5 ml aliquots of mock-immunodepleted (AD) or AW7-immunodepleted (ID-AD) brain extracts were thawed on ice for 30–60 min, vortexed, centrifuged at 16,000 × g for 2 min, and buffer exchanged into neurobasal medium supplemented with B27/Glutamax using a HiTrap 5 ml desalting column (GE Healthcare, Milwaukee, WI).
AD and ID-AD extracts (1 ml) were applied to a desalting column using a 1 ml syringe at a flow rate of ~1 ml/min and eluted with culture medium using a peristaltic pump. In total, ten 0.5 ml fractions were collected. Prior experimentation revealed that the bulk of Aβ eluted in fractions 4 and 5. These two fractions were pooled; this pool is referred to as the exchanged extract. A small portion (50 μl) of the exchanged extract was taken for Aβ analysis and the remainder was used in iN experiments. Approximately 7 h prior to exchanging AD and ID-AD extracts into culture medium, iN day 21 neurons were placed in an IncuCyte Zoom live-cell imaging instrument (Essen Bioscience, Ann Arbor, MI). Four fields per well of a 96-well plate were imaged every 2 h for a total of 6 h. This analysis was used to define neurite length and branch points prior to addition of brain extracts. Buffer-exchanged brain extracts were diluted 1:2 with culture medium. Half of the medium on iNs was removed (~100 μl) and replaced with 100 μl of 1:2 diluted buffer-exchanged extract, yielding a 1:4 diluted extract on iNs. Treatments using 1:8 and 1:16 diluted extracts were performed in the same manner. For long-term, continuous imaging, images of four fields per well were acquired every 2 h for 3 days (starting at iN day 21). Whole image sets were analyzed using IncuCyte Zoom 2016A software (Essen Bioscience, Ann Arbor, MI). The NeuroTrack analysis job was used to automatically define neurite processes and cell bodies based on phase contrast images. Typical settings were: Segmentation Mode, Brightness; Segmentation Adjustment, 1.2; Cell body cluster filter, minimum 500 μm 2 ; Neurite Filtering, Best; Neurite Sensitivity, 0.4; Neurite Width, 2 μm. Total neurite length (in millimeters) and the number of branch points were quantified and normalized to the average value measured during the 6 h period prior to sample addition. Total neurite length is the summed length of neurites that extend from cell bodies, and the number of branch points is the number of intersections of the neurites in the image field. For experiments involving addition of mAbs to iNs, 4× stocks of mAbs (0.4 to 12 μg/ml) were prepared in iN medium. Half of the medium on iNs was removed (~100 μl) and replaced with 50 μl of mAb stock plus either 50 μl exchanged extract or 50 μl fresh iN medium. Thus, the mAb concentrations tested ranged from 0.1 to 3 μg/ml and were applied in the presence and absence of 1:4 diluted AD brain extract. Data analysis and statistical tests Data shown in all figures are representative of at least 2 independent experiments. For live-cell imaging experiments, samples and treatments were coded and tested in a blinded manner. Differences between groups were tested with two-way analysis of variance (ANOVA) with Bonferroni post-hoc tests or Student's t-tests (# p < 0.05, ## p < 0.01, and ### p < 0.001). Data availability All data generated or analyzed during this study are included in this published article (and its supplementary information files) and all raw data are available from the authors upon reasonable request.
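The per-well normalization described above (each measurement divided by that well's average over the 6 h pre-treatment window) can be expressed in a few lines; the sketch below uses illustrative arrays, not the study's data.

```python
# Minimal sketch of the per-well normalization described above: each
# well's neurite length is divided by that well's average over the 6 h
# window preceding sample addition. Arrays are illustrative placeholders.
import numpy as np

# rows = wells, columns = imaging time points (every 2 h); the first
# 4 columns span the pre-treatment window (-6, -4, -2, 0 h)
neurite_len = np.array([
    [2.10, 2.05, 2.12, 2.08, 1.90, 1.60, 1.30],   # AD-extract-treated well
    [2.00, 2.02, 1.98, 2.01, 2.03, 1.99, 2.00],   # ID-AD control well
])

baseline = neurite_len[:, :4].mean(axis=1, keepdims=True)
normalized = neurite_len / baseline   # 1.0 == pre-treatment level
print(np.round(normalized, 2))
```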
Two new studies published by investigators from Brigham and Women's Hospital illustrate that not all forms of amyloid-beta (Aβ) protein—the protein thought to initiate Alzheimer's disease—play an equally menacing role in the progress of the disease. Using a new way of preparing and extracting the protein as well as a new technique to search for promising drug candidates, researchers have highlighted the importance of testing and targeting different forms of Aβ. Their work may help advance the search for more precise and effective drugs to prevent or halt the progress of Alzheimer's disease. "Many different efforts are currently underway to find treatments for Alzheimer's disease, and anti-Aβ antibodies are currently the furthest advanced. But the question remains: what are the most important forms of Aβ to target? Our study points to some interesting answers," said Dominic Walsh, Ph.D., a principal investigator in the Ann Romney Center. Aβ protein can take forms ranging from monomers—single molecules—to twisted tangles of plaques that can pollute the brain and are large enough that they can be seen with a traditional microscope. Walsh compares monomers to single Lego bricks, which can start sticking together to form complex structures of varying sizes. The two recently published studies investigate how to find new potential therapeutics that can target the structures most likely to cause harm. Most Alzheimer's disease studies use synthetic Aβ to approximate what conditions in the brain of an Alzheimer's patient might be like. A small number of researchers have used Aβ extracted from human brain, but the extraction process is crude. In a study published in Acta Neuropathologica in April, Walsh and colleagues developed a much gentler extraction protocol to prepare samples from subjects with Alzheimer's disease. The team found that Aβ was far more abundant in traditional crude extracts, but that the bulk of the extracted Aβ was innocuous. In contrast, much less Aβ was obtained with the gentler protocol, but in this case most of the Aβ was toxic. In a second study published in Nature Communications in July, Walsh and colleagues developed a screening test to try to find potential drugs to target the toxic forms of Aβ. The new technique uses extracts of brain samples from Alzheimer's disease patients and live-cell imaging of stem-cell derived brain cells to find promising therapeutics. The team reports on 1C22, an Aβ antibody that they found could protect against toxic forms of amyloid-beta more effectively than the most clinically advanced Alzheimer's disease therapeutics currently in clinical trials. "We anticipate that this primary screening technique will be useful in the search to identify more potent anti-Aβ therapeutics in the future," said Walsh.
10.1038/s41467-018-05068-w
Medicine
New research suggests nose picking could increase risk for Alzheimer's and dementia
Sheng Liu et al. Generalizable deep learning model for early Alzheimer's disease detection from structural MRIs, Scientific Reports (2022). DOI: 10.1038/s41598-022-20674-x Journal information: Scientific Reports
https://dx.doi.org/10.1038/s41598-022-20674-x
https://medicalxpress.com/news/2022-10-nose-alzheimer-dementia.html
Abstract Early diagnosis of Alzheimer's disease plays a pivotal role in patient care and clinical trials. In this study, we have developed a new approach based on 3D deep convolutional neural networks to accurately differentiate mild Alzheimer's disease dementia from mild cognitive impairment and cognitively normal individuals using structural MRIs. For comparison, we have built a reference model based on the volumes and thickness of previously reported brain regions that are known to be implicated in disease progression. We validate both models on an internal held-out cohort from the Alzheimer's Disease Neuroimaging Initiative (ADNI) and on an external independent cohort from the National Alzheimer's Coordinating Center (NACC). The deep-learning model is accurate, achieving an area-under-the-curve (AUC) of 85.12 when distinguishing between cognitively normal subjects and subjects with either MCI or mild Alzheimer's dementia. In the more challenging task of detecting MCI, it achieves an AUC of 62.45. It is also significantly faster than the volume/thickness model, for which the volumes and thicknesses need to be extracted beforehand. The model can also be used to forecast progression: subjects with mild cognitive impairment misclassified as having mild Alzheimer's disease dementia by the model were faster to progress to dementia over time. An analysis of the features learned by the proposed model shows that it relies on a wide range of regions associated with Alzheimer's disease. These findings suggest that deep neural networks can automatically learn to identify imaging biomarkers that are predictive of Alzheimer's disease, and leverage them to achieve accurate early detection of the disease. Introduction Alzheimer's disease is the leading cause of dementia, and the sixth leading cause of death in the United States 1 . Improving early detection of Alzheimer's disease is a critical need for optimal intervention success, as well as for counseling patients and families, clinical trial enrollment, and determining which patients would benefit from future disease-modifying therapy 2 . Alzheimer's disease-related brain degeneration begins years before the clinical onset of symptoms. In recent years, the development of PET imaging techniques using tracers for amyloid and tau has improved our ability to detect Alzheimer's disease at preclinical and prodromal stages, but these techniques have the significant disadvantage of being expensive and requiring specialized tracers and equipment. Many studies have shown that structural MRI-based volume measurements, particularly of the hippocampus and medial temporal lobe, are somewhat predictive of Alzheimer's disease progression 3 , 4 , 5 , 6 , 7 , 8 , 9 , 10 , 11 , 12 , 13 . While MRI is advantageous in terms of availability and cost, these early attempts to discriminate healthy aging from Alzheimer's disease based on volumetry had significant limitations, including small sample sizes and reliance on semi-automated segmentation methods. This motivated the emergence of more sophisticated methods to analyze MRI data based on machine learning. In the last decade, machine learning and fully automatic segmentation methods have achieved impressive results in multiple computer vision and image processing tasks. Early applications of machine learning to Alzheimer's disease diagnosis from MRIs were based on discriminative features selected a priori 14 , 15 , 16 , 17 .
These features include regional volumes and cortical thicknesses segmented from brain regions known to be implicated in the memory loss and accelerated neurodegeneration that accompany Alzheimer's disease 17 , 18 , 19 . Newer machine learning methods based on deep convolutional neural networks (CNNs) make it possible to extract features directly from image data in a data-driven fashion 20 , 21 , 22 , 23 , 24 , 25 , 26 . These methods have been shown to outperform traditional techniques based on predefined features in most image processing and computer vision tasks 27 , 28 . In the biomedical field, CNN-based methods also have the potential to reveal new imaging biomarkers 29 , 30 . Multiple studies have addressed mild Alzheimer's disease dementia detection from MRI via deep learning, with notable examples including 3D convolutional neural networks based on 3D AlexNet, 3D ResNet, patch-based models, Siamese networks, and auto-encoder-based models, among others 31 , 32 , 33 . Based on systematic reviews and survey studies 34 , 35 , many previous approaches had major limitations in their design or validation: most of these studies focus on distinguishing Alzheimer's disease dementia patients from normal controls. However, in order to develop effective and clinically relevant early detection methods, it is crucial to also differentiate prodromal Alzheimer's disease, otherwise known as mild cognitive impairment (MCI), from both normal controls and patients with manifest Alzheimer's disease dementia. Some recent studies have made inroads to this end 36 , 37 , 38 , but do not evaluate their results on large independent cohorts, where there can be more variability in image acquisition and clinical diagnosis, which is more representative of real-world scenarios. The goal of this work is to address these significant challenges. We propose a deep-learning model based on a novel CNN architecture that is capable of distinguishing between persons who have normal cognition, MCI, and mild Alzheimer's disease dementia. The proposed model is trained using a publicly available dataset from the Alzheimer's Disease Neuroimaging Initiative (ADNI). Although a multisite study, ADNI sites follow a rigorous standard protocol and stringent quality control to minimize site differences and improve our ability to reliably detect neuroanatomical changes. To assess the performance of the proposed methodology when applied in more realistic conditions, we evaluated our approach on an entirely independent cohort of 1522 subjects from the National Alzheimer's Coordinating Center (NACC). Because (until very recently) each NIH/NIA-funded center contributing to the NACC database was free to employ different acquisition parameters, this cohort enables us to validate our approach on imaging data acquired with variable and non-standardized protocols. Our approach achieves an area-under-the-curve (AUC) of 85.12 (95% CI: 84.98–85.26) when distinguishing between cognitively normal subjects and subjects with either MCI or mild Alzheimer's dementia in the independent NACC cohort. For comparison, we have built a reference model based on the volumes and thickness of previously reported brain regions that are known to be implicated early in disease progression. These measures were obtained with the automated segmentation tool Freesurfer 39 . We demonstrate that our proposed deep-learning model is more accurate and orders-of-magnitude faster than the ROI-volume/thickness model.
Our results suggest that CNN-based models hold significant promise as a tool for automatic early diagnosis of Alzheimer's disease across multiple stages. Results Study participants The study is based on data from ADNI and NACC. The cohorts are described in Table 1. ADNI is a longitudinal multicenter study designed to develop clinical, imaging, genetic, and biochemical biomarkers for the early detection and tracking of Alzheimer's disease 40 . NACC, established in 1999, is a large relational database of standardized clinical and neuropathological research data collected from Alzheimer's disease centers across the USA 41 . Both datasets contain MRIs labeled with one of three possible diagnoses based on the cognitive status evaluated closest to the scanning time: cognitively normal (CN), mild cognitive impairment (MCI), or Alzheimer's disease dementia. Labeling criteria are described in Supplementary Table S1. We separated the ADNI subjects at random into three disjoint sets: a training set with 1939 scans from 463 individuals, a validation set with 383 scans from 99 individuals, and a test cohort of 297 scans from 90 individuals. We built an additional independent test cohort based on NACC using the following inclusion criteria: individuals aged ≥55 years with MRIs within ±6 months from the date of clinically confirmed diagnosis of cognitively normal (CN), mild cognitive impairment (MCI), or mild Alzheimer's disease dementia (AD). This resulted in a cohort of 1522 individuals (1281 CN, 322 MCI and 422 AD) and 2045 MRIs. Table 1 reports the basic demographic and genetic characteristics of participants whose scans were used in this study. While cognitive groups in ADNI are well matched on age, in the NACC cohort CN subjects were on average ~5 years younger than the two impaired groups; they were 6–7 years younger than ADNI participants. There is a female predominance in NACC data, especially in CN and MCI, and a male predominance in ADNI, notable in the impaired stages. In the impaired population (MCI, AD), the prevalence of the AD genetic risk factor APOE4 is lower in NACC compared to ADNI. Considering these significant differences in cohort characteristics, using NACC as an external validation cohort allows us to assess the robustness of our method. In both ADNI and NACC, education also seems to be lower at more advanced impairment stages, which may be indicative of lower structural reserve. Identification of cognitive impairment status Our deep-learning model is a 3D convolutional neural network (CNN) for multiclass classification, with an architecture that is specifically optimized for the task of distinguishing CN, MCI, and AD status based on MRIs (Fig. 1a; see the Methods section for more details). We also designed a gradient-boosting model 42 based on 138 volume and thickness measures of clinically relevant brain ROIs (see Supplementary Table S2 for the list) obtained by segmenting the MRIs using the Freesurfer software (v6.0, surfer.nmr.mgh.harvard.edu). Quality control was applied to the segmentations through sampling and visual inspection by a trained neuroimaging analyst (JC), in consultation with a clinical neurologist (AVM). Details of the quality control process are included in the Methods section. Figure 1 Overview of the deep learning framework and performance for Alzheimer's automatic diagnosis. ( a ) Deep learning framework used for automatic diagnosis.
( b ) Receiver operating characteristic (ROC) curves for classification of cognitively normal (CN), mild cognitive impairment (MCI) and Alzheimer's disease (AD), computed on the ADNI held-out test set. ( c ) ROC curves for classification of cognitively normal (CN), mild cognitive impairment (MCI) and Alzheimer's disease (AD) on the NACC test set. ( d ) Visualization using t-SNE projections of the features computed by the proposed deep-learning model. Each point represents a scan. Green, blue, and red colors indicate predicted cognitive groups. CN and AD scans are clearly clustered. ( e ) Visualization using t-SNE projections of the 138 volumes and thicknesses in the ROI-volume/thickness model. Compared to ( d ), the separation between CN and AD scans is less marked. The t-SNE approach is described in detail in the Methods section. In order to evaluate the diagnostic performance of the machine-learning models, we computed ROC curves for the task of distinguishing each class (CN, MCI, or AD) from the rest. Table 2 includes our results. On the ADNI held-out test set, the proposed deep-learning model achieved the following AUCs: 87.59 (95% CI: 87.13–88.05) for CN vs the rest, 62.59 (95% CI: 62.01–63.17) for MCI vs the rest, and 89.21 (95% CI: 88.88–89.54) for AD vs the rest. Table 1 Demographic characteristics of the ADNI and NACC cohorts. Table 2 Classification performance in the ADNI held-out set and an external validation set. The AUCs of the ROI-volume/thickness model were statistically significantly lower than those of the deep-learning model: 84.45 (95% CI: 84.19–84.71) for CN vs the rest, 56.95 (95% CI: 56.27–57.63) for MCI vs the rest, and 85.57 (95% CI: 85.16–85.98) for AD vs the rest. The deep-learning model achieved similar performance on the NACC external validation data compared to the ADNI held-out test set, achieving AUCs of 85.12 (95% CI: 84.98–85.26) for CN vs the rest, 62.45 (95% CI: 62.08–62.82) for MCI vs the rest, and 89.21 (95% CI: 88.99–89.43) for AD vs the rest. The ROI-volume/thickness model suffered a more marked decrease in performance, achieving AUCs of 80.77 (95% CI: 80.55–80.99) for CN vs the rest, 57.88 (95% CI: 57.53–58.23) for MCI vs the rest, and 81.03 (95% CI: 80.84–81.21) for AD vs the rest. Note that in Fig. 1b and c, the Micro-AUC is worse in the ADNI dataset than in the external dataset (NACC). This is because of the imbalance between classes: Micro-AUC tends to be driven by the classes with more examples, which are the MCI class in ADNI and the CN class in NACC. The superior Micro-AUC on NACC is therefore due to the fact that the model performs better on the CN class than on the MCI class, which is more common in ADNI. In Supplementary Figure F1 we also report the precision-recall curves for the deep learning model on both the ADNI held-out test set and the NACC external validation set. In Supplementary Tables S5 and S6 we provide confusion matrices that show the misclassification rates between different classes. We analyze the features extracted by the deep-learning model using t-distributed stochastic neighbour embedding (t-SNE), a projection method suitable for data with high-dimensional features, such as those learned via deep learning 43 . Figure 1 (d) shows the two-dimensional t-SNE projections of the deep learning-based features corresponding to all subjects in the NACC dataset. Points corresponding to CN and AD subjects are well separated.
Figure 1 (e) shows the t-SNE projections of the ROI-volume/thickness features. In this case the separation between AD and CN scans is less clear. In both cases, points corresponding to MCI scans are not clustered together. This visualization is consistent with our results, which suggest that the features extracted by the deep-learning model are more discriminative than the ROI-based features, and also that distinguishing individuals diagnosed as MCI is more challenging. Our deep learning model is significantly faster than classification based on regions of interest. On average, for each MRI, our deep learning model requires 0.07 s (plus 7 min for MNI normalization as preprocessing), compared to 11.2 h required for extracting the regions of interest with Freesurfer (we calculated the average running time of the Freesurfer software on each MRI scan; details on the computational settings can be found in the Methods section). Progression analysis We investigated whether the deep-learning model and ROI-volume/thickness model learn features that may be predictive of the progression of MCI subjects to AD. In the held-out test set of ADNI, we divided the subjects into two groups based on the classification results of the deep learning model from the baseline date, defined as the time of initial diagnosis as MCI: group A if the model classified the first scan as AD (n = 18), and group B if it did not (n = 26). Figure 2 shows the proportion of subjects in each group who progressed to AD at different months past the baseline. Based on the deep learning model, 23.02% (95% CI: 21.43%–24.62%) of subjects in group A (blue line) progress to AD, compared to 8.81% (95% CI: 8.09%–9.53%) of subjects in group B (red line). For the ROI-volume/thickness model, 20.22% (95% CI: 18.49%–21.95%) of subjects in group A (blue line) progress to AD, compared to 11.11% (95% CI: 10.32%–11.89%) of subjects in group B (red line). The forecasting ability of the deep learning model is therefore significantly higher than that of the ROI-volume/thickness-based model. Our results suggest that deep-learning models could be effective in forecasting the progression of Alzheimer's disease. Figure 2 Progression analysis for MCI subjects. ( a ) Progression analysis based on the deep learning model. ( b ) Progression analysis based on the ROI-volume/thickness model. The subjects in the ADNI test set are divided into two groups based on the classification results of the deep learning model from their first scan diagnosed as MCI: group A if the prediction is AD, and group B if it is not. The graph shows the fraction of subjects that progressed to AD at different months following the first scan diagnosed as MCI for both groups. Subjects in group A progress to AD at a significantly faster rate, suggesting that the features extracted by the deep-learning model may be predictive of the transition. Sensitivity to group differences Figure 3 shows the performance of the deep learning and the ROI-volume/thickness model across a range of sub-cohorts based on age, sex, education and APOE4 status. Supplementary Table S3 includes the AUC values and 95% confidence intervals. The deep learning model achieves statistically significantly better performance on both the ADNI and NACC cohorts. One exception is the ApoE4-positive group within NACC, for which deep-learning classification of CN vs the rest and of MCI vs the rest was worse than that of the ROI-volume/thickness model.
Differences in sex and age representation in NACC versus ADNI, as discussed above, could influence this result. However, deep learning outperformed the ROI-volume/thickness model in both males and females, in both cohorts. The CN cohort in the NACC dataset is on average younger than that in the ADNI dataset (Table 1). In order to control for the influence of age, we stratified the NACC cohort into two groups (above and below the median age of 70 years old). However, the AUCs of the deep learning model for classification of CN vs the rest were very similar in both groups (85.2 for the younger cohort and 86.1 for the older cohort). Another possible explanation for the ApoE4 difference is that NACC has a more clinically heterogeneous population, including participants with early stages of other diseases for which ApoE4 can be a risk factor, such as Lewy body and vascular dementia, and which can be clinically indistinguishable from AD at the CN and MCI stages. In both ADNI and NACC, low education (<15 years) also identified a subgroup in which deep learning was outperformed by the ROI-volume/thickness model (in distinguishing MCI from the rest). This subgroup's clinical presentation, especially at prodromal stages, may be more directly related to brain volume changes, according to the cognitive reserve hypothesis 44 . Figure 3 Performance across different subgroups. Performance of the deep learning model (in blue) and of the ROI-volume/thickness model (in red) for different subpopulations of individuals, separated according to sex, education, and ApoE4 status. Impact of dataset size In order to evaluate the impact of the training dataset size, we trained the proposed deep-learning model and the ROI-volume/thickness model on datasets of varying sizes, obtained by randomly subsampling the training dataset. As shown in Fig. 4, the performance of the ROI-volume/thickness model improves when the training data increases from 50 to 70%, but remains essentially stagnant after further increases. In contrast, the performance of the deep learning model consistently improves as the size of the training set increases. This has also been observed in recent works 45 . Figure 4 Dataset size impact. Performance of the baseline ROI-volume/thickness model (left) and the proposed deep learning model (right) when trained on datasets of different sizes (obtained by randomly subsampling the training set). The performance of the ROI-volume/thickness model improves when the training data increases from 50 to 70%, but remains essentially stagnant after further increases. In contrast, the performance of the deep learning model consistently improves as the size of the training set increases. Given that the deep learning model is trained on a very small dataset compared to standard computer-vision tasks for natural images, this suggests that building larger training sets is a promising avenue to further improving performance. Model interpretation In order to visualize the features learned by the deep learning model, we computed saliency maps highlighting the regions of the input MRI scans that most influence the probability assigned by the model to each of the three classes (CN, MCI, or AD), as described in the Methods section. Figure 5 shows the saliency maps corresponding to each class, aggregated over all scans in the ADNI held-out test set.
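An input-gradient saliency map of this kind is simple to compute in PyTorch. The sketch below uses a tiny stand-in 3D classifier (not the paper's architecture) to show the mechanics: the gradient of the top-class probability with respect to the input volume highlights influential voxels.

```python
# Minimal sketch of an input-gradient saliency map: the gradient of the
# predicted class probability with respect to the input MRI volume.
# The stand-in model is illustrative, not the paper's architecture.
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Conv3d(1, 8, kernel_size=3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool3d(1), nn.Flatten(), nn.Linear(8, 3),
)
model.eval()

x = torch.randn(1, 1, 96, 96, 96, requires_grad=True)  # one MRI volume
probs = torch.softmax(model(x), dim=1)                  # CN / MCI / AD
probs[0, probs.argmax()].backward()    # d(top-class probability) / d(input)

saliency = x.grad.abs().squeeze()      # (96, 96, 96) voxel-importance map
print(saliency.shape, float(saliency.max()))
```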
The figure also reports the relative importance of the top 30 ROIs, quantified by a normalized count of voxels with high gradient magnitudes (see the Methods section for more details). In Supplementary Table S4 we report the full list, as well as a quantification of the importance of the ROIs for the baseline volume/thickness model. Figure 5 ( a – c ) Visualization of the aggregated importance of each voxel (in yellow) in the deep learning model when classifying subjects into CN/MCI/AD. For each subject, the importance map was computed using the gradient of the deep-learning model with respect to its input (details in the Methods section). The computed gradients are visualized over the MNI T1-weighted template. ( d – f ) Top 30 regions of interest, sorted by their normalized gradient count, which quantifies their importance (see the Methods section), for each of the classes. Combining deep learning and the ROI-volume/thickness models We combined the deep learning and ROI-volume/thickness models by treating the predictions of the deep learning model as new features, and fusing them with the volume/thickness features to train a new gradient boosting model. This model achieved the following AUCs on the ADNI test set: 89.25 (95% CI: 88.82–89.63) for CN vs the rest, 70.04 (95% CI: 69.40–70.68) for MCI vs the rest, and 90.12 (95% CI: 89.75–90.49) for AD vs the rest. It achieved similar performance on the NACC external validation data: AUCs of 85.49 (95% CI: 85.06–85.92) for CN vs the rest, 65.85 (95% CI: 65.37–66.33) for MCI vs the rest, and 90.12 (95% CI: 89.86–90.38) for AD vs the rest. Discussion Our analysis supports the feasibility of automated MRI-based prioritization of elderly patients for further neurocognitive screening using deep learning models. Recent literature 46 has consistently shown a high prevalence of missed and delayed diagnosis of dementia in primary care. Major contributory factors include insufficient training, lack of resources, and limited time to comprehensively perform early dementia detection. We show that deep learning is a promising tool for performing automatic early detection of Alzheimer's disease from MRI data. The proposed model is able to effectively identify CN and AD subjects based on MRI data, clearly outperforming the model based on more traditional features such as ROI volumes and thicknesses. While identifying MCI is more challenging, our method still demonstrates improved performance compared to traditional methods. Moreover, as demonstrated in the progression analysis (Fig. 2), MCI misclassification as mild Alzheimer's dementia may prove to be clinically useful in identifying a higher-risk MCI subgroup that progresses faster. For example, in practice, if subsequent functional assessment confirms MCI rather than dementia, these patients may need to be monitored more closely, counselled differently, and more quickly introduced to disease-modifying therapy that may be available in the future. Our results suggest that deep convolutional neural networks automatically extract features associated with Alzheimer's disease. The analysis of feature importance for the ROI-volume/thickness method shows that the importance of the left hippocampus is an order of magnitude larger than that of any of the other ROIs, suggesting that this region may dominate the output of the model (see Supplementary Table S4).
This discovery is consistent with existing literature, in which a strong association between AD and the volume of the left hippocampus has been reported 47 , 48 , 49 . In contrast, the deep-learning model exploits a much wider range of regions. This highlights the potential of such models to exploit features in imaging data that are not restricted to traditional measures such as volume and thickness. Many regions previously implicated in distinguishing stage severity of Alzheimer's disease are recognized as salient by the deep-learning model (see Results). The left and right entorhinal cortex and hippocampus, which are considered the most relevant ROIs to the early stages of Alzheimer's disease progression by the Braak staging method 50 , appear within the 11 most salient regions. Other regions identified as salient that are in agreement with previous literature include the inferior lateral ventricles (left and right), the parahippocampal gyrus (left and right), and white matter hypo-intensities 51 . When comparing to another study that used segmented volumes to distinguish Alzheimer's disease dementia from controls 52 , the 4th ventricle was uniquely and highly relevant to the deep-learning model, whereas certain gyri (fusiform, temporal, angular, supramarginal) were not identified as particularly salient. Our results suggest several avenues for improving deep learning models for early detection of Alzheimer's disease. First, the datasets available to train these models are quite limited compared to standard benchmarks for computer vision tasks, which have millions of examples 53 . We show that the amount of training data has a strong effect on performance, so gathering larger training sets is likely to produce a significant boost in accuracy. Second, we have shown that combining features learned using deep learning with more traditional ROI-based features such as volume and thickness improves performance. However, using segmentation-based features is very costly computationally (segmentation takes 11.2 h on average, compared to the 7.8 min needed to apply the deep-learning model). Designing deep-learning models trained to extract volumetric information automatically may improve performance without incurring such a heavy computational cost. In this work, we limit our analysis to brain structural MRIs, in order to develop imaging biomarkers for early detection of Alzheimer's disease. Integrating information such as age or education, genetic data (e.g. single nucleotide polymorphisms), clinical test data from electronic health records, and cognitive performance test results could provide a more holistic view of Alzheimer's disease staging. Building deep learning models capable of effectively combining such information with imaging data is an important direction for future research. Materials and methods Data The data used in this study consists of imaging and diagnosis data from the Alzheimer's Disease Neuroimaging Initiative (ADNI) and the National Alzheimer's Coordinating Center (NACC). Since all the analyses were performed on de-identified data which is publicly available, IRB review was not required. In addition, all methods were carried out in accordance with the approved guidelines. The structural MRI scans (T1 MRIs) were downloaded from the ADNI portal (n = 2619). As the diagnoses are made at each screening visit, we directly used the current diagnosis (DXCURREN column) in ADNI's diagnosis summary table for each scan.
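As an illustration of this label handling, the sketch below maps DXCURREN codes to diagnostic labels with pandas. The 1/2/3 coding and the CSV filename are assumptions based on common ADNI conventions, not details given in the text.

```python
# Minimal sketch of scan labeling from ADNI's diagnosis summary table,
# assuming the conventional DXCURREN coding (1 = cognitively normal,
# 2 = MCI, 3 = AD dementia); the filename is assumed, not from the paper.
import pandas as pd

DX_MAP = {1: "CN", 2: "MCI", 3: "AD"}   # assumed coding

dx = pd.read_csv("DXSUM_PDXCONV_ADNIALL.csv")     # hypothetical path
dx["label"] = dx["DXCURREN"].map(DX_MAP)
dx = dx.dropna(subset=["label"])                  # drop unusable diagnoses
print(dx["label"].value_counts())
```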
We used the NACC dataset for external evaluation (n = 2025 MRI scans). The NACC initiative was established in 1999 and maintains a large relational database of standardized clinical and neuropathological research data collected from Alzheimer’s disease centers across the USA 54. The scan-level diagnostic labels were obtained based on the diagnosis within 6 months of the scanning time (closest visit). Scans without any diagnostic information within 6 months of the scan were excluded. Volumetric data for the same cohort were compiled from Freesurfer outputs (with the built-in commands asegstats2table and aparcstats2table). Both datasets come from large-scale multicenter studies, in which subject inclusion criteria and/or image acquisition protocols can vary by study center, leading to potential differences in scanning and diagnostic rating (see Supplementary Table S0 for a comparison of the image acquisition protocols of the two cohorts). Our analysis was restricted to patients over the age of 55 in both cohorts, and we only considered T1 MRIs (without contrast) for the study. MRI data preprocessing All scans in both cohorts were preprocessed by applying bias correction and spatial normalization to the Montreal Neurological Institute (MNI) template using the Unified Segmentation procedure 55 as implemented in step A of the T1-volume pipeline in the Clinica software 56. The preprocessed images consist of 121 × 145 × 121 voxels, with a voxel size of 1.5 × 1.5 × 1.5 mm³. Deep learning model A 3D CNN, composed of convolutional layers, instance normalization 57, ReLUs, and max-pooling layers, was designed to classify Alzheimer’s disease, mild cognitive impairment, and normal cognition cases. The architecture is described in more detail in Fig. 1a, b. In preliminary work, we showed that the proposed architecture is superior to state-of-the-art CNNs for image classification 54. The proposed architecture contains several design choices that differ from standard convolutional neural networks for classification of natural images: (1) instance normalization, an alternative to batch normalization 58, which is suitable for small batch sizes and empirically achieves better performance (see Supplementary Table S8); (2) a small kernel and stride in the initial layer, to avoid losing information in small regions; and (3) a wider network architecture with more filters and fewer layers, for feature diversity and ease of training. These techniques all independently contribute to boosting performance. As is standard in deep learning for image classification 59, we performed data augmentation via Gaussian blurring with mean zero and standard deviation randomly chosen between 0 and 1.5, and via random cropping (using patches of size 96 × 96 × 96). Training and testing routines for the DL architectures were implemented on an NVIDIA CUDA parallel computing platform (accessing 2 Intel(R) Xeon(R) Gold 6230 CPU @ 2.10 GHz nodes on the IBM LSF cluster, each with 2 NVIDIA Tesla V100 SXM2 32 GB GPUs) using the GPU-accelerated NVIDIA CUDA toolkit (cudatoolkit), the CUDA Deep Neural Network library (cudnn), and the PyTorch 60 tensor library. The model was trained using stochastic gradient descent with momentum 0.9 (as implemented in the torch.optim package) to minimize a cross-entropy loss function. We used a batch size of 4 due to computational limitations.
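A minimal PyTorch sketch of the building blocks just described (one convolution/instance-norm/ReLU/max-pool block, plus the blur-and-crop augmentation); layer counts and filter widths are illustrative, not taken from the paper:

```python
import numpy as np
import torch
import torch.nn as nn
from scipy.ndimage import gaussian_filter

def conv_block(in_ch, out_ch):
    # convolution -> instance normalization -> ReLU -> max-pooling
    return nn.Sequential(
        nn.Conv3d(in_ch, out_ch, kernel_size=3, padding=1),
        nn.InstanceNorm3d(out_ch),
        nn.ReLU(inplace=True),
        nn.MaxPool3d(kernel_size=2),
    )

def augment(volume, crop=96):
    """volume: (121, 145, 121) NumPy array after MNI normalization."""
    # Gaussian blur with sigma drawn uniformly from [0, 1.5]
    blurred = gaussian_filter(volume, sigma=np.random.uniform(0.0, 1.5))
    # Random 96x96x96 crop
    x, y, z = (np.random.randint(0, s - crop + 1) for s in blurred.shape)
    return torch.from_numpy(
        blurred[x:x + crop, y:y + crop, z:z + crop].copy()
    ).float()
```

Training would then wrap such blocks in a classifier head and optimize with torch.optim.SGD(momentum=0.9) against nn.CrossEntropyLoss, as stated above.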
We used a learning rate of 0.01 and a total of 60 training epochs, both chosen by grid search based on validation-set performance. During training, the model with the lowest validation loss was selected. ROI-volume/thickness model To build a model based on the traditional, commonly used ROI thicknesses and volumes, we first segmented each brain MRI using Freesurfer and then computed volume and thickness for these ROIs (using the Freesurfer commands asegstats2table and aparcstats2table). To obtain the volumetric data for each scan, we processed the ADNI and NACC datasets with the “recon-all” command on a high-performance computing cluster. Sixteen parallel batch jobs were run together, each assigned 320 GB of RAM and 40 CPUs. The average processing time per scan was about 12 h; 2730 ADNI scans and 2999 NACC scans were successfully processed. For each brain MRI, a total of 138 MRI volume and thickness features (full list in Supplementary Table S2) were used as inputs to construct a Gradient Boosting (GB) classifier to predict Alzheimer's disease status. GB is a standard method for leveraging pre-selected features, as opposed to learning them from the data 61. It constructs an ensemble by iteratively adding weak base predictors in a greedy manner. We applied the implementation in the Python Sklearn package v0.24.1, sklearn.ensemble.GradientBoostingClassifier 62. We set the learning rate to 0.1 (this value was selected based on validation performance). Other hyperparameters were set to their default values. After hyperparameter selection, we trained the model 5 times with different random seeds and report the average performance of these 5 models on the ADNI test set and the external NACC test dataset. Quality control for Freesurfer segmentation Because of the large number of scans, we developed a two-stage approach for the quality control (QC) of a specific ROI. In the first stage, we located outlier cases within each cohort by fitting a Gaussian distribution to the volumes and centroids of all the segmented ROIs, using a cut-off of mean ± 3 standard deviations. In the second stage, we conducted QC on the outlier and non-outlier cases separately. For the outliers, all cases were examined visually. For the non-outliers, 100 randomly selected cases were examined visually. The visual examination was conducted by a trained neuroimaging researcher (JC), in consultation with a neurologist (AM) and a radiologist (HR). This two-stage approach was repeated for two representative ROIs, namely the hippocampus and the entorhinal cortex, in each hemisphere, for each cohort. As a result, several segmentation errors were found in the outlier group of both the ADNI and NACC cohorts (and excluded from follow-up machine-learning analyses), while no errors were found in the non-outlier group. Performance metrics We computed areas under the ROC curve (AUC), which are widely used for measuring the predictive accuracy of binary classifiers. This metric captures the relationship between the true positive rate and the false positive rate as the classification threshold varies. As AUC can only be computed for binary classification, we computed AUCs for all three binary problems of distinguishing one of the categories from the rest.
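A sketch of this one-versus-rest evaluation using scikit-learn (array names illustrative); it also computes the micro- and macro-averages discussed next:

```python
import numpy as np
from sklearn.metrics import roc_auc_score

def one_vs_rest_aucs(y_true, y_prob, classes=("CN", "MCI", "AD")):
    """y_true: (n,) integer labels 0/1/2; y_prob: (n, 3) class probabilities."""
    y_true = np.asarray(y_true)
    aucs = {c: roc_auc_score((y_true == i).astype(int), y_prob[:, i])
            for i, c in enumerate(classes)}
    # Micro average: pool all per-class decisions into one binary problem
    micro = roc_auc_score(np.eye(3)[y_true].ravel(), np.asarray(y_prob).ravel())
    # Macro average: mean of the three one-vs-rest AUCs
    macro = float(np.mean(list(aucs.values())))
    return aucs, micro, macro
```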
We also calculated two types of averages, micro- and macro-averages, denoted Micro-AUC and Macro-AUC, respectively. The micro average treats the entire dataset as one aggregated result: the per-class decisions are pooled and a single AUC is computed over the aggregate. The macro average is computed by calculating the AUC for each of the binary cases and then averaging the results. t-SNE projection t-SNE is a tool to visualize high-dimensional data. It converts similarities between data points to joint probabilities and minimizes the Kullback–Leibler divergence between the joint probabilities of the low-dimensional embedding and the high-dimensional data. We applied the implementation in the Python Sklearn package v0.24.1, sklearn.manifold.TSNE, with default hyperparameters. Interpretation of models in terms of ROIs In order to analyze the features learned by the deep-learning model, we computed saliency maps consisting of the magnitude of the gradient of the probability assigned by the model to each of the three classes (CN, MCI, or AD) with respect to its input 63. Intuitively, changes in the voxel intensity of regions where this gradient is large have greater influence on the output of the model. As mentioned above, we segmented the MRI scans in our dataset to locate ROIs. To determine the relative importance of these regions for the deep-learning model, we calculated the total count of voxels where the gradient magnitude is above a certain threshold (10⁻³, which is the magnitude observed in background regions where no brain tissue is present) within each ROI, and normalized it by the total number of voxels in the ROI. We excluded left-vessel, right-vessel, optic-chiasm, left-inf-lat-vent, and right-inf-lat-vent due to their small size (less than 120 voxels). For the ROI-volume/thickness models, we determined feature importance using a standard measure for gradient boosting methods 64. This is obtained from the feature_importances_ property in the Python Sklearn package v0.24.1, sklearn.ensemble.GradientBoostingClassifier. Statistical comparisons To report the statistical significance of descriptive statistics, we employed two-tailed, unpaired testing. We used Python statsmodels v0.12.2 and scipy.stats v1.6.1. A p-value < 0.05 was reported as significant. To compute 95% confidence intervals, the bootstrapping method with 100 bootstrap iterations was used. Reproducibility The trained deep-learning model, the corresponding code and notebooks, the IDs of the subject splits (training/validation/held-out) from the publicly available ADNI, and the IDs of the NACC participants included in our external validation study are all publicly available in our open-source repository. Ethics The datasets used in this analysis are both de-identified and publicly available; therefore, IRB approval was not required for this study. Data availability Both cohorts used in our study are publicly available at no cost upon completion of the respective data-use agreements. We used all the T1 MRI scans and clinical data from ADNI (June 15th 2019 freeze) and NACC (Jan 5th 2019 freeze). The IDs of the patients and scans used in our study, along with training, validation, and test indicators, are available in our GitHub repository. The results of our Freesurfer segmentation (over 60,000 h of compute time on our HPC) for all NACC and ADNI scans are also publicly available at no cost through the NACC and ADNI websites, to anyone who has signed the ADNI and NACC data-use agreements.
Our trained model, training and validation scripts, and model predictions for each scan ID and patient ID, as well as the Freesurfer segmentation volume and thickness features, are also available as open source on GitHub.
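As a companion to the interpretation procedure described in the methods above (gradient magnitude of the class probability, thresholded at 10⁻³ and normalized per ROI), a minimal PyTorch sketch; `model` is assumed to output class probabilities and `roi_mask` to come from the Freesurfer segmentation:

```python
import torch

def saliency(model, volume, target_class):
    """volume: (1, 1, D, H, W) tensor; returns per-voxel gradient magnitude."""
    x = volume.clone().detach().requires_grad_(True)
    # Apply a softmax first if the model emits logits rather than probabilities
    model(x)[0, target_class].backward()
    return x.grad.abs().squeeze()

def roi_importance(grad_mag, roi_mask, threshold=1e-3):
    """Normalized count of high-gradient voxels within one boolean ROI mask."""
    return (grad_mag[roi_mask] > threshold).float().mean().item()
```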
Griffith University researchers have demonstrated that a bacterium can travel through the olfactory nerve in the nose and into the brain in mice, where it creates markers that are a tell-tale sign of Alzheimer's disease. The study, published in the journal Scientific Reports, showed that Chlamydia pneumoniae used the nerve extending between the nasal cavity and the brain as a path to invade the central nervous system. The cells in the brain then responded by depositing amyloid beta protein, a hallmark of Alzheimer's disease. Professor James St John, Head of the Clem Jones Center for Neurobiology and Stem Cell Research, is a co-author of the world-first research. "We're the first to show that Chlamydia pneumoniae can go directly up the nose and into the brain where it can set off pathologies that look like Alzheimer's disease," Professor St John said. "We saw this happen in a mouse model, and the evidence is potentially scary for humans as well." The olfactory nerve in the nose is directly exposed to air and offers a short pathway to the brain, one which bypasses the blood-brain barrier. It's a route that viruses and bacteria have sniffed out as an easy one into the brain. The team at the Center is already planning the next phase of research and aims to prove the same pathway exists in humans. "We need to do this study in humans and confirm whether the same pathway operates in the same way. It's research that has been proposed by many people, but not yet completed. What we do know is that these same bacteria are present in humans, but we haven't worked out how they get there." There are some simple steps to look after the lining of your nose that Professor St John suggests people can take now if they want to lower their risk of potentially developing late-onset Alzheimer's disease. "Picking your nose and plucking the hairs from your nose are not a good idea," he said. "We don't want to damage the inside of our nose, and picking and plucking can do that. If you damage the lining of the nose, you can increase how many bacteria can go up into your brain." Smell tests may also have potential as detectors for Alzheimer's and dementia, says Professor St John, as loss of sense of smell is an early indicator of Alzheimer's disease. He suggests smell tests from when a person turns 60 years old could be beneficial as an early detector. "Once you get over 65 years old, your risk factor goes right up, but we're looking at other causes as well, because it's not just age—it is environmental exposure as well. And we think that bacteria and viruses are critical."
10.1038/s41598-022-20674-x
Biology
When coral reefs change, researchers and local fishing communities see different results
A. Rassweiler et al. Perceptions and Responses of Pacific Island Fishers to Changing Coral Reefs. Ambio. In press, dx.doi.org/10.1007/s13280-019-01154-5 Journal information: AMBIO
http://dx.doi.org/10.1007/s13280-019-01154-5
https://phys.org/news/2019-03-coral-reefs-local-fishing-results.html
Abstract The transformation of coral reefs has profound implications for millions of people. However, the interactive effects of changing reefs and fishing remain poorly resolved. We combine underwater surveys (271 000 fishes), catch data (18 000 fishes), and household surveys (351 households) to evaluate how reef fishes and fishers in Moorea, French Polynesia responded to a landscape-scale loss of coral caused by sequential disturbances (a crown-of-thorns sea star outbreak followed by a category 4 cyclone). Although local communities were aware of the disturbances, less than 20% of households reported altering what fishes they caught or ate. This contrasts with substantial changes in the taxonomic composition in the catch data that mirrored changes in fish communities observed on the reef. Our findings highlight that resource users and scientists may have very different interpretations of what constitutes ‘change’ in these highly dynamic social–ecological systems, with broad implications for successful co-management of coral reef fisheries. Introduction Coral reef ecosystems are under significant anthropogenic pressures from overfishing, pollution, sedimentation, ocean acidification, and rising seawater temperatures (Bellwood et al. 2004; Hughes et al. 2018), resulting in unprecedented levels of coral mortality (Hughes et al. 2017) and shifts from coral-dominated to macroalgae-dominated community states (Rogers and Miller 2006). Beyond biodiversity loss, degraded reefs present challenges for millions of coastal dwellers who rely on healthy reef ecosystems for food, income, and their personal and cultural identities. This has prompted research examining how local communities and resource users perceive, adapt to, and manage coral reefs in the Anthropocene (McClanahan and Cinner 2012; McMillen et al. 2014), including a focus on adaptive co-management, whereby management is implemented and adapted based on knowledge about feedbacks between resource users and shifting local ecosystems (Hughes et al. 2005). The Pacific Islands region represents an ideal context to investigate how local communities and changing coral reefs interact. Island peoples have shown the capacity to adapt, cope, and innovate in the face of social–ecological change, with positive and negative outcomes for coral reef health (Johannes 2002). In some Pacific Islands, such as Fiji, Vanuatu, and the Solomon Islands, marine resources have been effectively managed over long periods through periodic fishing ground closures, gear restrictions, and other socially enforced constraints on harvesting (Cinner et al. 2006). Elsewhere, local responses to changing conditions have had negative ecological outcomes, as with poison and dynamite fishing (McManus et al. 1997). The effectiveness of adaptive responses is shaped by local cultural values and power relations that inform decision-making and the range of possibilities available (Cinner et al. 2018). Effective adaptive management requires that resource users detect or anticipate shifts in the local environment and alter their activities accordingly. Some empirical studies have demonstrated that Pacific islanders can detect rapid shifts in benthic communities disrupted by tsunamis (Lauer and Matera 2016), in addition to more gradual changes such as expanding seagrass beds (Lauer and Aswani 2010).
Numerous questions remain, however, about the sensitivity of local resource users to change, and in particular whether ecosystem disturbances identified by ecologists are similarly perceived by Pacific islanders. We addressed these issues for a small-scale reef fishery on the island of Moorea, French Polynesia. Social and ecological surveys explored how communities perceived and responded to changes in fishery resources associated with a crown-of-thorns sea star (COTS) outbreak followed by a destructive cyclone. In 2004, coral cover around Moorea was near the highest levels observed in the past half century (Trapon et al. 2011 ; Lamy et al. 2016 ), but an outbreak of corallivorous COTS that peaked in 2009, followed by Cyclone Oli in early 2010, reduced live coral cover by > 95% (Adam et al. 2011 ; Trapon et al. 2011 ; Adam et al. 2014 ; Lamy et al. 2015 ). Dead coral skeletons and cleared reef substrates provided substantial free space for growth of macroalgae, raising the possibility that a macroalgal phase shift could occur. However, benthic community changes were rapidly followed by changes in the fish assemblage, with roving herbivorous fishes such as parrotfishes doubling in density and tripling in total biomass (Han et al. 2016 ), thus preventing macroalgae from establishing. Moreover, in the years since the disturbances, coral cover has increased and even exceeds predisturbance levels in some areas (Holbrook et al. 2018 ). Despite intensive ecological study, it is not known if these changes in the fish assemblages have altered fishable resources, the activities of reef fishers, or how local people perceived the changes. Because spearfishing—a highly selective method—is common in Moorea, a shift in the abundances of fishable resources provides an opportunity to assess whether fishers alter what they catch as their resource environment changes. This study addressed four questions: (1) How did residents of Moorea perceive the shifts documented in ecological studies? (2) Do they report changing their fishing behavior or seafood consumption in response to the shift? (3) How did the changes in the fish assemblage affect the availability and taxonomic composition of fishable biomass? and (4) Is there evidence for changes in fishing behavior (such as taxonomic selectivity) over time? To answer these questions, we conducted 351 household surveys documenting fishers’ perceptions of the changes and their potential responses via alteration in fishing practices or fish consumption. We analyzed a time series of catch data (~ 18 000 identified and measured fishes) collected before and after the disturbances, spanning a 9-year time period, to determine changes in targeted fish species and sizes, including key groups of herbivores crucial to recovery and resilience of the coral state. Finally, we compared the catch data with extensive surveys that estimated abundances and biomass of fishes on the reef throughout the same time frame. Materials and Methods Ecological and social contexts Moorea (17°32′S, 149°50′W) is a volcanic ‘high’ island 60 km in perimeter with an offshore barrier reef that encloses a shallow lagoon (Fig. 1 a). The island has three types of reef habitats: within the lagoon, there are fringing reefs and back reefs, while outside the barrier reef crest, there is a steeply sloping fore reef. Moorea has over 17 000 inhabitants (Institut de la statistique de la Polynésie française 2012 ) residing in five communes associées : Afareaitu, Ha’apiti, Paopao, Papetoai, and Teavaro. 
It has undergone substantial economic development over the last half-century, including becoming a major international tourist destination. Communal land has been supplanted by private land ownership, and the state declared that all lagoon and marine areas are public property, meaning that customary sea tenure is nonexistent. Fig. 1 (a) Map of the island with the focal regions of Afareaitu, PaoPao, and Teavaro marked. (b) Photo of fish being sold by the roadside (note the 0.5 m sizing bar). Reefs in Moorea continue to be the focus of widespread fishing activity, although major economic and social changes have shifted household livelihoods away from direct dependence on marine resources for food or income toward wage-earning employment. Over half of households fish, with free-dive spearfishing as the preferred method (Leenhardt et al. 2016). Most people fish so they can eat and share fresh reef fishes, a fundamental marker of Polynesian life. Reef fishes constitute the bulk of the catch and are prized as symbols of Polynesian identity and cultural pride. It is notable that Moorea’s households are less dependent on marine resources for food security or income than is common in other regions in the Pacific. As citizens of France, they have access to state-subsidized healthcare, education, and social services, so poverty levels are lower than in most of Oceania. Although most households contain fishers, only a small number of fishers fish full-time solely for income. Household surveys and key informant interviews In 2014–2015, we interviewed 351 (approximately 20%) households in the communes of Afareaitu, Papetoai, and Haapiti. On each day of sampling within a commune, the researcher chose two starting locations within the village boundaries based on a stratified approach, so that starting locations were distributed roughly evenly within the village. Starting from one location in the morning, and the second in the afternoon, the researcher systematically approached nearby houses and conducted an interview in each household in which an adult was willing to be interviewed. The result was a sampling design that was spatially unbiased, although necessarily biased toward those willing to be interviewed. The 60–80-min survey interviews were conducted in French or Tahitian, with local Tahitians assisting in the surveys and translating for household heads more comfortable speaking in Tahitian. Interview topics included basic demographic information, fishing effort, livelihoods, catch preferences, consumption patterns, and perceptions of resource conditions. Standardized questions allowed for later comparison, but more open-ended questions were used to discuss important issues and perceptions. Sample size for the standardized questions varied, since not every question was relevant for all respondents. We also conducted 15 semistructured interviews with fishers from around the island, who were considered highly knowledgeable local experts. In 2018, follow-up interviews were carried out with nine key informants to whom results from this paper were presented. Questions explored respondents’ perceptions of postdisturbance changes in the fish assemblages. Fish-seller surveys The sale of most reef fishes takes place from small roadside stands along the perimeter road of the island (which has no fish markets). Fresh reef fishes are strung through the gills and hung from racks (Fig. 1b, Table S1). Each string of fish is sold as a unit, known in Tahitian as a tui.
A seller, often the fisher, assembles each tui and 10 or more may be hung for sale. Any single tui may contain a few larger fishes or many small ones of different species. Most fish stands are active early in the morning, and by mid-morning most have sold their catch. To sample the fishes being sold at these roadside stands, a researcher drove Moorea’s ring road early in the morning on weekends, typically the busiest times for fish sales. At each stand, the rack of tui was photographed with a scale bar of known size (0.5 m), and the seller was briefly interviewed. Photographed fishes were later identified to the lowest taxonomic level possible, and the length of each was estimated by comparison with the scale bar (Schneider et al. 2012). Catch surveys were conducted in five different years during 2007–2015 (2007, 2008, 2012, 2014, 2015; Table 1). Three of the five communes associées were sampled in all 5 years (Afareaitu, Paopao, and Teavaro), and so only data from these regions were analyzed to maintain consistent geographical coverage through time, with data pooled across regions in all analyses. Table 1 Number of fish observed in the reef surveys and in the catch, by year. Reef surveys We assessed reef fish populations using data from the NSF-funded Moorea Coral Reef Long Term Ecological Research (MCR LTER) project that collects time series data at 18 locations around Moorea (Brooks 2017), and the SO CORAIL-PGEM monitoring program that collects data from 13 locations around the island (Lamy et al. 2015). Here we used data collected annually from 2007 to 2015 (Table 1), and included data only from transects located on reefs offshore of the three focal communes (Afareaitu, Paopao, Teavaro), as most targeted fishes are territorial, and most fishers fish near where they live. The MCR LTER surveys are conducted by SCUBA divers between 0900 and 1600 h during late July or early August. Abundances of all mobile taxa of fishes observed are recorded on fixed 5 m × 50 m transects that extend from the surface of the reef through the water column. The abundances of all nonmobile or semicryptic taxa of fishes are also counted along the same transect lines in a 1 m wide transect. The total length of each fish observed is estimated to the nearest 0.5 cm. The SO CORAIL-PGEM monitoring program has sampled similar habitats in each of these years, counting and estimating sizes of all fishes within 5 m × 25 m transects. Fish biomass (kg) is calculated based on species-specific scaling parameters (Brooks 2011). Fishing selectivity and fishable biomass Spearfishing is a highly selective fishing method in which the size and species of targets can be observed before they are harvested. We tested for selectivity in size by comparing the fishes being sold by the roadside to the sizes of fishes observed during reef surveys (pooling data across the 5 years for which we have catch data). We defined a minimum fishable size (15 cm) across all species based on sizes observed in the catch (< 2% of fishes were below this size). We determined which taxa were targeted based on the relative abundances of each genus observed in the catch and on the reef. We defined fished taxa as genera making up more than 0.1% of the total catch, which included 23 genera, constituting 99% of all fishes and 95% of all biomass being sold. Parrotfishes from the genera Scarus and Chlorurus were combined in all analyses because species from these genera often could not be reliably distinguished in our photographs of tuis.
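The length-to-biomass conversion referenced above is conventionally the allometric relationship W = aL^b with species-specific coefficients; the sketch below also shows the "fishable" filter defined in this section. The coefficient values and data layout are placeholders, not the Brooks (2011) parameters:

```python
def biomass(length_cm, a, b):
    """Allometric length-weight conversion W = a * L**b.
    Units of W depend on how a and b were fitted (often grams)."""
    return a * length_cm ** b

def fishable_subset(fish, targeted_genera):
    """fish: pandas DataFrame with 'genus' and 'length_cm' columns (assumed).
    Keep individuals >= 15 cm belonging to the 23 targeted genera."""
    return fish[(fish["length_cm"] >= 15) & fish["genus"].isin(targeted_genera)]
```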
We note that some excluded species may be highly prized but rare in the catch because they are rare on the reef. Subsetting the ecological survey data based on our list of 23 targeted genera and the minimum fishable size, we calculated how the total fishable biomass and the fishable biomass of different targeted groups changed from 2007 to 2015. Taxonomic composition of the catch We evaluated the degree to which variation in the biomass of each taxon on the reef predicts variation in the taxonomic composition of the catch by comparing the relative biomass of the seven most common taxa in the catch with their relative biomass on the reef. We excluded soldierfishes (Myripristis spp.) from this analysis because they are nocturnal and were poorly sampled in our (diurnal) reef surveys, when they shelter within reef structures. Other species may shift habitats on a daily cycle, but any such movements are well within the spatial scale of our sampling. Because sampling effort of the catch (during roadside surveys) was not consistent over time, we cannot determine how total catch changed. Results Household surveys and interviews The household surveys revealed that a substantial majority of households reported regular consumption of fish, with 67% reporting that they eat fish at least three times per week, and more than half of those eating fish six to seven times per week. Most households (76%) reported at least one member who actively participated in the local reef fishery. There was great consistency in the species that households preferred to eat and preferred to catch (Table 2). All of these species are commonly caught and highly prized for the taste and texture of their meat. An exception to the focus on reef fishes is tuna (thon in French), which has become an increasingly important component of diets in Moorea but which is caught by a small number of pelagic fishers operating with specialized boats offshore. Table 2 Fish most frequently reported eaten or caught in household surveys (N = 326 surveys). There was considerably more variability in how households reported any changes in their behavior in response to the outbreak of COTS (taramea in Tahitian) and the cyclone (Table 3). Although 40% remembered the COTS outbreak and 100% remembered Cyclone Oli, few reported modifying the kinds of fishes they ate or bought (1.5% and 10%, respectively). Of those that reported responding to the COTS outbreak, responses included removing COTS from the reefs (30%), avoiding fishing in COTS-dominated areas (18%), or changing their fishing areas (6%). Of those that reported responding to Cyclone Oli, responses included waiting until the lagoon was clean from runoff before resuming fishing (30%), fishing in different locations because the fishes moved to different areas of the lagoon (16%), fishing less in the lagoon than prior to the cyclone (13%), or fishing less overall after the cyclone (10%). Table 3 Percentage of households who responded affirmatively to the questions related to the COTS outbreak and Cyclone Oli. In-depth interviews with expert fishers revealed that they are aware of COTS outbreaks and they recognize that COTS kill coral. Two expert fishers described how in the past, parts of the sea stars’ bodies were applied as garden pesticide. Other expert fishers mentioned that the Papetoai school and local fisher organizations (in Haapiti and Afareaitu) organized outings where local people removed COTS from the reefs.
One fisher noted that this practice was “a new thing” and that “the oldtimers never mentioned this kind of practice happening in the past.” Most fishers acknowledged a relationship between live coral cover and reef fish abundance. However, few indicated that the dramatic loss of live coral cover caused by the COTS outbreak or Cyclone Oli had an impact on the composition of fish assemblages or the relative abundances of the main targeted taxonomic groups. Fishing selectivity Roadside fish sellers mostly caught fishes on the reef (77%), largely from the lagoon (69%), and the most common gear used was the spear gun (83%), followed by fishing with nets (11%) and hook and line (5%). Fishes sold in the morning were mostly caught at night (90% between 1800 and 0600 h), so our surveys of fishes sold by the roadside (hereafter, “the catch”) may not be representative of fishing activities undertaken at other times. Fishes in the catch represent a nonrandom distribution of sizes relative to those observed on the reef (Fig. 2). Harvested fishes were significantly larger on average than fishes on the reef (23 vs. 8 cm; P < 0.0001, Wilcoxon rank-sum test). More than 98% of fishes in the catch were at least 15 cm in length, suggesting this is a minimum bound on the size of fishes that are targeted. The relative abundance of taxa observed in the catch also diverged substantially from the community found on the reef, even when only individuals of fishable size were considered. More than 99% of the fishes in the catch were from 23 genera (Table 4), with almost 60% of the catch made up of unicornfishes (Naso spp.), parrotfishes (Scarus and Chlorurus spp.), soldierfishes (Myripristis spp.), and rabbitfishes (Siganus spp.). The composition of the catch contrasted with the most abundant taxa on the reef (based on fishable-sized individuals; Table 4, Table S1). In particular, while Scarids and Naso were both abundant on the reef, Myripristis and Siganus were rarely observed in the reef surveys (the 38th and 29th most abundant taxa, respectively; Table S1). Furthermore, several of the most abundant taxa on the reef were completely absent from the catch, most notably surgeonfishes from the genus Ctenochaetus (25% of fishes on the reef). Fig. 2 Size distributions of all fish taxa observed on the reef, the subset of targeted taxa on the reef, and the taxa found in the catch. Curves are kernel density estimations (bandwidth smoothing parameter = 0.02). Table 4 Relative abundance of taxa observed in the catch and their corresponding % contribution to abundance on the reef (considering only fish of targetable size, > 0.15 m). The top 23 genera observed in the catch are listed, representing more than 99% of the catch. The genera Chlorurus and Scarus have been combined because they can be difficult to distinguish in the photos of the catch. Stars (*) indicate taxa reported commonly eaten in more than 5% of household surveys. Fishable biomass The amount of fishable biomass (fishes > 15 cm in length from 23 targeted genera) on the reef was relatively stable from 2007 to 2015. Although there was some variation from year to year (Fig. 3), including a spike in 2010, there was no sustained shift in fishable biomass coinciding with the disturbances that occurred in 2009–2010. By contrast, there was substantial change in the abundances of some taxonomic groups on the reef over the time period. Most dramatically, Naso biomass fell from 21 to ~4 kg ha⁻¹.
This decline was offset to some degree by an increase in the biomass of parrotfishes of the genus Scarus. While the biomass of other taxa varied substantially from year to year, there was no apparent secular trend in their abundances. Fig. 3 Fish biomass on the reef through the time period spanning the 2009–2010 disturbances. “Small Fish” indicates biomass of fish smaller than 0.15 m, while “Fishable Size” represents larger (> 0.15 m) fish from nontargeted taxa. The remaining areas represent biomass of fish > 0.15 m which are commonly found in the catch (Table 4), with 8 taxa broken out, and the remainder combined into “Other Fishable.” The timing of the peak disturbance is indicated with a dashed line. Taxonomic composition of the catch The changes in the taxonomic composition on the reef were roughly mirrored by trends in the catch (Fig. 4). For example, Naso comprised more than a third of the catch prior to the disturbances, and less than 10% after. By contrast, the proportion of the catch composed of parrotfishes from the genera Chlorurus and Scarus increased over time from 56 to 66%. Naso, Chlorurus, and Scarus collectively composed the bulk of the fishable biomass on the reef (48–66%) and a roughly similar total proportion of the catch (43–65%). Fig. 4 The relative biomass of fishable taxa (including only individuals > 15 cm) on the reef (a) and in the catch (b). The timing of the peak disturbance is indicated with a dashed line. For the taxa that were well sampled in our reef surveys, there was a surprisingly high correlation between the biomass of each taxon on the reef and its annual contribution to the catch, with high correlations observed for the most common taxa (Fig. 5). Indeed, the correlation for unicornfishes is above 0.99, which suggests both that our reef surveys captured variation in their abundances over time and that the variation in the abundance within the ecological community may explain the observed pattern of variation in catch. Fig. 5 The relationship between the relative biomass of each taxonomic group on the reef and the relative biomass of that group in the catch plotted by year. The time-averaged biomasses in the catch and on the reef for each taxon are also plotted (h). In this latter panel, the symbols for each species match those in a–g and the 1:1 line is plotted. Discussion In this study, we coupled data from intensive sampling of both the ecological community and human resource users to provide new insights into how fishes and Pacific Island fishing communities interact during periods of substantial ecological change, and how the fishing communities perceive the changes. Each method provided a different view of these feedbacks. Household surveys confirmed that residents of Moorea were aware of the major disturbances that occurred on the reef, but revealed that little explicit change occurred in fishing behavior or perceptions of resources harvested. This contrasts with the marked shifts in the taxonomic composition of the catch that we observed, particularly the significant decrease of Naso spp., one of the most highly prized fishes due to its palatability. Those taxonomic shifts mirrored changes we observed in fish communities on the reef, implying that the composition of the catch is highly dependent on reef state despite the high selectivity of the fishery and local perceptions that fishing and fished resources had not changed.
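Two of the quantitative comparisons above are easy to sketch: the Wilcoxon rank-sum test on harvested versus reef fish sizes, and the per-taxon reef-catch correlation behind Fig. 5. The data layouts are assumptions, not the authors' actual data structures:

```python
from scipy.stats import ranksums

def size_selectivity_test(catch_lengths, reef_lengths):
    """Two-sided Wilcoxon rank-sum test: are harvested fish larger on average?"""
    return ranksums(catch_lengths, reef_lengths)

def reef_catch_correlations(reef, catch):
    """reef, catch: pandas DataFrames indexed by year, one column per taxon,
    holding relative biomass (proportions). Returns Pearson r per taxon."""
    return {taxon: reef[taxon].corr(catch[taxon]) for taxon in reef.columns}
```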
Fishing selectivity Our results revealed high selectivity in the Moorea reef fishery, both in terms of body size and taxonomy, consistent with observations of other spearfishing-focused fisheries in the Pacific (Dalzell et al. 1996 ). Fishers showed a preference for fishes that are larger on average than those encountered on the reef. Even when size selectivity was accounted for, we found strong taxonomic selectivity for a handful of taxa, with some being disproportionately abundant in the catch relative to their abundances on the reef (e.g., Naso spp. and Myripristis spp.) while others were greatly under-represented in the catch (e.g., Ctenochaetus spp.). This high degree of size and taxonomic selectivity is not surprising given the prevalence of spearfishing on the island. Spearfishers visually identify and evaluate each fish before it is harvested (Frisch et al. 2008 ). The resultant selectivity affords them greater latitude for adapting to ecological shifts than other capture techniques, such as hook and line or gill netting, in which the fishes are invisible to the fisher before capture. The suite of preferred species on Moorea is not limited to larger-bodied species. Soldierfishes ( Myripristis ), for example, are relatively small-bodied but represent the third most fished genus (in terms of numbers and biomass in the catch), as they are prized for the taste and the texture of their meat rather than their large filets. In commercially oriented fisheries, size selectivity can be linked to higher market demand or value for fishes of particular sizes, e.g., large enough to filet or sized to fit on a dinner plate (Reddy et al. 2013 ). In Moorea, spearfishers commonly describe their fishing decisions through idioms of cooking and eating, and will seek out certain species based on how they want to cook their meal that day, underscoring the noneconomic nature of the fishery. Elsewhere, Pacific Islanders commonly target piscivores, such as emperors and groupers, but in fisheries where spearfishing is the primary mode of capture, herbivorous fishes such as unicornfishes and parrotfishes often dominate the catch (Jennings and Polunin 1995 ; Gillett and Moy 2006 ). Contemporary reef fish preferences in Moorea may be the result of the gear type used or an outcome of overfishing and fishing down the food web (Pauly et al. 1998 ) from piscivores to herbivores. More historical work could shed light on this possibility by detailing the trajectory of taxonomic selectivity over the last several centuries. We also note that Moorea fishers show a strong selectivity against harvesting Ctenochaetus and Acanthurus ( maito in Tahitian) even though they are some of the most abundant species on the reef. These fish are known to be ciguatoxic, and the sale of Ctenochaetus was banned by the territorial government in the 1960s (Walter 1968 ). Taxonomic composition on the reef and in the catch over time Our roadside surveys indicate that the taxonomic composition of the catch shifted substantially after the disturbance (Fig. 4 ). Changes in the catch largely correlated with shifts in the taxonomic composition of the reef community, particularly for species that made up a substantial proportion of the catch (Fig. 5 ). However, there is wide variation in the strength of this relationship. The unexplained variation may stem from analyzing catch at the genus level, likely combining species of different desirability within the same category. 
For example, dynamics on the reef and in the catch were poorly correlated for Acanthurus . There are five species commonly observed in the catch within this genus; if some of these are targeted and some are not (possibly based on ciguatera risk), then trends in the biomass of the genus on the reef may not represent trends in the preferred species within that genus, obscuring a tighter relationship at the species level. By contrast, one species ( Naso lituratus ) makes up more than 90% of the fishable-size individuals of that genus on the reef, so variation in the abundance of that species translates more directly to our genus-level analyses. The composition of the catch is a joint product of the availability of resources and the demand for each from the fishing communities. If the catch primarily reflects demand for different species, we might expect to see little change in the composition of the catch as the ecosystem changes, particularly in such a highly selective fishery. Instead, the high correlations between biomass on the reef and in the catch for unicornfishes ( Naso spp.) and parrotfishes ( Scarus spp. /Chlorurus spp.) indicate that shifting relative abundances result in different compositions of the catch, and suggest that there is considerable flexibility in harvest and consumption behaviors. Perceptions of change Our household surveys and key informant interviews suggest that Moorea’s fishers generally were aware of the COTS outbreak and Cyclone Oli and that they understood the ecological impacts of these disturbances. This in-depth understanding is not surprising given most engage in fishing on a regular basis and thus have frequent experiential contact with the marine environment. It is widely acknowledged that in Pacific Island contexts where communities depend on marine resources, islanders maintain rich, site-specific knowledge of the marine environment as well as sophisticated understanding of ecological processes (Johannes 1981 ; Lauer 2017 ). Despite their awareness of the disturbances, few households saw these as a change that warranted modification of their fishing strategies, or altering what species of fish they ate. This narrative is in striking contrast to the shifts we documented with our roadside surveys conducted before and after the disturbances. Most surprisingly, the significant decrease of Naso spp. in the reef counts, while reflected in the catch, was not expressed in informants’ responses. There are several possible explanations for this apparent discrepancy. For one, the relative abundances of species shifted after the disturbances, but the suite of species caught did not, with the same top five species caught before and after the disturbances. It may be that Moorea fishers would only report a more radical shift (e.g., the complete disappearance of a targeted fish) in the taxonomic composition of their diet and catch. Furthermore, fishers speak less of shifts in abundance per se but rather about changes in fishes’ behaviors and their habitat choices. When asked about the decline in abundance of Naso spp. in the catch surveys, several fishers stated that unicornfishes have learned, as a result of heavy fishing pressure, to swim into deeper waters. Yet these behavioral changes of Naso spp. do not necessarily result in fewer fish caught for the best spearfishers. 
As one fisher stated, “a good spearfisher will find and catch the fish he desires.” The discrepancy between what constitutes noteworthy changes for Moorea’s fishers and western scientists could also be related to the different ways each group conceptualizes marine environments (Johannes 1981 ; Hviding 1996 ). Ethnographic material indicates that Pacific Islanders cognize marine and terrestrial environments holistically, with more attention focused on the components and interactions of an integrated whole, than on discrete ecological attributes. The most vivid Islander expressions of this ecosystem-like understanding are the wedge-shaped, ridge-to-reef resource management units that have been described across Oceania (Ruddle et al. 1992 ; Lauer 2016 ). These land–sea concepts emphasize the intrinsic entangling of physical and biological components with the social and cultural world. In addition to a holistic worldview of coral reef social–ecological systems, island societies like Moorea also emphasize the unpredictability and unknowability of these systems. In fact, many nonwestern societies, including those in Oceania, grasp the nature of ecosystems in ways similar to nonequilibrium ecosystem science, a framework that emphasizes surprise and nonlinearity, threshold effects, and systems flips instead of predictability, stable states, and homeostasis. The magnitudes of ecological and fishing changes we observed likely fall within the bounds of Pacific Islanders’ cultural expectations for normal fluctuations in their diets and catch. In other words, the disturbances deemed dramatic from a Western scientific perspective, and perceived as significant events to fishers, are also inscribed for Pacific Islanders within a ‘normal’ cyclical pattern of disturbances and recoveries. Indeed, the ecological observations of the COTS outbreak and Cyclone Oli span relatively short timeframes (barely a decade) relative to individuals’ own lifespans. In addition, the fore reef of Moorea has proven very resilient to disturbances that reduce coral cover, with several major disturbance events and subsequent recovery of the reef since the 1970s (Adam et al. 2011 ; Trapon et al. 2011 ; Holbrook et al. 2018 ). In the case of the most recent disturbances considered here, many areas of the fore reef regained their predisturbance levels of live coral within 5 years (Holbrook et al. 2018 ). The resilience of the reef ecosystem, when considered at the scale of the individuals’ lifespans, may contribute to the perceptions of our informants (whose mean age = 47 years) of the limited impacts the disturbances had on their fishing behavior and dietary choices. Future archeological research, similar to that carried out on Hawaii and Rapa Nui (Kirch and Hunt 1997 ), exploring the long-term socioecological dynamics on Moorea, could shed light on the scale and intensity of social–ecological changes on Moorea in the context of disturbance frequency. Conclusions Although this study focuses on fisher–fish interactions in Moorea, our results are of general relevance for coral reef ecosystems. Coral reefs globally are experiencing increasing disturbances, in many cases causing major changes in benthic and fish communities (Holbrook et al. 2008 ). Understanding how fishers conceive and respond to these ecological changes is crucial to predicting how social–ecological feedbacks might enhance or erode ecosystem resilience (Leenhardt et al. 2016 , 2017 ). 
Such feedbacks are particularly likely in places like Moorea where the most commonly targeted fishes are herbivores, which control macroalgae and confer resilience on the coral-dominated reef state (Mumby et al. 2007; Holbrook et al. 2016). Fishing on such species has often been linked to switches between coral and algal community states (Hughes et al. 2007; Rasher and Hay 2010), and thus the details of fishing behavior may be critical for understanding the resilience of these alternative states. More broadly, our analysis has implications for research on knowledge production and for formulating management initiatives in social–ecological systems. The disconnect between Moorea's fishers' reporting of changes, those apparent in the catch data, and the characterizations of reef change offered by ecologists highlights a critical issue: western scientists and other stakeholders may produce knowledge grounded in different epistemological and ontological assumptions about the world and what constitutes ‘change’ (Barnes et al. 2013). In complex social–ecological systems like the one studied here, we should not expect singular, incontrovertible knowledge about the system, and there will be significant differences between and gaps within both local and ecological knowledge that may only widen with the uncertainty of the Anthropocene era. Thus, it is likely to be increasingly useful to understand how all stakeholders (e.g., scientists, conservation practitioners, fishers, tourist operators) produce in situ site-specific knowledge and form social–ecological relations. Scientist–resource user collaborations for research and resource monitoring can increase trust between stakeholders, improve adaptive management strategies, and help keep pace with unforeseen social–ecological transformations of the Anthropocene.
Results of a new study looking at coral reef disturbances, fish abundance and coastal fishers' catches suggest that ecologists and community anglers may perceive environmental disruptions in very different ways. The apparent disconnect between data-driven scientists and experience-driven fishing communities has implications for the management and resilience of coral reefs and other sensitive marine ecosystems. Lead study author Andrew Rassweiler of Florida State University (FSU), who worked with collaborators at the University of California, Santa Barbara (UCSB) and San Diego State University to conduct fishing surveys and fish population assessments on the French Polynesian island of Moorea, said the research "shows that different groups have different perceptions of change and ecosystem health." The findings are published this week in the journal Ambio. Ecological Distress Coral reefs around the world experience pressure from human activities. As ecosystems react to declines in biodiversity, tropical coastal fishers—whose livelihoods often depend on coral reefs—become less economically and culturally secure. In Moorea, some parts of the island's lagoons support thriving coral communities, while other areas are giving way to overgrowth by seaweed. [Image: Scientists looked at fish abundance on the reef in comparison to fish species sold on the island. Credit: Sarah Lester] An outbreak of coral-devouring crown-of-thorns sea stars in 2009 and a destructive cyclone in 2010 reduced live coral cover by some 95 percent in many locations. These events threw the ecosystem into disarray, scientists say. In addition to widespread coral losses, effects included abrupt changes in fish populations, with algae-eating, herbivorous fish swarming the area to graze on seaweed growing on the skeletons of dead coral. This influx of seaweed-eating fish wasn't necessarily a surprise, the researchers maintain. Studies at the National Science Foundation's (NSF) Moorea Coral Reef Long-Term Ecological Research (LTER) site on the island had identified the role of herbivorous fish in keeping seaweed forests in check. But Moorea's local fishing communities, where more than three quarters of households have a member who actively fishes the reef, knew less about how this rapid shift in fish abundance occurred. [Image: The marine biologists used sizing cards to determine the sizes of fish that were caught and sold. Credit: Terava Atger] "Everyone around the island is fishing, but we know very little about how fishers decide where to fish and what fish to target," Rassweiler said. "This research was a first step in looking at how fishing behavior changed following a big change in the fish community itself." Russ Schmitt, a marine ecologist at UCSB and principal investigator of the NSF Moorea Coral Reef LTER site, added, "It turned out that the fishers in Moorea barely noticed the massive ecological shift and reported they didn't change their fishing practices, yet the composition of fish in their catches changed dramatically." Local Perceptions By comparing fish caught and sold to fish observed on the reef, the team determined that the shifting catches did in fact reflect shifting abundances of reef-dwelling fish. Notably, Naso, or unicornfish, which islanders ate multiple times per week, decreased, and algae-eating parrotfish increased, appearing in higher concentrations after the mass coral die-off. Understanding which fish were offered at markets was important to the study.
[Image credit: Mark Strother] Fishers prized both unicornfish and parrotfish before the ecological disturbances. But the local fishers didn't perceive the changes in their concentrations as significant. Surveys conducted by the team indicated that while residents of the island were aware of shifts, the disturbances did not prompt a change in fishing behavior and, puzzlingly for researchers, did not result in reported changes in the composition of fish caught, sold and eaten. "Fish consumers can have very different perceptions than scientists and resource managers, even in a place like Moorea where locals are closely connected to reefs," said Dan Thornhill, a program director in the National Science Foundation's (NSF) Division of Ocean Sciences, which funded the study along with NSF's Dynamics of Coupled Natural and Human Systems program. The latter is part of NSF's Environmental Research and Education (ERE) portfolio. Noting the different perceptions "is an important consideration going forward in the sustainable management of reefs and the fisheries they support," Thornhill said. Added study co-author and UCSB marine ecologist Sally Holbrook, "Moorea's fishers view the environment as naturally variable, and changes in abundances of fish on the reef are normal occurrences for them." [Image: The researchers charted courses around the island to survey which fish were at roadside stands. Credit: Andrew Rassweiler] Scientific Perspective The changes concerned the scientists. Seemingly small changes in population abundances could be portents of deeper ecological dysfunction, they said. "We demonstrated that these shifts are ecologically important," Rassweiler said. "This is part of a bigger project in which we're working with fishers to think about reef health and management. It's been enlightening because they have unique insights into the status of different species." San Diego State University anthropologist Matthew Lauer added, "It's fascinating that marine scientists and Polynesian fishers, both of whom spend a huge amount of time on these reefs, have such radically different views about ecosystem change. "Getting a handle on their views about marine health will help us learn more about these reefs, and contribute to more effective and collaborative resource management." [Image: The team worked to understand how fishing behavior changed in response to changes on the reef. Credit: Mark Strother]
dx.doi.org/10.1007/s13280-019-01154-5
Medicine
Mast cells shown to have an important impact on the development of chronic myeloid leukemia
Melanie Langhammer et al, Mast cell deficiency prevents BCR::ABL1 induced splenomegaly and cytokine elevation in a CML mouse model, Leukemia (2023). DOI: 10.1038/s41375-023-01916-x Journal information: Leukemia
https://dx.doi.org/10.1038/s41375-023-01916-x
https://medicalxpress.com/news/2023-05-mast-cells-shown-important-impact.html
Abstract The persistence of leukemic stem cells (LSCs) represents a problem in the therapy of chronic myeloid leukemia (CML). Hence, it is of utmost importance to explore the underlying mechanisms to develop new therapeutic approaches to cure CML. Using the genetically engineered ScltTA/TRE-BCR::ABL1 mouse model for chronic phase CML, we previously demonstrated that the loss of the docking protein GAB2 counteracts the infiltration of mast cells (MCs) in the bone marrow (BM) of BCR::ABL1-positive mice. Here, we show for the first time that BCR::ABL1 drives the cytokine-independent expansion of BM-derived MCs and sensitizes them for FcεRI-triggered degranulation. Importantly, we demonstrate that genetic mast cell deficiency conferred by the Cpa3 Cre allele prevents BCR::ABL1-induced splenomegaly and impairs the production of pro-inflammatory cytokines. Furthermore, we show in CML patients that splenomegaly is associated with high BM MC counts and that upregulation of pro-inflammatory cytokines in patient serum samples correlates with tryptase levels. Finally, MC-associated transcripts were elevated in human CML BM samples. Thus, our study identifies MCs as essential contributors to disease progression and suggests considering them as an additional target in CML therapy. [Graphical abstract: Mast cells play a key role in the pro-inflammatory tumor microenvironment of the bone marrow. The cartoon summarizes our results from the mouse model: BCR::ABL1-transformed MCs, as part of the malignant clone, are essential for the elevation of pro-inflammatory cytokines, known to be important in disease initiation and progression.] Introduction Chronic myeloid leukemia (CML) represents about 20% of all adult leukemia cases and is caused by a chromosomal translocation between chromosomes 9 and 22, leading to the expression of the fusion kinase BCR::ABL1 [1]. BCR::ABL1 organizes a multimeric signaling network with various components such as the docking protein GAB2 (GRB-associated-binding protein 2). GAB2 serves as an assembly platform downstream of cytokine and growth factor receptors [2]. By binding via the adaptor protein GRB2, GAB2 amplifies signaling into the SHP2/RAS/ERK, PI3K/AKT, and STAT5 pathways, leading to survival, proliferation, and migration [2, 3, 4]. Due to its role in these oncogenic pathways, GAB2 is implicated in both solid tumors and leukemia [5]. Using GAB2-deficient mice [6], we previously showed that GAB2 serves as an important effector of the oncogenic FLT3-ITD receptor tyrosine kinase in acute myeloid leukemia (AML) [7, 8] and of BCR::ABL1 in CML [9, 10, 11, 12]. We demonstrated that GAB2 confers resistance to clinically approved BCR::ABL1 inhibitors, including the third-generation inhibitor ponatinib [9, 12]. We also showed that GAB2 is increasingly expressed in myeloid cells from CML patients with TKI-refractory disease [12] or blast crisis [13]. Using GAB2 knock-out mice [6], we analyzed the in vivo role of GAB2 in a chronic-phase CML mouse model in which a tetracycline (tet)-regulated BCR::ABL1 transgene is expressed in hematopoietic stem cells in their native microenvironment [10, 14]. We demonstrated that GAB2 deficiency impairs disease development in a steady-state in vivo setting [10]. Surprisingly, we also detected increased numbers of mast cells (MCs) in the BM and kidneys of BCR::ABL1-expressing mice.
As reported previously for this mouse model [14], we observed uni- or bilateral hydronephrosis in BCR::ABL1 positive mice driven by urinary obstruction due to myeloid infiltration within the renal pelvis and ureters. Interestingly, we identified MCs as the predominant infiltrating cell type in the kidney, suggesting their contribution to hydronephrosis. Strikingly, Gab2 −/− mice showed no MC infiltration in the BM or kidney and no hydronephrosis. This might be explained by a synergistic effect of GAB2 as a common downstream signaling effector of BCR::ABL1 and cytokine receptor signaling pathways [2]. In line with this, GAB2 has been shown to be critical for MC development and KIT signaling [15]. MCs play a role in different diseases such as allergy, as contributors to a pro-inflammatory tumor microenvironment, and, as carriers of oncogenic mutations, they cause mastocytosis or MC leukemia [16, 17]. Very little, however, is known about MCs in the context of CML. It was shown that MCs are increased in the BM of CML patients compared to healthy individuals and that the TKI imatinib depletes normal and neoplastic MCs in these patients [18]. However, as imatinib targets both BCR::ABL1 and KIT, it remains unclear whether BCR::ABL1 positive MCs still rely on KIT and whether the effect of this TKI reflects the inhibition of one or both targets. In addition, Askmyr et al. observed an aberrant CD25 + phenotype reminiscent of systemic mastocytosis in xenografts of BCR::ABL1 transduced human cord blood cells [19]. Interestingly, basophils, which share many features and a bipotent progenitor with MCs [20], are often elevated in CML patients and used as a prognostic marker [21]. These mostly descriptive studies on MCs in BCR::ABL1 mediated transformation and our recent data from Gab2 −/− mice provided the rationale for analyzing the role of MCs in CML in more detail. In particular, we were interested in whether this MC accumulation could be driven by BCR::ABL1 itself or whether these cells reacted as bystanders to an inflammatory reaction induced by leukemic infiltrates. Using the ScltTA/TRE-BCR::ABL1 CML mouse model, we now show for the first time that BCR::ABL1 drives the cytokine-independent expansion of BM-derived mast cells (BMMCs) and sensitizes them to degranulation and IL-6 and TNF release. Importantly, by crossing in the MC-deficient Cpa3 Cre mouse line, we discover a crucial role of MCs in CML development. We demonstrate that MC deficiency prevents BCR::ABL1 induced splenomegaly and elevation of pro-inflammatory cytokines. Furthermore, we provide supportive data from CML patients showing that splenomegaly is associated with high BM MC counts and that upregulation of pro-inflammatory cytokines in patient serum samples correlates with tryptase levels. In addition, we detected an increase of MC-associated transcripts in BM samples of CML patients. Methods Mice ScltTA/TRE-BCR::ABL1 mice [14] were either bred with Gab2 −/− mice [6] (mixed C57BL/6 x 129SV background, as described previously [10]) or with Cpa3 Cre/+ mice [22] (C57BL/6J background). Mice were raised under specific-pathogen-free conditions with standard food and water ad libitum. Animal experimentation was approved by local authorities (RP Freiburg: AZ G17/69, G19/53). For genotyping primers, see supplementary methods. Western blotting Western blotting was performed as described previously [23]. Antibodies are listed in supplementary methods.
Flow cytometry For intracellular staining, cells were fixed and permeabilized with formaldehyde and ice-cold methanol. Cell viability was assessed using 7-AAD. For immunophenotyping, BMMCs, BM and spleen cells were stained. Antibodies are listed in supplementary methods. β-hexosaminidase assay and ELISA BMMCs were starved, loaded with dinitrophenylated human serum albumin (DNP-HSA)-specific IgE overnight and stimulated with DNP-HSA. Degranulation was quantified by measuring β-hexosaminidase activity and cytokine release by ELISA. A detailed procedure can be found in the supplementary methods. Multiplex cytokine analysis Murine BM and spleen cell lysates or serum samples from CML patients were subjected to a multiplex cytokine analysis (Bio-Plex Mouse Cytokine 23-plex or Human Cytokine 48-plex) using the Bio-Plex 200 System (Bio-Rad). Transcriptome analysis Gene expression array data (Affymetrix Human Gene 1.0 ST Array) was obtained from the Gene Expression Omnibus database (accession number GSE47927) [24] for different subpopulations of cells from CML patients in chronic phase, blast crisis or from healthy individuals. A detailed procedure can be found in the supplementary methods. Patient samples CML patient samples from the University Hospitals Aachen and Freiburg (Supplementary Tables S2, S3) were analyzed after written informed consent according to the Declaration of Helsinki approved by the local institutional ethics committees (EK Aachen: 206/09, 391/20; EK Freiburg: 20-1253). Data analysis and statistics Statistical analysis was performed using GraphPad Prism 9; one- or two-way ANOVAs and t-tests were performed as described in the figure legends. Data are presented as mean ± SEM and p values < 0.05 were considered statistically significant (* P < 0.05; ** P < 0.01; *** P < 0.001; **** P < 0.0001). Results BCR::ABL1 drives infiltration and survival of malignant mast cells First, we performed BM transplantations using BCR::ABL1 positive donor mice with different Gab2 genotypes and myeloablatively irradiated C57BL/6N mice as recipients. Strikingly, we observed high MC counts in the BM and kidney as well as hydronephrosis in some recipients, demonstrating the cell-autonomous properties of the BCR::ABL1 positive donor cells (Supplementary Fig. S1A–C). Next, we were interested in whether these cells were derived from neoplastic BCR::ABL1 transformed MC precursors or whether MCs proliferated due to secondary effects of the leukemic disease, such as enhanced growth factor expression. Therefore, we isolated BM cells from BCR::ABL1 transgenic mice and subjected them to an MC differentiation protocol by adding IL-3 to the culture medium (Fig. 1A). Differentiation over time was monitored by flow cytometry using the MC markers KIT and FcεRIα (Fig. 1B). From eight weeks in culture onwards, over 90% of the cells stained positive for both markers. Interestingly, BMMCs from GAB2 deficient, BCR::ABL1 negative mice displayed a purity of only 70–90%. Strikingly, after IL-3 deprivation, only BMMCs from BCR::ABL1 positive mice survived (Fig. 1C). Furthermore, we established an intracellular flow cytometry staining of pBCR in CML cell lines (Supplementary Fig. S1D, E) and applied this protocol to our cohort of TKI- or tet-treated BMMCs (Supplementary Fig. S1F). We observed a decrease in pBCR upon treatment with either dasatinib or the allosteric and hence highly specific BCR::ABL1 inhibitor GNF-5 [25].
This effect was mimicked by tet application, which genetically suppresses transgenic BCR::ABL1 expression (Fig. 1D; Supplementary Fig. S1G). Similar results could be obtained by using pCRKL as an alternative marker for BCR::ABL1 activity (Fig. 1E). Interestingly, the inhibitor treatments induced cell death under cytokine-free conditions (Fig. 1F; Supplementary Fig. S1H). In addition, BCR::ABL1 expression was confirmed by RT-PCR and Western blotting using antibodies against ABL, BCR and phosphorylated BCR (pBCR) (Fig. 1G, H). Moreover, lysates from BMMCs were subjected to an array covering 110 cytokines (Supplementary Fig. S1I). Interestingly, BCR::ABL1 positive BMMCs displayed higher expression levels of most of the cytokines compared to their BCR::ABL1 negative counterparts. Only RANTES/CCL5, IL-3 and IL-23 were expressed at lower levels in BCR::ABL1 positive BMMCs (Supplementary Fig. S1J). Fig. 1: BCR::ABL1 drives differentiation and survival of mast cells. A Schematic overview. B BM was isolated and cultured in IL-3-containing media. The differentiation into MCs was monitored by flow cytometry using the MC markers FcεRIα and CD117 (KIT). Shown is one representative isolation (mouse #1, BCR::ABL1 positive). C BMMCs from BCR::ABL1 positive and negative mice with the indicated Gab2 genotypes were cultured in the presence or absence of IL-3 for two weeks and monitored by flow cytometry using the MC markers FcεRIα and CD117 (KIT). D – F BCR::ABL1 negative and positive BMMCs were cultivated in standard or IL-3-containing medium and exposed to the indicated inhibitors (dasatinib=DST, 1 µM; GNF-5, 5 µM) or tetracycline (Tet, 1 µg/ml) for four days and analyzed by flow cytometry. D Intracellular staining for pBCR to analyze BCR::ABL1 activity. Each dot represents the BMMCs of an individual mouse and the mean of three independently performed experiments. E Intracellular staining for pCRKL to analyze BCR::ABL1 activity. Shown is the mean of three independently performed experiments. F Cell viability staining using 7-AAD. Each dot represents the BMMCs of an individual mouse and the mean of three independently performed experiments. Statistics were performed using a two-way ANOVA (Fisher’s LSD test) and relevant statistically significant effects are indicated by asterisks. G Isolated mRNA from BMMCs (mouse #1 and #3) was reverse-transcribed into cDNA and subjected to 35 cycles of RT-PCR using BCR::ABL1 and GAPDH primers. H BCR::ABL1 negative and positive BMMCs were exposed to dasatinib (DST, 1 µM) for one hour and analyzed by Western blotting using the indicated antibodies. Full size image BCR::ABL1 enhances degranulation, cytokine release and signaling in BMMCs Next, we assessed MC functionality and signaling by degranulation assays and cytokine release. To this end, we loaded BMMCs with DNP-HSA-specific IgE, stimulated them with antigen (DNP-HSA) and measured β-hexosaminidase activity to quantify degranulation. First, we titrated the amount of IgE and DNP-HSA (Fig. 2A; Supplementary Fig. S2A), followed by an analysis of more samples including BMMCs from GAB2 deficient mice (Fig. 2B; Supplementary Fig. S2B). Interestingly, BCR::ABL1 positive BMMCs were more sensitive towards antigen stimulation and displayed a stronger degree of degranulation. Strikingly, GAB2 deficient, BCR::ABL1 positive BMMCs showed only a marginal elevation in their degranulation levels compared to their BCR::ABL1 negative counterparts.
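The degranulation readout used throughout Fig. 2 is β-hexosaminidase activity; the exact protocol is deferred to the supplementary methods. A common convention for such assays, assumed here rather than taken from the paper, is to report the activity released into the supernatant as a percentage of total cellular activity. A minimal sketch with fabricated absorbance values:

```python
# Hypothetical sketch: percent degranulation from beta-hexosaminidase
# absorbance readings. The supernatant / (supernatant + lysate) formula
# is a common convention assumed here; all numbers are made up.

def percent_degranulation(supernatant_od: float, lysate_od: float) -> float:
    """Beta-hexosaminidase activity released, as % of total cell content."""
    total = supernatant_od + lysate_od
    if total <= 0:
        raise ValueError("no detectable enzyme activity")
    return 100.0 * supernatant_od / total

# Unstimulated vs. DNP-HSA-stimulated BMMCs (fabricated OD values)
for label, sup, lys in [("control", 0.08, 0.92), ("DNP-HSA", 0.42, 0.58)]:
    print(f"{label}: {percent_degranulation(sup, lys):.1f}% release")
```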
In line with these results, we observed higher levels of secreted IL-6 and TNF in BCR::ABL1 positive BMMCs after DNP-HSA stimulation (Fig. 2C, D; Supplementary Fig. S2C, D). Again, GAB2 deficiency reduced the elevation of IL-6 secretion in BCR::ABL1 positive BMMCs. In addition, treatment of BMMCs with GNF-5 or the MEK inhibitor trametinib counteracted the BCR::ABL1 induced upregulation of IL-6 and TNF (Fig. 2E, F). As GAB2 deficiency prevented BCR::ABL1 positive BMMCs from secreting elevated IL-6 levels, we were interested in whether GAB2 also promotes IL-6 secretion in a human CML model. To this end, we analyzed IL-6 secretion in the cell line K562, in which GAB2 expression was suppressed by inducible shRNAs (Supplementary Fig. S2E, F). Strikingly, GAB2 depletion also reduced IL-6 secretion in this model. As GAB2 amplifies the signaling from BCR::ABL1 to PI3K and via SHP2 to the ERK pathway [2], we performed inhibitor experiments targeting BCR::ABL1 and the PI3K and ERK pathways (Supplementary Fig. S2G, H). As expected, the inhibition of BCR::ABL1 by imatinib or dasatinib suppressed IL-6 secretion. In line with our data from BMMCs, the inhibition of the ERK pathway by targeting SHP2 with SHP099 or MEK with trametinib also reduced IL-6 secretion, whereas the treatment with the dual PI3K/mTOR inhibitor BEZ-235 strongly increased the secretion of IL-6. Next, we analyzed FcεRI signaling of BCR::ABL1 positive BMMCs after stimulation with DNP-HSA by Western blotting (Fig. 2G). Independent of DNP stimulation, we already observed an increase in BCR, STAT5 and overall tyrosine phosphorylation in BCR::ABL1 positive compared to BCR::ABL1 negative BMMCs, while upregulation of pMEK and pAKT levels still required FcεRI activation. Fig. 2: BCR::ABL1 leads to enhanced degranulation, cytokine release and signaling in BMMCs. A BCR::ABL1 negative (mouse #3) and positive (mouse #1) BMMCs were loaded with 50 ng/ml anti-DNP IgE overnight and stimulated with the indicated concentrations of DNP-HSA. Degranulation was assessed by β-hexosaminidase activity. B BCR::ABL1 negative (−) and positive (+) and Gab2 +/+ or Gab2 −/− BMMCs were left untreated or loaded with 50 ng/ml anti-DNP IgE overnight and stimulated with 5 ng/ml DNP-HSA. Degranulation was assessed by β-hexosaminidase activity. C BCR::ABL1 negative (−) and positive (+) and Gab2 +/+ or Gab2 −/− BMMCs were loaded with 150 ng/ml anti-DNP IgE overnight and either left unstimulated (Control) or stimulated for 3.5 h with 5 ng/ml DNP-HSA. The amount of secreted IL-6 was measured using an ELISA. D BCR::ABL1 negative (neg.) and positive (pos.) BMMCs were loaded with 150 ng/ml DNP-HSA-specific IgE overnight and either left unstimulated (Control) or stimulated for 3 h with 2 and 5 ng/ml DNP-HSA. The amount of secreted TNF was measured using an ELISA. E, F BCR::ABL1 negative and positive BMMCs were loaded with 150 ng/ml DNP-HSA-specific IgE overnight, treated with inhibitors (GNF-5, 5 µM, 60 min; trametinib=Trame, 50 nM, 30 min) and stimulated for 2 h with 2 ng/ml DNP-HSA. The amount of secreted IL-6 (E) and TNF (F) was measured using an ELISA. A – F Statistics were performed using a two-way ANOVA (Fisher’s LSD) and relevant statistically significant effects are indicated by asterisks. B – F Each dot represents the BMMCs of an individual mouse and the mean of three independently performed experiments conducted in triplicate.
G BCR::ABL1 negative and positive BMMCs were loaded with 150 ng/ml DNP-HSA-specific IgE overnight, stimulated with the indicated concentrations of DNP-HSA and analyzed by Western blotting using the indicated antibodies. Full size image Mast cell deficiency impairs CML development in ScltTA/TRE-BCR::ABL1 mice To further investigate the role of BCR::ABL1 in MCs in CML pathogenesis, we applied the MC-deficient Cpa3 Cre/+ mouse line [22] in two genetic approaches. First, we retrovirally transduced BM from these mice using vectors expressing GFP, either singly or in combination with BCR::ABL1. The BM was then transplanted into C57BL/6J recipients that were analyzed 25 days later (Supplementary Fig. S3A). We observed expansion of neutrophilic cells as shown by an increase of immature CD11b + / GR-1 + cells in BM and spleen from mice that received BCR::ABL1 positive BM compared to BCR::ABL1 negative controls. This was accompanied by a decrease of B220 + cells in the CML mice (Supplementary Fig. S3C, D). Interestingly, the increase of immature neutrophilic cells (CD11b + / GR-1 low) was significantly lower in BM from mice transplanted with BCR::ABL1 positive Cpa3 Cre/+ cells. In the spleen, a nonsignificant trend pointed in the same direction. Spleen weight was elevated in the BCR::ABL1 positive groups, but we did not observe a difference between the WT and Cpa3 Cre/+ group (Supplementary Fig. S3E). The percentage of LSK cells was not altered between the groups (Supplementary Fig. S3F, G). Next, we analyzed the mRNA expression of IL-1β and TNF by qRT-PCR in the BM of these mice (Supplementary Fig. S3H, I). Interestingly, the expression of both cytokines was significantly lower for the BCR::ABL1 positive Cpa3 Cre/+ condition compared to the WT control. As we used WT recipients here, we cannot exclude that residual MCs from the recipients were able to re-expand in this model. Therefore, we next implemented a transgenic approach to completely abolish MC development. In this approach, we crossed Cpa3 Cre/+ mice with ScltTA/TRE-BCR::ABL1 mice. In addition, we included mice lacking GAB2 in our analysis (Fig. 3A). Mice were sacrificed 60 days after BCR::ABL1 induction by tet withdrawal, and BM and spleen were analyzed. BCR::ABL1 positive mice displayed enlarged spleens with an up to 5-fold increase in spleen weight compared to WT mice, in keeping with a CML phenotype. Strikingly, GAB2 or MC deficient animals within the BCR::ABL1 positive group showed no signs of splenomegaly (Fig. 3B), which suggested that the CML phenotype required the presence of MCs. Body weight was not altered between the groups (Fig. 3C). Next, we assessed the cellular composition by surface markers. We observed a significant decrease in B220+, Ter119+ and CD41+ cells and an expansion of immature CD11b + / GR-1 low cells in BM from BCR::ABL1 positive mice compared to WT mice (Fig. 3D–I). The cellularity of the spleen of these mice was not altered significantly. Interestingly, there was no decrease in B220+, Ter119+ and CD41+ cells in the BM of BCR::ABL1 positive Cpa3 Cre/+ mice and immature cells showed only a mild expansion. The latter was also observed for the BM of BCR::ABL1 positive Gab2 −/− mice. In addition, we observed an expansion of KIT positive cells in BM and spleen cells from BCR::ABL1 positive mice, but only in those that were deficient for MCs or GAB2 (Fig. 3J). Fig. 3: Mast cell deficiency protects against the development of leukemia symptoms in ScltTA/TRE-BCR::ABL1 transgenic mice.
A ScltTA/TRE-BCR::ABL1 double transgenic mice were crossed with either Gab2 −/− or Cpa3 Cre/+ mice. B Spleen weight of mice 60 days after tetracycline withdrawal. C Body weight of mice 60 days after tetracycline withdrawal. D – J Composition of BM and spleen cells of mice 60 days after tetracycline withdrawal assessed by flow cytometry for the indicated markers. Each dot represents the biopsy of one individual mouse. All statistics were performed using a one-way ANOVA (Fisher’s LSD test) and relevant statistically significant effects are indicated by asterisks. Full size image Mast cell deficiency prevents BCR::ABL1 induced cytokine elevation in BM and spleen from ScltTA/TRE-BCR::ABL1 mice As cytokines play a key role in the pathogenesis of CML [24], we next assessed their expression levels in BM and spleen from ScltTA/TRE-BCR::ABL1 mice either crossed with Cpa3 Cre/+ or Gab2 −/− mice (Fig. 4; Supplementary Fig. S4). Mice were sacrificed 60 days after induction of BCR::ABL1 and BM and spleen were isolated. We observed a significant elevation of IL-1α, IL-1β, IL-4, IL-6, MIP-1α, MIP-1β and GM-CSF in the BM of BCR::ABL1 positive mice compared to the WT control (Fig. 4; Supplementary Fig. S4). Interestingly, there was no BCR::ABL1 induced upregulation of IL-1α and GM-CSF visible in GAB2 deficient BM (Fig. 4A, C, I). Even more remarkable was the comparison with BCR::ABL1 positive Cpa3 Cre/+ mice. IL-4, MIP-1α, and MIP-1β were only increased in some samples and to a lesser extent compared to BCR::ABL1 positive WT mice (Fig. 4A, E, G, H). Furthermore, IL-1α, IL-1β, IL-6, and GM-CSF were even reduced compared to WT (Fig. 4A, D, F, I). In addition, we detected a range of cytokines that were not or only slightly altered by BCR::ABL1 but were again downregulated in BM from BCR::ABL1 positive Cpa3 Cre/+ mice. Among these cytokines were TNF, IFNγ, IL-2, IL-3, IL-5, IL-9, IL-10, IL-12, IL-13, IL-17, KC, and MCP-1 (Fig. 4A, J; Supplementary Fig. S4). Next, we analyzed the spleens of these mice and observed similarities but also differences compared to the BM. MIP-1α, MIP-1β, IL-1β and IL-4 were again upregulated in BCR::ABL1 positive cells compared to WT. These cytokines were significantly lower or not increased in samples from Cpa3 Cre/+ mice and, in contrast to BM, also from Gab2 −/− mice (Fig. 4B, D, E, G, H). In line with our data from BM, we detected a strong downregulation of cytokines in BCR::ABL1 positive Cpa3 Cre/+ samples compared to the WT control (Fig. 4B; Supplementary Fig. S4). Fig. 4: Mast cell deficiency blocks BCR::ABL1 induced cytokine elevation in bone marrow and spleen from ScltTA/TRE-BCR::ABL1 transgenic mice. A – J Total cell lysates from BM and spleen of mice 60 days after tetracycline withdrawal were subjected to a multiplex cytokine analysis. A, B Shown is a violin plot; data are normalized to the WT control and log2 transformed. C – J Shown is the fluorescence intensity of individual cytokines. Each dot represents the biopsy of one individual mouse. All statistics were performed using a one-way ANOVA (Fisher’s LSD test) and relevant statistically significant effects are indicated by asterisks. Full size image MC-associated transcripts, tryptase and pro-inflammatory cytokines were elevated in CML patient samples Finally, we analyzed gene expression profiles, spleen size, tryptase and cytokine levels from CML patients (Fig. 5A).
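Backing up briefly to the murine cytokine panels: the Fig. 4 legend above describes fluorescence intensities normalized to the WT control, log2 transformed and tested by one-way ANOVA with Fisher's LSD in GraphPad Prism 9. A minimal Python sketch of that workflow, with Fisher's LSD approximated by unadjusted pairwise t-tests after a significant omnibus test, and all intensity values fabricated:

```python
# Sketch of the Fig. 4 summary statistics: normalize fluorescence
# intensities to the WT mean, log2-transform, run a one-way ANOVA, then
# Fisher's LSD, approximated here by unadjusted pairwise t-tests gated on
# a significant omnibus test. All intensities are fabricated; the paper
# used GraphPad Prism 9.
from itertools import combinations

import numpy as np
from scipy import stats

groups = {  # hypothetical IL-6 fluorescence intensities (a.u.)
    "WT": [210, 190, 205, 220],
    "BCR::ABL1": [640, 710, 580, 690],
    "BCR::ABL1 Cpa3Cre": [150, 170, 160, 140],
}

wt_mean = np.mean(groups["WT"])
log2_norm = {g: np.log2(np.asarray(v) / wt_mean) for g, v in groups.items()}

f_stat, p_omnibus = stats.f_oneway(*log2_norm.values())
print(f"one-way ANOVA: F = {f_stat:.2f}, p = {p_omnibus:.2g}")

if p_omnibus < 0.05:  # LSD: no further correction beyond the omnibus gate
    for a, b in combinations(log2_norm, 2):
        t, p = stats.ttest_ind(log2_norm[a], log2_norm[b])
        print(f"{a} vs {b}: t = {t:.2f}, p = {p:.2g}")
```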
First, we characterized the gene expression profiles of MC associated genes in different subpopulations obtained from CML patients in chronic phase and blast crisis or from healthy individuals (GEO accession number GSE47927) (Fig. 5B). Interestingly, most of these genes were upregulated in chronic phase, particularly in the HSC and GMP subpopulations. This upregulation was even more pronounced in blast crisis for IL1RL1, TPSAB1, KIT, and HDC, while CPA3, RASGRP4, and MS4A2 were downregulated in this setting. Next, we stained MCs for MC tryptase (MCT) in BM biopsies of 20 CML patients at diagnosis (Fig. 5C; Supplementary Table S2). Interestingly, we detected a strong trend indicating that high MC counts correlate with splenomegaly (Fig. 5D). Furthermore, we determined the serum levels of tryptase and cytokines at diagnosis in an independent cohort of 27 CML patients (Fig. 5E–O; Supplementary Fig. S5 and Table S3). Strikingly, patients presenting with enlarged spleens in combination with elevated tryptase levels at diagnosis had a higher risk of an insufficient therapy response, determined by BCR::ABL1/ABL1 ratios (BCR::ABL1 [IS]) after three months of TKI therapy (Fig. 5E). Moreover, we detected an upregulation of pro-inflammatory cytokines in CML samples compared to healthy controls (Fig. 5F–O; Supplementary Fig. S5). Remarkably, this upregulation was significantly more pronounced in the serum of patients with higher tryptase levels. This was especially the case for MIP-1β, TNF, VEGF, PDGF, HGF, MIF and CXCL12 (Fig. 5G–O). Fig. 5: MC-associated transcripts, tryptase and pro-inflammatory cytokines were elevated in CML patient samples. A Scheme summarizing the CML patient data. B Transcriptome analysis of MC associated genes (GEO accession GSE47927). Note: For TPSAB1, two distinct probe IDs were available. P-values were corrected for multiple testing with the Benjamini-Hochberg procedure. C Shown are exemplary MCT stainings in the BM of CML patients with and without an enlarged spleen. MCs are highlighted with red arrows. D Quantification of MCT-stained MCs in the BM of CML patients with and without an enlarged spleen. E Tryptase levels in the serum of CML patients with and without an enlarged spleen at diagnosis. Patients are grouped according to treatment response. F – O Serum samples from CML patients at diagnosis and healthy controls were subjected to a multiplex cytokine analysis. F Shown is a violin plot; primary data are normalized to healthy controls and log2 transformed. G – O Shown is the calculated concentration of individual cytokines. Each dot represents the biopsy of one individual. Statistics were performed using an unpaired t-test (two-tailed) (D), a two-way ANOVA (Fisher’s LSD test) (F) and a one-way ANOVA (Fisher’s LSD test) (G – O) and relevant statistically significant effects are indicated by asterisks. Full size image Discussion MCs play a key role in allergic responses and in the pathogenesis of immunologic disorders, and they are implicated in cancer due to their contribution to a pro-inflammatory tumor microenvironment [16, 17]. Here, we show for the first time that MCs play an important role in a chronic-phase mouse model of CML. First, we observed infiltration of MCs in the BM and kidney of mice that were transplanted with BCR::ABL1 positive BM cells (Supplementary Fig. S1A–C).
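As a brief aside on the statistics cited in the Fig. 5 legend above: the per-gene P values of the transcriptome comparison were corrected with the Benjamini-Hochberg procedure. A minimal sketch of that correction, reusing the MC-associated gene names from the text but with fabricated P values:

```python
# Benjamini-Hochberg FDR correction as applied to the Fig. 5B per-gene
# P values. Gene names follow the MC-associated genes discussed in the
# text; the P values themselves are fabricated for illustration.
from statsmodels.stats.multitest import multipletests

genes = ["IL1RL1", "TPSAB1", "KIT", "HDC", "CPA3", "RASGRP4", "MS4A2"]
pvals = [0.0004, 0.003, 0.011, 0.020, 0.040, 0.180, 0.250]  # hypothetical

reject, p_adj, _, _ = multipletests(pvals, alpha=0.05, method="fdr_bh")
for gene, p, q, sig in zip(genes, pvals, p_adj, reject):
    print(f"{gene:8s} p = {p:.4f}  q(BH) = {q:.4f}  significant: {sig}")
```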
This MC infiltration was in line with our previous results from the primary setting of this mouse model [10] and demonstrates that these alterations were caused by the BCR::ABL1 positive donor cells. Furthermore, we show that GAB2 deficiency protects from the infiltration of MCs in these organs (Supplementary Fig. S1A, B). This might be explained by the fact that GAB2 signals downstream of both BCR::ABL1 and KIT, which is essential for MC development [15]. In addition, we demonstrated that BCR::ABL1 can drive the expansion of MCs under cytokine-free conditions (Fig. 1C). Consequently, these BCR::ABL1 positive BMMCs were sensitive towards inhibition or genetic ablation of BCR::ABL1 (Fig. 1D–F). This is further supported by our observation that BCR::ABL1 positive BMMCs displayed constitutive and enhanced STAT5 signaling (Fig. 2G), a critical driver of MC development and survival [26]. Our findings agree with earlier reports showing that BCR::ABL1 transduced hematopoietic progenitors and human CML xenografts can generate MCs [19, 27, 28, 29] or MC-related basophils, as in the human CML line KU812 [30]. Consistent with this, we detected an upregulation of MC-associated genes in BM samples from CML patients, particularly in the pathologically relevant HSC and GMP subpopulations, compared to healthy individuals (Fig. 5B). With the exception of a few genes, such as RASGRP4, which was downregulated, upregulation was more pronounced in blast crisis than in chronic phase, suggesting MC expansion along with disease progression. The fact that RASGRP4 is downregulated supports our hypothesis of the development of a malignant MC pool, as functionally inactive RASGRP4 mutants were also found to be expressed in patients with mastocytosis and MC leukemia [31]. Next, we assessed MC functionality of the murine BMMCs by degranulation assays and cytokine release. Importantly, BCR::ABL1 positive cells were more sensitive towards antigen stimulation and displayed a stronger degranulation and higher levels of secreted IL-6 and TNF compared to BCR::ABL1 negative controls (Fig. 2A–D). This suggests a positive influence of BCR::ABL1 on the proximal FcεRI signaling cascade. The resulting elevated degranulation and release of pro-inflammatory cytokines could explain the hydronephrosis observed in our CML mouse model. Consistent with these results, GAB2 deficient cells from BCR::ABL1 transgenic mice showed neither elevated degranulation levels nor increased IL-6 release (Fig. 2B, C), which agrees with the role of GAB2 downstream of FcεRI [32]. This is further supported by earlier studies with an independently generated Gab2 −/− mouse strain showing less degranulation and cytokine gene expression after antigen stimulation [32]. We confirmed the relevance of GAB2 for cytokine production in the human CML cell line K562, in which its depletion also reduced IL-6 expression (Supplementary Fig. S2E, F). As GAB2 broadens the oncogenic signals from BCR::ABL1 into the ERK and PI3K pathways [2], we performed inhibitor experiments in BMMCs and K562 cells (Fig. 2E, F; Supplementary Fig. S2G, H). Interestingly, the clinically applied MEK inhibitor trametinib strongly downregulated IL-6 and TNF levels, pointing towards an involvement of the ERK pathway (Fig. 2E, F). Notably, we detected a stronger increase in MEK phosphorylation in BCR::ABL1 positive compared to BCR::ABL1 negative BMMCs after DNP-HSA stimulation (Fig. 2G).
In contrast, the treatment with BEZ-235 increased IL-6 expression in K562 cells (Supplementary Fig. S2G, H), suggesting an inhibitory role of the PI3K pathway, for example by reducing its negative crosstalk with the ERK pathway [33]. Encouraged by these results, we further probed the role of MCs in CML by using Cpa3 Cre/+ mice characterized by genetically induced MC deficiency [22]. Mice from this model display a normal immune system, apart from the lack of MCs and reduced basophil numbers. In a first attempt, we used a retroviral model in which BCR::ABL1 was introduced into BM cells from Cpa3 Cre/+ mice, which were then transplanted into WT recipients (Supplementary Fig. S3). Interestingly, the loss of MCs in the BM of mice transplanted with BCR::ABL1-positive Cpa3 Cre/+ donor cells attenuated the increase in immature CD11b + /GR-1 low cells compared to their MC-competent counterparts (Supplementary Fig. S3C). As we did not observe any impact of MC deficiency on spleen weight, which represents one of the critical prognostic markers of the Sokal score [34], we switched our approach and crossed the Cpa3 Cre allele into the ScltTA/TRE-BCR::ABL1 model. Remarkably, and in contrast to the retroviral model, BCR::ABL1-positive MC-deficient mice showed no signs of splenomegaly (Fig. 3B). These seemingly contradictory results might be explained by the main differences between retroviral transduction/transplantation and genetically engineered mouse models. In the latter, the BCR::ABL1 transgene is expressed in hematopoietic stem cells in their native microenvironment, allowing analysis under steady-state conditions [10, 14]. This circumvents the main disadvantages of the retroviral model, such as the variability in BCR::ABL1 overexpression and disease phenotype between recipients. Furthermore, the retroviral model shows rapid disease onset with fatal outcome shortly after transplantation and hence rather resembles an acute leukemia, while the genetic model more closely recapitulates the chronic phase of the disease [35]. In addition, there is also the possibility that BCR::ABL1 was expressed only in MCs of the transgenic but not of the retroviral mouse model. Importantly, and in line with the data from the genetic model, we observed a correlation of splenomegaly and high BM MC counts in our patient cohort (Fig. 5D). As shown in our previous study [10], BCR::ABL1 positive GAB2 deficient mice, which were also included in this study, displayed a phenotype similar to that of the MC deficient mice (Fig. 3B). Interestingly, we observed only a mild expansion of immature cells in the BM of BCR::ABL1 positive MC- and GAB2-deficient mice compared to their proficient counterparts (Fig. 3H). Cytokines in the BM niche are often deregulated in CML and implicated in disease progression. In particular, pro-inflammatory cytokines, such as IL-1α [36], IL-1β [37], IL-6 [38, 39], TNF [36, 38] and MIP-1β [36], are upregulated in the serum or BM of CML patients. This is supported by our own serum analysis of a small patient cohort, in which we also detected significantly higher levels of pro-inflammatory cytokines compared to healthy individuals (Fig. 5F–O; Supplementary Fig. S5 and Table S3). Therefore, we analyzed the cytokine profile of our mouse cohort. In line with a previous study by Zhang et al. [36], we now show elevated levels of IL-1α, IL-1β, IL-4, IL-6, TNF, GM-CSF, MIP-1α and MIP-1β in the BM of ScltTA/TRE-BCR::ABL1 mice (Fig. 4A, C–J).
Strikingly, we were not only able to confirm these results but also to show that the loss of MCs counteracts the BCR::ABL1-induced increase of these cytokines or even leads to a downregulation compared to WT. In addition, we present similar data for the spleen, where IL-1β, IL-4, MIP-1α and MIP-1β were upregulated by BCR::ABL1 and, again, not altered or even downregulated upon the loss of MCs (Fig. 4B, D, G, H). Moreover, we demonstrated in our patient cohort that higher serum levels of tryptase correlate with significantly higher levels of MIP-1β, TNF, VEGF, PDGF, HGF, MIF and CXCL12 (Fig. 5F–O). Taken together, this suggests that BCR::ABL1 positive MCs either express these pro-inflammatory cytokines themselves or at least stimulate other cells to do so. This is further supported by our observation that these cytokines are also upregulated in BCR::ABL1 driven BMMCs (Supplementary Fig. S1J). The upregulation of these cytokines is of particular interest as a pro-inflammatory environment provides a selective advantage for leukemic stem cells (LSCs) [40]. Several studies demonstrated that IL-1α/β [41, 42, 43], IL-6 [44, 45], GM-CSF [46] and MIP-1α [47, 48] exert positive regulatory effects to expand primitive CML cells. Furthermore, IL-4 has been shown to maintain survival of CML cells upon TKI inhibition [49] and is known to antagonize MHC-II and CIITA expression [50, 51], which promotes immune evasion [52]. By contrast, chronic exposure to MIP-1α and IL-1α/β exhausts normal HSCs [53, 54, 55]. Furthermore, we observed that a range of cytokines such as TNF, IFNγ, IL-2, IL-3, IL-5, IL-9, IL-10, IL-12, IL-13, IL-17, KC and MCP-1 were downregulated in BM from BCR::ABL1 positive Cpa3 Cre/+ mice, suggesting that BCR::ABL1 positive MCs are also involved in the regulation of these cytokines (Fig. 4A; Supplementary Fig. S4). This is of particular interest as some of these cytokines have been described to support CML progression and therapy resistance. For example, TNF supports the survival of CML stem and progenitor cells [56] and IFNγ reduces the sensitivity towards TKIs [57]. We were also able to show that the loss of GAB2, as an important signaling amplifier in MCs, counteracts the BCR::ABL1 induced elevation of some of these cytokines, such as IL-1α in the BM and MIP-1α, MIP-1β and IL-4 in the spleen (Fig. 4A–C, E, G, H). Finally, we demonstrated in our patient cohort that enlarged spleens in combination with elevated serum tryptase levels correlate with a diminished response to therapy (Fig. 5E). This supports the concept of the modified EUTOS score, in which basophils are replaced with serum tryptase. This EUTOS-T score was evaluated in a large patient cohort and showed a more accurate prediction of treatment response [58]. In summary, we demonstrate that BCR::ABL1 can drive the expansion of murine MCs and that these BCR::ABL1 transformed MCs, as part of the malignant clone, are essential for the disease-associated development of splenomegaly and for the elevation of pro-inflammatory cytokines, known to be important in disease initiation and progression. These data are supported by our CML patient analyses, in which we show that splenomegaly is associated with high BM MC counts and that upregulation of pro-inflammatory cytokines in patient serum samples correlates with tryptase levels. Thus, our study suggests that MCs play an essential role in CML and might serve as an additional target in the clinic.
This is of particular relevance, as BCR::ABL1 positive MCs might be resistant to TKIs in the cytokine-rich BM niche. This is supported by our observation that IL-3 protects BCR::ABL1 positive BMMCs from TKI induced cell death. As pro-inflammatory cytokines are known to be important for many other cancer entities, our data also invite an evaluation of the role of MCs beyond CML. In addition, these data and our previous work on GAB2 also highlight the possibility that GAB2, as a common player in BCR::ABL1 and MC signaling, could serve as an additional target in the treatment of CML. Data availability All data supporting the findings of this study are available within the article and its supplementary information and from the corresponding author upon reasonable request.
Chronic myeloid leukemia (CML) is a type of blood cancer that arises from malignant changes in blood-forming cells of the bone marrow. It mainly occurs in older individuals and represents about 20% of all adult leukemia cases. A research team led by Dr. Sebastian Halbach, Melanie Langhammer and Dr. Julia Schöpf from the Institute of Molecular Medicine and Cell Research at the University of Freiburg has now demonstrated for the first time that mast cells play a crucial role in the development of CML. Mast cells could therefore serve as an additional therapeutic target in the clinic. "It was really impressive to see that mice lacking mast cells no longer developed severe CML," says study leader Halbach. The results were recently published in the journal Leukemia. Significantly elevated cytokine levels Mast cells are cells of the immune system that play a decisive role in the defense against pathogens, but also in allergies. In this context, mast cells release inflammation-inducing messenger molecules, so-called proinflammatory cytokines, which are crucial for the immune response. However, proinflammatory cytokines are also frequently found in the microenvironment of tumors and are suspected of decisively promoting cancer development. Using a mouse model for CML, the scientists were able to demonstrate for the first time that cytokines in CML could indeed originate from mast cells. First, the researchers found an unusually high number of mast cells in the bone marrow of mice showing leukemia symptoms. In subsequent experiments, they were able to demonstrate that the oncogene Bcr-Abl, the cancer-causing protein in CML, had taken control of these mast cells. This resulted in a significantly increased release of proinflammatory cytokines. Consequently, mice that lack mast cells due to a genetic modification did not show an increase in proinflammatory cytokines. Moreover, these animals did not develop splenomegaly, a pathological enlargement of the spleen frequently observed in leukemias. Clinical data support findings For the study, the team collaborated with Prof. Dr. Tilman Brummer, Professor for Signal Transduction and Medical Cell Research at the University of Freiburg, Dr. Khalid Shoumariyeh and Prof. Dr. Heiko Becker from the University Medical Center Freiburg, and Dr. Mirle Schemionek-Reinders and Prof. Dr. Michael Huber from the University Medical Center Aachen. With the help of these partners, the findings from the animal model were ultimately supported by clinical data from CML patients: On the one hand, it was shown that patients with severe splenomegaly often have an increased number of mast cells in their bone marrow. On the other hand, patients with increased concentrations of tryptase, a lead enzyme of mast cells, also had increased levels of proinflammatory cytokines in their blood. "These results could be the basis for new therapeutic approaches," Halbach explains. The discovery of the Bcr-Abl oncogene as the trigger for CML has made it possible to develop so-called tyrosine kinase inhibitors (TKIs), which revolutionized therapy. However, it is often not possible to eliminate all malignant cells with these drugs, especially the leukemia stem cells in the bone marrow, which is why lifelong treatment is necessary. In addition, resistance to the TKIs can develop during therapy, leading to relapse. Moreover, lifelong use of TKIs is associated with a high burden of side effects for patients.
"It is therefore of great importance to develop new and more effective therapies," says Halbach. And the study also offers suggestions for further research into many types of cancer beyond CML. "I am convinced that mast cells also play an important role in other cancers, since proinflammatory cytokines are often found upregulated here as well."
10.1038/s41375-023-01916-x
Medicine
Blood test could provide rapid, accurate method of detecting solid cancers
Study paper: dx.doi.org/10.1038/nm.3519 Journal information: Nature Medicine
http://dx.doi.org/10.1038/nm.3519
https://medicalxpress.com/news/2014-04-blood-rapid-accurate-method-solid.html
Abstract Circulating tumor DNA (ctDNA) is a promising biomarker for noninvasive assessment of cancer burden, but existing ctDNA detection methods have insufficient sensitivity or patient coverage for broad clinical applicability. Here we introduce cancer personalized profiling by deep sequencing (CAPP-Seq), an economical and ultrasensitive method for quantifying ctDNA. We implemented CAPP-Seq for non–small-cell lung cancer (NSCLC) with a design covering multiple classes of somatic alterations that identified mutations in >95% of tumors. We detected ctDNA in 100% of patients with stage II–IV NSCLC and in 50% of patients with stage I, with 96% specificity for mutant allele fractions down to ∼ 0.02%. Levels of ctDNA were highly correlated with tumor volume and distinguished between residual disease and treatment-related imaging changes, and measurement of ctDNA levels allowed for earlier response assessment than radiographic approaches. Finally, we evaluated biopsy-free tumor screening and genotyping with CAPP-Seq. We envision that CAPP-Seq could be routinely applied clinically to detect and monitor diverse malignancies, thus facilitating personalized cancer therapy. Main Analysis of ctDNA has the potential to revolutionize detection and monitoring of tumors. Noninvasive access to cancer-derived DNA is particularly attractive for solid tumors, which cannot be repeatedly sampled without invasive procedures. In NSCLC, PCR-based assays have been used to detect recurrent point mutations in genes such as KRAS (encoding kirsten rat sarcoma viral oncogene homolog) or EGFR (encoding epidermal growth factor receptor) in plasma DNA 1 , 2 , 3 , 4 , but the majority of patients lack mutations in these genes. Recently, approaches employing massively parallel sequencing have been used to detect ctDNA 5 , 6 , 7 , 8 , 9 , 10 , 11 , 12 . However, the methods reported to date have been limited by modest sensitivity 13 , applicability to only a minority of patients, the need for patient-specific optimization and/or cost. To overcome these limitations, we developed a new strategy for analysis of ctDNA. Our approach, called CAPP-Seq, combines optimized library preparation methods for low DNA input masses with a multiphase bioinformatics approach to design a 'selector' consisting of biotinylated DNA oligonucleotides that target recurrently mutated regions in the cancer of interest. To monitor ctDNA, the selector is applied to tumor DNA to identify a patient's cancer-specific genetic aberrations and then directly to circulating DNA to quantify them ( Fig. 1a ). Here we demonstrate the technical performance and explore the clinical utility of CAPP-Seq in patients with early- and advanced-stage NSCLC. Figure 1: Development of CAPP-Seq. ( a ) Schematic depicting design of CAPP-Seq selectors and their application for assessing ctDNA. ( b ) Multiphase design of the NSCLC selector. Phase 1: genomic regions harboring known and suspected driver mutations in NSCLC are captured. Phases 2–4: addition of exons containing recurrent SNVs using WES data from lung adenocarcinomas and squamous cell carcinomas from TCGA ( n = 407). Recurrence index (RI) is equal to total unique patients with mutations covered per kb of exon. Phases 5 and 6: addition of exons of predicted NSCLC drivers 15 , 16 and introns and exons harboring breakpoints in rearrangements involving ALK , ROS1 and RET . Bottom: increase of selector length during each design phase. 
( c ) Analysis of the number of SNVs per lung adenocarcinoma covered by the NSCLC selector in the TCGA WES cohort (Training; n = 229) and an independent lung adenocarcinoma WES data set (Validation; n = 183) 20 . Results are compared to selectors randomly sampled from the exome ( P < 1.0 × 10 −6 for the difference between random selectors and the NSCLC selector; Z -test, Online Methods ). ( d ) Analytical modeling of CAPP-Seq, whole-exome sequencing and whole-genome sequencing for different detection limits of ctDNA in plasma. Calculations are based on the median number of mutations detected per NSCLC for CAPP-Seq (i.e., 4) and the reported number of mutations in NSCLC exomes and genomes 21 . Additional details, including assumed sequencing throughput (i.e., bases) per lane, are described in Online Methods . The vertical dashed line represents the median fraction of ctDNA detected in plasma from patients in this study. Source data Full size image Results Design of a CAPP-Seq selector for NSCLC For the initial implementation of CAPP-Seq, we focused on NSCLC, although our approach is generalizable to any cancer for which recurrent mutations have been identified. To design a selector for NSCLC ( Fig. 1b , Supplementary Table 1 and Online Methods), we began by including exons covering recurrent mutations in potential driver genes from the Catalogue of Somatic Mutations in Cancer (COSMIC) 14 and other sources 15 , 16 . Next, using whole-exome sequencing (WES) data from 407 patients with NSCLC profiled by The Cancer Genome Atlas (TCGA), we applied an iterative algorithm to maximize the number of missense mutations per patient while minimizing selector size ( Supplementary Fig. 1 and Supplementary Table 1 ). Approximately 8% of NSCLCs harbor rearrangements involving the receptor tyrosine kinase genes ALK (encoding anaplastic lymphoma receptor tyrosine kinase), ROS1 (encoding c-ros oncogene 1 tyrosine kinase) or RET proto-oncogene 17 , 18 , 19 , 20 , 21 . To utilize the low false detection rate inherent in the unique junctional sequences of structural rearrangements 5 , 6 , we included the introns and exons spanning recurrent fusion breakpoints in these genes in the final design phase ( Fig. 1b ). To detect fusions in tumor and plasma DNA, we developed a breakpoint-mapping algorithm optimized for ultradeep coverage data ( Supplementary Methods ). Application of this algorithm to next-generation sequencing (NGS) data from two NSCLC cell lines known to harbor fusions with previously uncharacterized breakpoints 22 , 23 readily identified the breakpoints at nucleotide resolution ( Supplementary Fig. 2 ). Collectively, the NSCLC selector design targets 521 exons and 13 introns from 139 recurrently mutated genes, in total covering ∼ 125 kb ( Fig. 1b ). Within this small target (0.004% of the human genome), the selector identifies a median of four single nucleotide variants (SNVs) and covers 96% of patients with lung adenocarcinoma or squamous cell carcinoma. To validate the number of mutations covered per tumor, we examined the selector region in WES data from an independent cohort of 183 patients with lung adenocarcinoma 20 . The selector covered 88% of patients with a median of four SNVs per patient, approximately fourfold more than would be expected from random sampling of the exome ( P < 1.0 × 10 −6 ; Fig. 1c ), thus validating our selector design algorithm. 
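The multiphase selector design hinges on the recurrence index defined in the Fig. 1b legend: unique patients newly covered per kilobase of added exon. The actual algorithm is described in the paper's supplementary materials; the following is only a simplified greedy sketch of that idea, with a toy exon/patient table in place of the TCGA data:

```python
# Simplified greedy sketch of recurrence-index-driven selector design:
# repeatedly add the exon covering the most not-yet-covered patients per
# kilobase, within a size budget. The paper's actual multiphase algorithm
# (Supplementary Fig. 1) is more involved; the genes, exon lengths and
# patient sets below are toy data.

exons = {  # exon -> (length in bp, patient IDs mutated there)
    "TP53_e5": (184, {1, 2, 3, 5, 8}),
    "KRAS_e2": (122, {2, 4, 6}),
    "EGFR_e21": (156, {3, 7}),
    "BRAF_e15": (118, {8}),
}

def greedy_selector(candidates, max_kb=0.5):
    pool = dict(candidates)
    chosen, covered, size_bp = [], set(), 0
    while pool:
        # recurrence index: newly covered patients per kb of exon
        best = max(pool, key=lambda e: len(pool[e][1] - covered)
                                        / (pool[e][0] / 1000.0))
        length, patients = pool.pop(best)
        if not patients - covered or size_bp + length > max_kb * 1000:
            break
        chosen.append(best)
        covered |= patients
        size_bp += length
    return chosen, covered, size_bp

sel, cov, bp = greedy_selector(exons)
print(f"selector {sel} spans {bp} bp and covers patients {sorted(cov)}")
```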
Methodological optimization and performance assessment We performed deep sequencing with the NSCLC selector to achieve ∼ 10,000× coverage (preduplication removal) based on considerations of sequencing depth, median number of reporters and ctDNA detection limit ( Fig. 1d ). We profiled a total of 90 samples, including two NSCLC cell lines, 17 primary tumor samples with matched peripheral blood leukocytes (PBLs) and 40 plasma samples from 18 human subjects, including 5 healthy adults and 13 patients with NSCLC ( Supplementary Table 2 ). To assess and optimize selector performance, we first applied it to circulating DNA purified from healthy control plasma and observed efficient and uniform capture of genomic DNA ( Supplementary Table 2 ). Sequenced plasma DNA fragments had a median length of ∼ 170 bp ( Fig. 2a ), which closely corresponds to the length of DNA contained within a chromatosome 24 . By optimizing library preparation from small quantities of plasma DNA, we increased recovery efficiency by >300% and decreased bias for libraries constructed from as little as 4 ng of DNA ( Supplementary Fig. 3 ). Consequently, fluctuations in sequencing depth were minimal ( Fig. 2b,c ). Figure 2: Analytical performance. ( a – c ) Quality parameters from a representative CAPP-Seq analysis of plasma DNA, including length distribution of sequenced circulating DNA fragments (fragment counts are represented on the y axis) ( a ) and depth of sequencing coverage ( y axis) across all genomic regions in the selector ( b ) (details of all plasma DNA samples sequenced are shown in Supplementary Table 2 ). ( c ) Variation in sequencing depth ( y axis) across plasma DNA samples from four patients. Orange envelope represents mean ± s.e.m. ( d ) Analysis of allelic background rate for 40 plasma DNA samples collected from 13 patients with NSCLC and 5 healthy individuals. The y axis denotes the fraction of all alleles and selector positions tested. Details are given in Supplementary Methods . Perc., percentile. ( e ) Analysis of biological background in d focusing on 107 recurrent somatic mutations from a previously reported SNaPshot panel 25 . Mutations found in a given patient's tumor were excluded. The mean frequency over all subjects was ∼ 0.01%. A single outlier mutation ( TP53 R175H) is indicated by a yellow diamond. ( f ) Individual mutations from e ranked by most to least recurrent, according to mean frequency across the 40 plasma DNA samples. The P value threshold of 0.01 (dotted line) corresponds to the 99th percentile of global selector background in d . ( g ) Dilution series analysis of expected versus observed frequencies of mutant alleles using CAPP-Seq ( n = 14 reporter alleles). Five concentrations of fragmented HCC78 DNA spiked into control circulating DNA are shown. ( h ) Analysis of the effect of the number of SNVs considered on the estimates of fractional abundance using data from g . Data are presented as means ± 95% confidence interval. ( i ) Analysis of the effect of the number of SNVs considered on the mean correlation coefficient between expected and observed cancer fractions (blue solid line) using data from panel g . 95% confidence intervals are shown for e , f . Statistical variation for g is shown as mean ± s.e.m. Source data Full size image The detection limit and accuracy of CAPP-Seq are affected by (i) the input number and recovery rate of circulating DNA molecules, (ii) sample cross-contamination, (iii) potential allelic bias in the capture reagent and (iv) PCR or sequencing errors.
We examined each of these elements in turn. First, by comparing the number of input DNA molecules per sample with estimates of library complexity ( Supplementary Fig. 4a and Supplementary Methods ), we calculated a circulating DNA molecule recovery rate of ≥49% ( Supplementary Table 2 ). This was in agreement with molecule recovery yields calculated following PCR ( Supplementary Fig. 4b ). Second, by analyzing patient-specific homozygous single nucleotide polymorphisms (SNPs) across samples, we found cross-contamination of ∼ 0.06% in multiplexed plasma DNA ( Supplementary Fig. 4c and Supplementary Methods ), prompting us to exclude any tumor-derived SNV from further analysis if found as a germline SNP in another profiled patient. Next, we evaluated the allelic skew in heterozygous germline SNPs within patient PBL samples and observed minimal bias toward capture of reference alleles ( Supplementary Fig. 4d ). Finally, we analyzed the distribution of nonreference alleles across the selector for the 40 plasma DNA samples, excluding tumor-derived SNVs and germline SNPs. We found mean and median background rates of 0.006% and 0.0003%, respectively ( Fig. 2d ), both considerably lower than previously reported NGS-based methods for ctDNA analysis 8 , 10 . Nongermline plasma DNA could be present in the absence of cancer owing to contributions from preneoplastic cells from diverse tissues, and such 'biological' background may also affect CAPP-Seq sensitivity. We hypothesized that biological background, if present, would be particularly high for recurrently mutated positions in known cancer driver genes and therefore analyzed mutation rates of 107 cancer-associated SNVs 25 in all 40 plasma samples, excluding somatic mutations found in each patient's tumor. Although the median fractional abundance was comparable to the global selector background ( ∼ 0%), the mean was marginally higher at ∼ 0.01% ( Fig. 2e ). Notably, we detected one mutational hotspot (tumor suppressor TP53 , R175H) at a median frequency of ∼ 0.18% across all plasma DNA samples, including those from patients and healthy subjects ( Fig. 2f ). As we observed the frequency of this TP53 mutant allele to be significantly above global background ( P < 0.01), we hypothesize that it reflects true biological clonal heterogeneity and thus excluded it as a potential reporter. To address background more generally, we also normalized for allele-specific differences in background rate when assessing the significance of ctDNA detection ( Supplementary Methods ). As a result, we found that biological background is not a major factor affecting ctDNA quantitation at detection limits above ∼ 0.01%. Next, we empirically benchmarked the detection limit and linearity of CAPP-Seq ( Fig. 2g and Supplementary Fig. 5a ). We accurately detected defined inputs of NSCLC DNA at fractional abundances between 0.025% and 10% with high linearity ( R 2 ≥ 0.994). We observed only marginal improvements in error metrics above a threshold of four SNP reporters ( Fig. 2h,i and Supplementary Fig. 5b,c ), which is equivalent to the median number of SNVs per tumor identified by the selector. Moreover, the fractional abundance of fusion breakpoints, insertions and deletions (indels) and copy number alterations (CNAs) correlated highly with expected concentrations ( R 2 ≥ 0.97; Supplementary Fig. 5d ). 
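The dilution-series benchmarking above reduces to regressing observed against expected mutant-allele fractions and inspecting the slope and R². A minimal sketch, with spike-in fractions mirroring the reported 0.025%–10% range but fabricated "observed" values:

```python
# Sketch of the Fig. 2g-style linearity check: regress observed against
# expected mutant-allele fractions across a dilution series. The expected
# fractions mirror the reported 0.025%-10% range; the observed values
# are fabricated.
from scipy import stats

expected = [0.025, 0.1, 0.5, 1.0, 10.0]     # % tumor DNA spiked in
observed = [0.028, 0.09, 0.52, 0.97, 10.3]  # % measured (hypothetical)

fit = stats.linregress(expected, observed)
print(f"slope = {fit.slope:.3f}, intercept = {fit.intercept:.4f}, "
      f"R^2 = {fit.rvalue ** 2:.4f}")
```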
Somatic mutation detection and tumor burden quantitation We next applied CAPP-Seq to the discovery of somatic mutations in tumor samples collected from 17 patients with NSCLC ( Table 1 and Supplementary Table 3 ), including formalin-fixed surgical resections, needle biopsy specimens and malignant pleural fluid. At a mean sequencing depth of ∼ 5,000× (preduplicate removal) in tumor and paired germline samples ( Supplementary Table 2 ), we detected 100% of previously identified SNVs and fusions and discovered many additional somatic variants ( Table 1 and Supplementary Table 3 ). Moreover, we characterized breakpoints at base-pair resolution and identified partner genes for each of eight known fusions involving ALK or ROS1 ( Supplementary Fig. 2 ). Tumors containing fusions were almost exclusively from never-smokers and contained fewer SNVs than those lacking fusions, as expected 21 ( Supplementary Fig. 2 ). Excluding patients with fusions, we identified a median of six SNVs (three missense) per patient ( Table 1 ), in line with our selector design-stage predictions ( Fig. 1b,c ). Table 1 Patient characteristics and pretreatment CAPP-Seq monitoring results Full size table Next, we assessed the sensitivity and specificity of CAPP-Seq for disease monitoring and minimal residual disease detection using plasma samples from 5 healthy controls and 35 samples collected from 13 patients with NSCLC ( Table 1 and Supplementary Table 4 ). We integrated information content across multiple instances and classes of somatic mutations into a ctDNA detection index. This index is analogous to a false-positive rate and is based on a decision tree in which fusion breakpoints take precedence because of their nonexistent background and in which P values from multiple reporter types are integrated (Online Methods). When we applied this approach in a receiver operating characteristic (ROC) analysis, CAPP-Seq achieved an area under the curve (AUC) of 0.95, with maximal sensitivity and specificity of 85% and 96%, respectively, for all plasma DNA samples from untreated patients and healthy controls. Sensitivity among patients with stage I tumors was 50%, and among those with stage II–IV tumors, it was 100%, with a specificity for both groups of 96% ( Fig. 3a,b ). Moreover, when considering both pre- and post-treatment samples, CAPP-Seq exhibited robust performance, with AUC values of 0.89 for all stages and 0.91 for stages II–IV ( P < 0.0001, Z -test, Online Methods; Supplementary Fig. 6 ). Furthermore, by adjusting the ctDNA detection index, we could increase specificity up to 98% while still capturing two-thirds of all cancer-positive samples and three-fourths of stage II–IV cancer-positive samples ( Supplementary Fig. 6 ). Thus, CAPP-Seq can achieve robust assessment of tumor burden and can be tuned to deliver a desired sensitivity and specificity. Figure 3: Sensitivity and specificity analysis. ( a ) ROC analysis of plasma DNA samples from pretreatment samples and healthy controls, divided into all stages ( n = 13 patients) and stages II–IV ( n = 9 patients). AUC values are significant at P < 0.0001 ( Z -test, Online Methods ). Sn, sensitivity; Sp, specificity. ( b ) Raw data related to a . TP, true positive; FP, false positive; TN, true negative; FN, false negative. ( c ) Concordance between tumor volume, measured by CT or PET-CT, and concentration (pg ml −1 ) of ctDNA from pretreatment samples ( n = 9), measured by CAPP-Seq. 
Patients P6 and P9 were excluded owing to inability to accurately assess tumor volume and differences related to the capture of fusions, respectively ( Supplementary Methods ). Of note, linear regression was performed in non-log space; the log-log axes and dashed diagonal line are for display purposes only. Monitoring of NSCLC tumor burden in plasma samples We next asked whether significantly detectable levels of ctDNA correlate with radiographically measured tumor volumes and clinical responses to therapy. Fractions of ctDNA detected in plasma by SNV and/or indel reporters ranged from ∼ 0.02% to 3.2% ( Table 1 ), with a median of ∼ 0.1% in pretreatment samples. Absolute levels of ctDNA in pretreatment plasma significantly correlated with tumor volume as measured by computed tomography (CT) and positron emission tomography (PET) imaging ( R 2 = 0.89, P = 0.0002; Fig. 3c ). To determine whether ctDNA concentrations reflect disease burden in longitudinal samples, we analyzed plasma DNA from three patients with advanced NSCLC undergoing distinct therapies ( Fig. 4a–c ). As in pretreatment samples, ctDNA levels highly correlated with tumor volumes during therapy ( R 2 = 0.95 for patient 15 (P15); R 2 = 0.85 for P9). This behavior was observed whether the mutation types measured were a collection of SNVs and an indel (P15, Fig. 4a ), multiple fusions (P9, Fig. 4b ) or SNVs and a fusion (P6, Fig. 4c ). Of note, in one patient (P9), we identified both a classic EML4 - ALK fusion and two previously unreported fusions involving ROS1 : FYN - ROS1 and ROS1 - MKX ( Supplementary Fig. 2 ). All fusions were confirmed by quantitative PCR (qPCR) amplification of genomic DNA and were independently recovered in plasma samples ( Supplementary Table 4 ). To the best of our knowledge, this is the first observation of ROS1 and ALK fusions in the same individual with NSCLC. Figure 4: Noninvasive detection and monitoring of ctDNA. ( a – h ) Disease monitoring using CAPP-Seq. Carboplatin (carbo), paclitaxel, cetuximab, cisplatin (cis), pemetrexed (pem), bevacizumab (bev), or crizotinib were administered as combination therapies as indicated. ( a , b ) Disease burden changes in response to treatment in a patient with stage IIIB NSCLC using SNVs and an indel (SNVs/indel) ( a ) and a patient with stage IV NSCLC using three rearrangement breakpoints ( b ). Tu, tumor; Ef, pleural effusion; ND, not detected. ( c ) Concordance between different reporters (SNVs and a fusion) in a patient with stage IV NSCLC. TMEM132D , transmembrane protein 132D gene. ( d ) Detection of a subclonal EGFR T790M resistance mutation in a patient with stage IV NSCLC. The fractional abundance of the dominant clone and T790M-containing clone are shown in the primary tumor (left) and plasma samples (right). ( e , f ) CAPP-Seq results from post-treatment plasma DNA samples are predictive of clinical outcomes in a patient with stage IIB NSCLC ( e ) and a patient with stage IIIB NSCLC ( f ). SD, stable disease; PD, progressive disease; PR, partial response; NED, no evidence of disease; DOD, dead of disease. ( g , h ) Monitoring of tumor burden following complete tumor resection ( g ) and SABR ( h ) for two patients with stage IB NSCLC. CR, complete response. ( i ) Exploratory analysis of the potential application of CAPP-Seq for biopsy-free tumor genotyping or cancer screening.
All plasma DNA samples from patients in Table 1 were examined for the presence of mutant allele outliers without knowledge of the primary tumor mutations ( Supplementary Methods ); samples with detectable mutations are shown, along with three samples assumed to be cancer-negative (P1-2, P1-3 and P16-3; Supplementary Methods ). The number following the hyphen in each sample (e.g., -1) represents the plasma time point ( Supplementary Table 4 ). The lowest fraction of ctDNA among positive samples was ∼ 0.4% (dashed horizontal line). Data in d are expressed as mean ± s.e.m. Scale bars ( a , b , e – h ), 10 cm. We designed the NSCLC CAPP-Seq selector to detect multiple SNVs per tumor. In one patient (P5), this design allowed us to identify a dominant clone with an activating EGFR mutation as well as an erlotinib-resistant subclone with a 'gatekeeper' EGFR T790M mutation 26 . The ratios between clones were identical in a tumor biopsy and in simultaneously sampled plasma ( Fig. 4d ), demonstrating that our method has potential for detecting and quantifying clinically relevant subclones. Patients with stage II or III NSCLC undergoing definitive radiotherapy often have surveillance CT or PET-CT scans that are difficult to interpret owing to radiation-induced inflammatory and fibrotic changes in the lung and surrounding tissues. For patient P13, who was treated with radiotherapy for stage IIB NSCLC, follow-up imaging showed a large mass that was interpreted to represent residual disease. However, ctDNA at the same time point was undetectable ( Fig. 4e ), and the patient remained disease free 22 months later, which supports the ctDNA result. Another patient (P14) was treated with chemoradiotherapy for stage IIIB NSCLC, and follow-up imaging revealed a near-complete response ( Fig. 4f ). However, the ctDNA concentration slightly increased following therapy, suggesting progression of occult microscopic disease. Indeed, clinical progression was detected 7 months later, and the patient ultimately succumbed to NSCLC. These data highlight the promise of ctDNA analysis for identifying patients with residual disease after therapy. We next asked whether the low detection limit of CAPP-Seq would allow monitoring in early-stage NSCLC. Patients P1 ( Fig. 4g ) and P16 ( Fig. 4h ) underwent surgery and stereotactic ablative radiotherapy (SABR) 27 , respectively, for stage IB NSCLC. We detected ctDNA in pretreatment plasma of patient P1 but not at 3 or 32 months following surgery, which suggests that this patient was free of disease and probably cured 28 . For patient P16, the initial surveillance PET-CT scan following SABR showed a residual mass that was interpreted to represent either residual tumor or postradiotherapy inflammation. We detected no evidence of residual disease by ctDNA, supporting the latter hypothesis, and the patient remained free of disease at last follow-up 21 months after therapy. Taken together, these results demonstrate the potential utility of CAPP-Seq for measuring tumor burden in early- and advanced-stage NSCLC and for monitoring ctDNA during distinct types of therapy. Biopsy-free cancer screening and tumor genotyping Finally, we explored whether CAPP-Seq analysis of ctDNA could potentially be used for cancer screening and biopsy-free tumor genotyping.
As proof of principle, we blinded ourselves to the mutations present in each patient's tumor and applied a new statistical method to test for the presence of cancer DNA in each plasma sample in our cohort ( Supplementary Fig. 7 ). By implementing our cancer screening method for high specificity, we correctly classified 100% of patient plasma samples with ctDNA above fractional abundances of 0.4% with a false-positive rate of 0% ( Fig. 4i and Supplementary Methods ). CAPP-Seq could therefore potentially improve upon the low positive predictive value of low-dose CT screening in patients at high risk of developing NSCLC 29 . Separately, when we specifically examined the ability of CAPP-Seq to noninvasively detect actionable mutations in EGFR and KRAS 25 , we correctly identified 100% of mutations at allelic fractions greater than 0.1% with 99% specificity. CAPP-Seq may therefore have utility for biopsy-free tumor genotyping in patients with locally advanced or metastatic NSCLC. However, methodological improvements will be required to detect and genotype stage I tumors without prior knowledge of tumor genotype. Discussion In this study, we present CAPP-Seq as a new method for ctDNA quantitation. Its key features include high sensitivity and specificity, lack of a need for patient-specific optimization and coverage of nearly all patients with NSCLC. To our knowledge, CAPP-Seq is the first NGS-based method for ctDNA analysis that achieves both an ultralow detection limit and broad patient coverage at a reasonable cost. Our approach also reduces the potential impact of stochastic noise and biological variability (for example, mutations near the detection limit or subclonal tumor evolution) on tumor burden quantitation by integrating information content across multiple instances and classes of somatic mutations. These features facilitated the detection of minimal residual disease and ctDNA quantitation from stage I NSCLC tumors. Although we focused on NSCLC, our method could be applied to any malignancy for which recurrent mutation data are available. In many patients, levels of ctDNA are considerably lower than the detection thresholds of previously described sequencing-based methods 13 . For example, pretreatment ctDNA concentration is <0.5% in the majority of patients with lung and colorectal carcinomas 1 , 30 , 31 . Following therapy, ctDNA concentrations typically drop, thus requiring even lower detection thresholds. Previously published ctDNA detection methods employing amplicon 8 , 10 , 11 , whole-exome 12 or whole-genome 9 , 24 , 32 , 33 sequencing would not be sensitive enough to detect ctDNA in most patients with NSCLC, even at tenfold or greater sequencing costs ( Fig. 1d and Supplementary Fig. 8 ). To further expand the potential clinical applications of ctDNA quantitation, additional gains in the detection threshold are desirable. Potential approaches include using barcoding strategies that suppress PCR errors resulting from library preparation 34 , 35 and increasing the amount of plasma used for ctDNA analysis above the average of ∼ 1.5 ml used in our study. A second limitation of CAPP-Seq is the potential for inefficient capture of fusions, which could lead to underestimates of tumor burden (for example, P9; Supplementary Methods ). However, this bias can be analytically addressed when other reporter types are present (for example, P6; Supplementary Table 4 ). 
Finally, although we found that CAPP-Seq could quantitate CNAs, our current selector design did not prioritize these types of aberrations. We anticipate that adding coverage for certain CNAs will prove useful for monitoring various types of cancers. In summary, targeted hybrid capture and high-throughput sequencing of plasma DNA allows for highly sensitive and noninvasive detection of ctDNA in the vast majority of patients with NSCLC at low cost. CAPP-Seq could therefore be routinely applied clinically and has the potential for accelerating the personalized detection, therapy and monitoring of cancer. We anticipate that CAPP-Seq will prove valuable in a variety of clinical settings, including the assessment of cancer DNA in alternative biological fluids and specimens with low cancer cell content. Methods Patient selection. Between April 2010 and June 2012, patients undergoing treatment for newly diagnosed or recurrent NSCLC enrolled in a study approved by the Stanford University Institutional Review Board and provided informed consent. Enrolled patients had not received blood transfusions within 3 months of blood collection. Patient characteristics are listed in Supplementary Table 3 . All treatments and radiographic examinations were performed as part of standard clinical care. Volumetric measurements of tumor burden were based on visible tumor on CT and calculated according to the ellipsoid formula: (length/2) × width². Sample collection and processing. Peripheral blood from patients was collected in EDTA Vacutainer tubes (Becton Dickinson). Blood samples were processed within 3 h of collection. Plasma was separated by centrifugation at 2,500 g for 10 min, transferred to microcentrifuge tubes and centrifuged at 16,000 g for 10 min to remove cell debris. The cell pellet from the initial spin was used for isolation of germline genomic DNA from PBLs with the DNeasy Blood & Tissue Kit (Qiagen). Matched tumor DNA was isolated from formalin-fixed, paraffin-embedded specimens or from the cell pellet of pleural effusions. Genomic DNA was quantified by Quant-iT PicoGreen dsDNA Assay Kit (Invitrogen). Cell-free DNA purification and quantification. Circulating DNA was isolated from 1–5 ml plasma with the QIAamp Circulating Nucleic Acid Kit (Qiagen). The concentration of purified plasma DNA was determined by qPCR using an 81-bp amplicon on chromosome 1 (ref. 24 ) and a dilution series of intact male human genomic DNA (Promega) as a standard curve. Power SYBR Green was used for qPCR on an HT7900 Real Time PCR machine (Applied Biosystems), using standard PCR thermal cycling parameters. Next-generation sequencing library construction. Indexed Illumina NGS libraries were prepared from plasma DNA and shorn tumor, germline and cell line genomic DNA. For patient plasma DNA, 7–32 ng DNA was used for library construction without additional fragmentation. For tumor, germline and cell line genomic DNA, 69–1,000 ng DNA was sheared before library construction with a Covaris S2 instrument using the recommended settings for 200-bp fragments. Details are provided in Supplementary Table 2 . The NGS libraries were constructed using the KAPA Library Preparation Kit (Kapa Biosystems) employing a DNA polymerase possessing strong 3′→5′ exonuclease (or proofreading) activity and displaying the lowest published error rate (i.e., highest fidelity) of all commercially available B-family DNA polymerases 36 , 37 .
The manufacturer's protocol was modified to incorporate with-bead enzymatic and cleanup steps using Agencourt AMPure XP beads (Beckman-Coulter) 38 . Ligation was performed for 16 h at 16 °C using 100-fold molar excess of indexed Illumina TruSeq adapters. Single-step size selection was performed by adding 40 μL (0.8×) of PEG buffer to enrich for ligated DNA fragments. The ligated fragments were then amplified using 500 nM Illumina backbone oligonucleotides and 4–9 PCR cycles, depending on input DNA mass. Library purity and concentration were assessed by spectrophotometer (NanoDrop 2000) and qPCR (KAPA Biosystems), respectively. Fragment length was determined on a 2100 Bioanalyzer using the DNA 1000 Kit (Agilent). Library design for hybrid selection. Hybrid selection was performed with a custom SeqCap EZ Choice Library (Roche NimbleGen). This library was designed through the NimbleDesign portal (v1.2.R1) using genome build hg19 NCBI Build 37.1/GRCh37 and with Maximum Close Matches set to 1. Input genomic regions were selected according to the most frequently mutated genes and exons in NSCLC and were chosen to iteratively maximize the number of mutations per tumor while minimizing selector size. These regions were identified from the COSMIC database, TCGA and other published sources as described in the Supplementary Methods . Final selector coordinates are provided in Supplementary Table 1 . Hybrid selection and next-generation sequencing. NimbleGen SeqCap EZ Choice was used according to the manufacturer's protocol with modifications. Between 9 and 12 indexed Illumina libraries were included in a single capture hybridization. Following hybrid selection, the captured DNA fragments were amplified with 12 to 14 cycles of PCR using 1× KAPA HiFi Hot Start Ready Mix and 2 μM Illumina backbone oligonucleotides in four to six separate 50-μL reactions. The reactions were then pooled and processed with the QIAquick PCR Purification Kit (Qiagen). Multiplexed libraries were sequenced using 100-bp paired-end runs on an Illumina HiSeq 2000. Raw sequencing data have been deposited in the Sequence Read Archive under accession number SRP040228. Mapping and quality control. Paired-end reads were mapped to the hg19 reference genome with BWA 0.6.2 (default parameters) 39 and sorted and indexed with SAMtools 40 . Quality control (QC) was performed using a custom Perl script to collect a variety of statistics, including mapping characteristics, read quality and selector on-target rate (i.e., the number of unique reads that intersect the selector space divided by all aligned reads), generated respectively by SAMtools flagstat, FastQC and BEDTools coverageBed 41 . Plots of fragment length distribution and sequence depth and coverage were automatically generated for visual QC assessment. To mitigate the impact of sequencing errors, analyses not involving fusions were restricted to properly paired reads, and only bases with Phred quality scores ≥30 (≤0.1% probability of a sequencing error) were further analyzed. Detection thresholds. Two dilution series were performed to assess the linearity and accuracy of CAPP-Seq for quantitating ctDNA. In one experiment, shorn genomic DNA from an NSCLC cell line (HCC78) was spiked into circulating DNA from a healthy individual, and in a second experiment, shorn genomic DNA from one NSCLC cell line (NCI-H3122) was spiked into shorn genomic DNA from a second NSCLC line (HCC78). A total of 32 ng DNA was used for library construction.
Following mapping and quality control, homozygous reporters were identified as alleles unique to each sample with at least 20× sequencing depth and an allelic fraction >80%. Fourteen such reporters were identified between HCC78 genomic DNA and plasma DNA ( Fig. 2g,h ), whereas 24 reporters were found between NCI-H3122 and HCC78 genomic DNA ( Supplementary Fig. 5 ). Bioinformatics pipeline. Details of bioinformatics methods are supplied in the Supplementary Methods . Briefly, for detection of SNVs and indels, we employed VarScan 2 (ref. 42 ) with strict postprocessing filters to improve variant call confidence, and for fusion identification and breakpoint characterization, we used an algorithm called FACTERA ( Supplementary Methods ). To quantify tumor burden in plasma DNA, allele frequencies of reporter SNVs and indels were assessed using the output of SAMtools mpileup 40 , and fusions, if detected, were enumerated with FACTERA. Statistical analyses. The NSCLC selector was validated in silico using an independent cohort of lung adenocarcinomas 20 ( Fig. 1c ). To assess statistical significance, we analyzed the same cohort using 10,000 random selectors sampled from the exome, each with an identical size distribution to the CAPP-Seq NSCLC selector. The performance of random selectors had a normal distribution, and P values were calculated accordingly. Of note, all identified somatic lesions were considered in this analysis. Related to Figure 1d , the probability ( P ) of recovering at least two reads of a single mutant allele in plasma for a given depth and detection limit was modeled by a binomial distribution. Given P , the probability of detecting all identified tumor mutations in plasma (for example, median of 4 for CAPP-Seq) was modeled by a geometric distribution. Estimates are based on 250 million 100-bp paired-end reads per lane (for example, using an Illumina HiSeq 2000 platform). Moreover, an on-target rate of 60% was assumed for CAPP-Seq and WES. To evaluate the impact of reporter number on tumor burden estimates, we performed Monte Carlo sampling (1,000×), varying the number of reporters available {1,2,... n } in two spiking experiments ( Fig. 2g–i and Supplementary Fig. 4 ). To assess the significance of tumor burden estimates in plasma DNA using SNVs, we compared patient-specific SNV frequencies to the null distribution of selector-wide background alleles. Indels were analyzed separately using mutation-specific background rates and Z -score statistics. Fusion breakpoints were considered significant when present with >0 read support due to their ultralow false detection rate. For each patient, we calculated a ctDNA detection index (akin to a false-positive rate) based on P value integration from his or her array of reporters ( Table 1 and Supplementary Table 4 ). Specifically, for cases where only a single reporter type was present in a patient's tumor, the corresponding P value was used. If SNV and indel reporters were detected and if each independently had a P value <0.1, we combined their respective P values using Fisher's method 43 . Otherwise, given the prioritization of SNVs in the selector design, the SNV P value was used. If a fusion breakpoint identified in a tumor sample (i.e., involving ROS1 , ALK or RET ) was recovered in plasma DNA from the same patient, it trumped all other mutation types, and its P value ( ∼ 0) was used. 
If a fusion detected in the tumor was not found in corresponding plasma (potentially owing to hybridization inefficiency; Supplementary Methods ), the P value for any remaining mutation type(s) was used. The ctDNA detection index was considered significant if the metric was ≤0.05 (approximate false-positive rate ≤5%), the threshold that maximized CAPP-Seq sensitivity and specificity in ROC analyses (determined by Euclidean distance to a perfect classifier; i.e., a true-positive rate equal to 1 and a false-positive rate equal to 0; Figs. 3 and 4 , Table 1 and Supplementary Table 4 ). We evaluated CAPP-Seq performance in a blinded fashion by masking all patient identifying information, including disease stage, circulating DNA time point, treatment, etc. We then applied our ctDNA detection index across the entire grid of deidentified plasma DNA samples (13 patient-specific sets of somatic reporters across 40 plasma samples, or 520 pairs). To calculate sensitivity and specificity, we 'unblinded' ourselves and grouped patient samples into cancer-positive (i.e., cancer was present in the patient's body), cancer-negative (i.e., patient was cured) or cancer-unknown (i.e., insufficient data to determine true classification) categories ( Fig. 3a,b and Supplementary Fig. 6 ). ROC analyses and significance estimates were performed using GraphPad Prism 6. Additional details are presented in the Supplementary Methods . Accession codes. Raw sequencing data were deposited in the Sequence Read Archive with accession number SRP040228.
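To make the detection-index decision rules described above concrete, here is a minimal Python rendering of that logic. It is an illustrative reconstruction, not the authors' released code; the function and variable names are ours, and the thresholds (0.1 for combining, 0.05 for calling) are taken from the text.

```python
# Sketch of the ctDNA detection index decision tree described in the Methods:
# a recovered fusion breakpoint trumps everything (~zero background); SNV and
# indel P values are combined by Fisher's method when both are < 0.1;
# otherwise the SNV P value is used, reflecting its priority in the selector.
from scipy.stats import combine_pvalues

def detection_index(p_snv=None, p_indel=None, fusion_in_plasma=False):
    if fusion_in_plasma:
        return 0.0  # breakpoint reporters have essentially no background
    if p_snv is not None and p_indel is not None and p_snv < 0.1 and p_indel < 0.1:
        _, p_combined = combine_pvalues([p_snv, p_indel], method="fisher")
        return p_combined
    return p_snv if p_snv is not None else p_indel

# Two weak reporters combine into a significant call at the 0.05 threshold.
index = detection_index(p_snv=0.08, p_indel=0.06)
print(f"index = {index:.3f}, ctDNA-positive = {index <= 0.05}")
```

Note how Fisher's method lets two individually non-significant reporter classes support a confident call, which is the point of integrating information across mutation types.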
A blood sample could one day be enough to diagnose many types of solid cancers, or to monitor the amount of cancer in a patient's body and responses to treatment. Previous versions of the approach, which relies on monitoring levels of tumor DNA circulating in the blood, have required cumbersome and time-consuming steps to customize it to each patient or have not been sufficiently sensitive. Now, researchers at the Stanford University School of Medicine have devised a way to quickly bring the technique to the clinic. Their approach, which should be broadly applicable to many types of cancers, is highly sensitive and specific. With it they were able to accurately identify about 50 percent of people in the study with stage-1 lung cancer and all patients whose cancers were more advanced. "We set out to develop a method that overcomes two major hurdles in the circulating tumor DNA field," said Maximilian Diehn, MD, PhD, assistant professor of radiation oncology. "First, the technique needs to be very sensitive to detect the very small amounts of tumor DNA present in the blood. Second, to be clinically useful it's necessary to have a test that works off the shelf for the majority of patients with a given cancer." The researchers describe their findings in a paper that will be published online April 6 in Nature Medicine. Diehn shares senior authorship with Ash Alizadeh, MD, PhD, assistant professor of medicine. Postdoctoral scholars Aaron Newman, PhD, and Scott Bratman, MD, PhD, share lead authorship. "We're trying to develop a general method to detect and measure disease burden," said Alizadeh, a hematologist and oncologist. "Blood cancers like leukemias can be easier to monitor than solid tumors through ease of access to the blood. By developing a general method for monitoring circulating tumor DNA, we're in effect trying to transform solid tumors into liquid tumors that can be detected and tracked more easily." Even in the absence of treatment, cancer cells are continuously dividing and dying. As they die, they release DNA into the bloodstream, like tiny genetic messages in a bottle. Learning to read these messages—and to pick out the one in 1,000 or 10,000 that comes from a cancer cell—can allow clinicians to quickly and noninvasively monitor tumor volume, a patient's response to therapy and even how the tumor mutations evolve over time in the face of treatment or other selective pressures. "The vast majority of circulating DNA is from normal, non-cancerous cells, even in patients with advanced cancer," Bratman said. "We needed a comprehensive strategy for isolating the circulating DNA from blood and detecting the rare, cancer-associated mutations. To boost the sensitivity of the technique, we optimized methods for extracting, processing and analyzing the DNA." The researchers' technique, which they have dubbed CAPP-Seq, for Cancer Personalized Profiling by deep Sequencing, is sensitive enough to detect just one molecule of tumor DNA in a sea of 10,000 healthy DNA molecules in the blood. Although the researchers focused on patients with non-small-cell lung cancer (which includes most lung cancers, including adenocarcinomas, squamous cell carcinoma and large cell carcinoma), the approach should be widely applicable to many different solid tumors throughout the body. It's also possible that it could one day be used not just to track the progress of a previously diagnosed patient, but also to screen healthy or at-risk populations for signs of trouble.
Tumor DNA differs from normal DNA by virtue of mutations in the nucleotide sequence. Some of the mutations are thought to be cancer drivers, responsible for initiating the uncontrolled cell growth that is the hallmark of the disease. Others accumulate randomly during repeated cell division. These secondary mutations can sometimes confer resistance to therapy; even a few tumor cells with these types of mutations can expand rapidly in the face of seemingly successful treatment. "Cancer is a genetic disease," Alizadeh said. "But unlike Down syndrome, for example, which has a single dominant cause, for most cancers it's very difficult to identify any one particular genetic aberration or mutation that is found in every patient. Instead, each cancer tends to be genetically different from patient to patient, although sets of mutations can be shared among patients with a given cancer." So the researchers took a population-based approach. National databases such as The Cancer Genome Atlas contain DNA sequences of tumors collected from thousands of patients—and pinpoint places in which the cancer DNA differs from normal DNA. Although the significance of each individual change is not always clear, it's becoming possible to generate a mutational fingerprint for each cancer type that includes nucleotide changes, insertions or deletions of short pieces of genetic material and translocation events that shuffle or even flip DNA regions. Although no patient will have all the mutations, nearly all will have at least some. The group began by using a bioinformatics approach to collect information from the atlas on 407 patients with non-small-cell lung cancer, looking for regions in the genome enriched for cancer-associated mutations. "We looked for which genes are most commonly altered, and used computational approaches to identify what we call the genetic architecture of the cancer," Alizadeh said. "That allowed us to identify the part of the genome that would be best to identify and track the disease." They identified 139 genes that are recurrently mutated in non-small-cell lung cancer and that represent about 0.004 percent of the human genome. Next, the team designed oligonucleotides, panels of short pieces of DNA, bracketing these regions. The oligonucleotides were then used to perform very deep sequencing (meaning each region was sequenced about 10,000 times) of the surrounding DNA. "By sequencing only those regions of the genome that are highly enriched for cancer mutations, we're able to keep costs down and identify multiple mutations per patient," Diehn said. In contrast, other methods of tracking circulating tumor DNA have relied on single, well-known mutations that nevertheless are unlikely to occur in every patient with a particular cancer. Tracking more than one mutation increases the sensitivity of the approach and allows researchers more flexibility in seeing how the cancer changes over time. "There are currently no reliable biomarkers available for lung cancer patients, which is the most common cancer and No. 1 cause of cancer deaths," Diehn said. "We are very excited about our findings because a personalized, clinically useful biomarker could revolutionize how we detect and manage this devastating disease." Next, the researchers used these oligonucleotides to selectively sequence tumor samples from patients with the disease and identify specific mutations in each patient's tumor. 
Starting with a predefined panel of oligonucleotides allowed the researchers to quickly home in on patient-specific mutations that could be used to monitor disease. "A key advantage of our approach is that we can also track many different classes of mutations, and integrate information from all of them to get a much stronger signal," Newman said. "We've also developed statistical methods to suppress the background noise in a sample. This allows us to identify even very minute quantities of cancer DNA in a blood sample." When the researchers applied the technique to patients with non-small-cell lung cancer, they found they could detect disease in all patients with stage-2 or higher disease, and in half of those with stage-1, the earliest stage of disease. Furthermore, the absolute levels of circulating tumor DNA were highly correlated with tumor volume estimated by conventional imaging techniques such as CT and PET scans. This suggests CAPP-Seq could be used to monitor tumors at a fraction of the cost of commonly used imaging studies. CAPP-Seq may also be useful as a prognostic tool, the researchers found. The technique detected small levels of circulating tumor DNA in one patient thought to have been successfully treated for the disease; that patient experienced disease recurrence and ultimately died. Conversely, scans of a patient with early stage disease showed a mass thought to represent residual disease after treatment. However, CAPP-Seq detected no circulating tumor DNA, and the patient remained disease-free for the duration of the study. Finally, CAPP-Seq was also able to identify the presence in one patient of a minor population of tumor cells with a mutation that confers resistance to a drug commonly used to treat non-small-cell lung cancer. "If we can monitor the evolution of the tumor, and see the appearance of treatment-resistant subclones, we could potentially add or switch therapies to target these cells," Diehn said. "It's also possible we could use CAPP-Seq to identify subsets of early stage patients who could benefit most from additional treatment after surgery or radiation, such as chemotherapy or immunotherapy." The researchers are now working to design clinical trials to see whether CAPP-Seq can improve patient outcomes and decrease costs. They're also aiming to extend the technique to other types of tumors. Screening healthy but at-risk populations is another goal of the researchers. "It may be possible to develop assays that could simultaneously screen for multiple cancers," Diehn said. "This would include diseases such as breast, prostate, colorectal and lung cancer, for example." "This approach could, theoretically, work for any tumor," Alizadeh said. "We expect it to be broadly applicable across cancers."
dx.doi.org/10.1038/nm.3519
Nano
Coherent manipulation of spin qubits at room temperature
Xuyang Lin et al., Room-temperature coherent optical manipulation of hole spins in solution-grown perovskite quantum dots, Nature Nanotechnology (2022). DOI: 10.1038/s41565-022-01279-x Journal information: Nature Nanotechnology
https://dx.doi.org/10.1038/s41565-022-01279-x
https://phys.org/news/2022-12-coherent-qubits-room-temperature.html
Abstract Manipulation of solid-state spin coherence is an important paradigm for quantum information processing. Current systems either operate at very low temperatures or are difficult to scale up. Developing low-cost, scalable materials whose spins can be coherently manipulated at room temperature is thus highly attractive for a sustainable future of quantum information science. Here we report ambient-condition all-optical initialization, manipulation and readout of hole spins in an ensemble of solution-grown CsPbBr 3 perovskite quantum dots with a single hole in each dot. The hole spins are initialized by sub-picosecond electron scavenging following circularly polarized femtosecond-pulse excitation. A transverse magnetic field induces spin precession, and a second off-resonance femtosecond pulse coherently rotates hole spins via strong light–matter interaction. These operations accomplish near-complete quantum-state control of hole spins at room temperature, with a coherent rotation angle approaching π radians. Main Coherent control of spins in solid-state systems holds great promise for quantum information science 1 . Compared with bulk semiconductor materials, localized systems (such as epitaxially grown quantum dots (QDs)) were considered to be more adaptable to quantum information science because of the possibility of addressing and manipulating single spins 2 , 3 , 4 . Other examples of localized systems include defect centres or dopants in solids 5 . Traditionally, radiofrequency electrical or magnetic stimuli are implemented for spin quantum-state control 6 , 7 , but such operations are limited to timescales on the order of nanoseconds at best. Femtosecond or picosecond optical pulses have recently enabled ultrafast spin manipulation at exceptionally high speeds 8 , 9 , 10 , 11 , 12 , 13 , 14 , 15 . In spite of the success of various manipulation methods, there are still many shortcomings associated with current spin-host materials from the viewpoint of practical applications. Epitaxial QDs are fabricated using expensive, high-temperature and high-vacuum apparatus. Another fundamental issue is that interlevel scattering and coupling to phonon baths can strongly damp the spin coherence. Consequently, spin manipulation of these QDs is typically accomplished at cryogenic temperatures of a few kelvin 9 , 15 . By contrast, the defect or dopant spins in solids are highly isolated and can be manipulated at room temperature 16 . However, scaled-up production of these point defects might eventually become a challenge. For the scalable and sustainable implementation of spin-based quantum information science, it is desirable to develop low-cost materials whose spins can be coherently manipulated under ambient conditions. The colloidal counterparts of QDs (also called nanocrystals) can be synthesized in large quantities in solution at low cost, yet with high precision in terms of size and shape control, and are particularly suitable for self-assembly or device integration 17 . However, spin manipulation for prototypical CdSe-based colloidal QDs has not been realized at room temperature 10 . We turn our focus to recently developed lead halide perovskite colloidal QDs 18 . Their spin–orbit coupling and electronic structure 19 have proved to be ideal for efficient spin injection by optical means 20 , 21 , 22 , 23 , 24 , 25 , and their strong light–matter interaction should also facilitate spin manipulation based on an optical Stark effect (OSE) 26 , 27 , 28 , 29 .
The challenge of spin manipulation in lead halide perovskite QDs, however, is their rapid spin relaxation at room temperature (a few picoseconds) 24 , 25 , probably limited by enhanced electron–hole exchange in these confined systems 30 . In this Article, we combine interfacial charge-transfer chemistry of lead halide perovskite QDs and femtosecond laser pulses to initialize, manipulate and read out hole spins at room temperature. We functionalize the surfaces of CsPbBr 3 QDs with anthraquinone (AQ) molecules. Following the preparation of a spin-polarized exciton in a QD using a circularly polarized photon, the AQ can extract the electron on a sub-picosecond timescale, thus quenching the spin relaxation induced by electron–hole exchange. This results in long-lived hole spin precession about an applied transverse magnetic field up to hundreds of picoseconds, during which a second off-resonance laser pulse coherently rotates the hole spin around a longitudinal axis through the OSE. Taken together, the precession and rotation accomplish successful quantum-state control of hole spins at room temperature. Sample characterization and experimental set-up Figure 1 illustrates the sample characteristics, optical set-up and principle of our experiment. All measurements were performed at room temperature. CsPbBr 3 QDs of controllable sizes were synthesized using a hot-injection method 24 ; details can be found in the Methods. Figure 1a shows the absorption spectra of two QD samples (QD1 and QD2) dispersed in hexane. Their transmission electron microscope (TEM) images in Supplementary Fig. 1 reveal monodisperse cube-shaped dots with average edge lengths of ~4.2 and 4.6 nm for QD1 and QD2, respectively. The uniform quantum confinement results in a series of exciton peaks identifiable at room temperature, with the lowest peaks at 470 and 481 nm for QD1 and QD2, respectively (Fig. 1a ). Confinement-induced energy quantization in these QDs should help to suppress phonon-induced interlevel scattering and sustain spin coherence compared with bulk samples, although their large single-dot linewidth (~50–100 meV; refs. 31 , 32 ) compared with phonon energies suggests that intralevel scattering still poses an issue at room temperature. A carboxylated derivative of AQ, which is a well-known electron acceptor 33 , 34 , was anchored onto the QD surface through the carboxyl group ( Methods ). The enhanced absorption of QD1–AQ and QD2–AQ compared with bare QD1 and QD2 in the ultraviolet can be attributed to the AQ molecules (Supplementary Fig. 2 ). On the basis of absorption spectra and extinction coefficients, we estimate that there are more than 200 AQ molecules on each QD. Fig. 1: System design and experimental set-up. a , Absorption spectra of CsPbBr 3 QD1 and QD2. The spectra of the pump and the tipping pulses (shaded pulses) are included for comparison. The arrows indicate the centre wavelengths of the two lowest exciton peaks for QD1 and QD2. b , Band-edge optical selection rules in CsPbBr 3 QDs in a quasiparticle representation. CBM, conduction-band minimum; VBM, valence band maximum. Circularly polarized σ + and σ − photons are selectively coupled to different spin transitions. c , Optical layout of the spin manipulation experiments, where t tip and t probe are the pump-tipping and pump-probe delays, respectively. BBO, barium borate crystal; OPA, optical parametric amplifier; QWP, quarter waveplate. 
d , The different experimental schemes of the pulse sequences to study spin precession (top left), the OSE (top right) and coherent spin manipulation (bottom). An approximate energy-level diagram (Supplementary Fig. 2 ) of CsPbBr 3 QDs 35 and AQ 33 predicts that electron transfer from photoexcited QDs to AQ is the only allowed charge/energy transfer pathway in the system. Electron transfer results in nearly quantitative quenching of the emissions of both QD1 and QD2 (Supplementary Fig. 2 ). Full characterization of the charge-transfer processes is provided in Supplementary Figs. 3 and 4 ; the most important conclusion is that electron transfer occurs mostly within 300 fs and the resulting QD + –AQ − charge-separated states live much longer than 10 ns. The extremely rapid electron transfer is mainly due to the large number of AQ acceptors available to each QD, statistically enhancing the electron transfer rate by two orders of magnitude 36 . The remaining hole could have long-lived spin polarization as these halide perovskites feature an inverted band structure compared with traditional semiconductors, with weak spin–orbit coupling in their valence band 37 . The band-edge optical selection rules in CsPbBr 3 QDs are depicted in Fig. 1b in a quasiparticle representation 24 , 27 . In a cubic symmetry, the valence band states (| s = 1/2, m s = ±1/2〉) are coupled to the conduction-band spin–orbit split-off states (| j = 1/2, m j = \(\mp\) 1/2〉) through circularly polarized photons ( σ − and σ + , respectively). Anisotropic exchange interactions due to QD shape anisotropy or lattice distortion may diagonalize the circularly polarized excitons into linearly polarized ones 37 , 38 , 39 , 40 . But the spin-selective OSE that we will show below indicates that the spin selection rules still largely hold for the samples studied here, at least at room temperature. Moreover, our primary focus here is single-hole states, for which exchange-induced splitting is eliminated. For the same reason, random orientation of the QDs in an ensemble should not complicate data interpretation 41 , 42 , provided that the rotational motion of these nanostructures in solution takes much longer (nanoseconds 43 ) than the spin relaxation studied here. In the optical experiment (Fig. 1c ; detailed in the Methods ), we used a femtosecond laser amplifier to pump an optical parametric amplifier to generate a tunable pump pulse. Another part of the fundamental beam was frequency-doubled to generate the rotation (tipping) pulse at 515 nm. The spectra of the pump pulses for QD1 and QD2 are plotted in Fig. 1a , which are in resonance with their respective lowest-exciton peaks, whereas the tipping pulse is below their optical gaps and hence functions as an off-resonance rotation pulse. The white-light continuum probe was generated by focusing a relatively weak 515 nm beam onto a sapphire window. The circularly polarized pulses were generated using polarizers and waveplates and were focused onto the solution sample in a quartz cuvette; a transverse magnetic field ( B z ) was applied to the sample (Voigt geometry). The pump or tipping pulses were modulated by choppers and the resulting absorption changes (Δ A ) were recorded by the probe pulse. The advantage of recording Δ A here over Faraday rotation in previous studies is the immediate accessibility of broadband spectral information under various modulation schemes, as illustrated below.
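Since the observable throughout is the pump-induced absorbance change rather than a Faraday rotation angle, the quantity plotted in the figures follows the standard transient-absorption convention. The short Python sketch below reproduces that arithmetic with hypothetical probe intensities; it illustrates the convention only and is not the authors' acquisition code.

```python
import numpy as np

# Conventional chopped pump-probe arithmetic: Delta A = -log10(I_on / I_off),
# reported in mOD (milli-optical density). Intensities here are hypothetical.
def delta_A_mOD(probe_on: np.ndarray, probe_off: np.ndarray) -> np.ndarray:
    return -1000.0 * np.log10(probe_on / probe_off)

probe_off = np.array([1.000, 1.000, 1.000])  # probe intensity, pump blocked
probe_on  = np.array([1.002, 0.997, 1.000])  # probe intensity, pump passed
print(delta_A_mOD(probe_on, probe_off))      # bleach < 0, induced absorption > 0
```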
Room-temperature hole spin precession We first investigated hole spin injection and precession with the pump pulse on and the tipping pulse off (Fig. 1d ). The pump power was controlled to be low, resulting in the average exciton number per QD 〈 N 〉 ≈ 0.025 and 0.08 for QD1 and QD2, respectively (see Methods ), thereby avoiding complications from multiexciton Auger recombination 44 . The circularly polarized pump pulse triggers sub-picosecond electron transfer and leaves behind a spin-polarized hole. In the presence of the transverse magnetic field, the hole is in a coherent superposition of eigenstates | ↑ 〉 and | ↓ 〉 quantized by the field: (| ↑ 〉 ± | ↓ 〉)/√2 (Fig. 2a ), situated on the x axis of the Bloch sphere whose z axis is aligned with B z (Fig. 2b ). Owing to the field-induced Zeeman splitting ( E z ) of | ↑ 〉 and | ↓ 〉, the coherent state rotates in the equatorial plane at the angular frequency ω = E z / ħ (that is, the Larmor precession frequency). This precession can be directly visualized by a circularly polarized probe pulse; see Fig. 2c for QD1–AQ and Supplementary Fig. 5 for QD2–AQ. As anticipated, the time-dependent Δ A spectra detected with co- and counter-polarized pump/probe configurations have exactly anticorrelated phases (compare the left and right panels of Fig. 2c ). In each configuration, the bleach at 470 nm and the induced absorption at 484 nm are contributed by hole-induced state-filling and Coulombic effects, respectively, and they have identical kinetics. The kinetics plotted at 484 nm are shown in Fig. 2d and those at 470 nm are in Supplementary Fig. 6 . On the basis of the signal sizes measured by co- and counter-configurations ( S co and S counter ), we calculated a hole spin initialization efficiency (fidelity) of Φ h = | S co − S counter |/| S co + S counter | ≈ 50% (Supplementary Fig. 7 and Supplementary Text 1 ). Fig. 2: Room-temperature hole spin precession in AQ-functionalized CsPbBr 3 QDs. a , Energy level diagram in the presence of a transverse magnetic field ( B z ). After removing the conduction-band electron using an acceptor, the valence band hole oscillates between (| ↑ 〉 ±| ↓ 〉)/√2, where | ↑ 〉 and | ↓ 〉 are eigenstates quantized by the field. b , Bloch sphere representation of hole spin precession. The z axis is aligned with B z . c , Two-dimensional pseudo-colour transient absorption spectra of QD1–AQ measured with co-polarized ( σ + / σ + ) (left) and counter-polarized ( σ − / σ + ) (right) pump/probe beams at B z = 0.65 T. Spin precession with the anticorrelated phase can be seen in the left and right panels, as indicated by the dotted grey lines. Δ A is absorbance change and mOD means milli-optical density. The pump power is 5.5 μJ cm − 2 per pulse, corresponding to 〈 N 〉 ≈ 0.025. d , Transient absorption kinetics probed at 484 nm revealing opposite phases measured with co-polarized (blue-filled circles) and counter-polarized (red-filled circles) pump/probe beams at 0.65 T. The corresponding signals measured under 0 T are shown by open squares. e , Spin precession kinetics at 484 nm extracted by subtraction of the two curves above (blue-filled circles). The corresponding kinetics measured at 0 T are also shown for comparison (purple open squares). The grey solid lines are fits.
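The two quantities just defined, the initialization fidelity Φ h and the Larmor precession frequency, follow from simple arithmetic, reproduced in the Python sketch below. The co- and counter-polarized signal sizes are hypothetical; B z is the experimental field, and the g factor is the value the text derives further below (|g h | = 1.73).

```python
import numpy as np

# Initialization fidelity from co-/counter-polarized signal sizes (hypothetical
# values chosen to give the ~50% quoted in the text).
S_co, S_counter = 1.0, 0.33
phi_h = abs(S_co - S_counter) / abs(S_co + S_counter)
print(f"initialization fidelity ~ {phi_h:.0%}")

# Larmor angular frequency from the Zeeman splitting, omega = E_z / hbar,
# with E_z = g * mu_B * B_z.
hbar = 1.054571817e-34    # J s
mu_B = 9.2740100783e-24   # J/T
g_h, B_z = 1.73, 0.65     # dimensionless, tesla
omega = g_h * mu_B * B_z / hbar                    # rad/s
print(f"precession period ~ {2 * np.pi / omega * 1e12:.0f} ps")  # ~64 ps
```

The resulting ~64 ps period is consistent with the 5.66° per ps precession frequency reported below (360°/5.66° per ps ≈ 64 ps).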
Our spin initialization method can be viewed as an active initialization as the hole is generated by dissociating the spin-polarized exciton injected by a pump photon, that is, it carries the spin directly imprinted by the photon. By contrast, spin initialization of charge-doped QDs extensively adopted in previous studies 11 , 12 , 13 , 14 , 15 is more like a passive method in which the pump photon eliminates one of the pre-doped spins by promoting them to trion states and leaves the other type of spin in a polarized state. Although both methods work well, our method does not require pre-doped samples and its interpretation is more straightforward. It resembles, but is greatly simplified from, the electric field-induced exciton ionization method reported for epitaxial QDs 14 , 45 . Figure 2e presents the hole spin precession kinetics in QD1–AQ obtained by taking the difference between co- and counter-Δ A (to remove any common background signals), at 484 nm. The kinetics can be fitted with a damped cosine function: \(S(t) \propto{\mathrm{e}}^{-t/T_2^\ast }\;\cos\left( {\omega t + \varphi } \right)\) , where T 2 * is the transverse dephasing time (44.3 ps), ω is the precession angular frequency (5.66° per ps) and φ is the initial phase close to zero (−1.47°); see Supplementary Table 1 . Using B z = 0.65 T, the Landé g factor of the hole is derived as | g h | = ℏ ω / μ B B z = 1.73, which is larger than the reported value for bulk-like CsPbBr 3 (refs. 38 , 42 ), likely because quantum confinement modifies the g factor 46 or because it is influenced by the surface-appended AQ. As Landé g factors should be slightly different for each QD, dephasing occurs in the course of spin precession, which becomes more obvious under stronger B z (that is, the Δ g mechanism for spin dephasing); see Supplementary Fig. 8 . The zero-field kinetics of QD1–AQ are also plotted in Fig. 2e for comparison, which show a relaxation time of 40.7 ps. By contrast, the zero-field kinetics of neutral QDs (not functionalized with AQ) of such sizes had a spin relaxation/dephasing time of only ~1 ps (Supplementary Fig. 9 and Supplementary Table 2 ) 24 . The marked contrast between functionalized and unfunctionalized QDs substantiates that the electron–hole exchange is the limiting factor for the exciton spin lifetime and that, by removing the electron, long-lived hole spin coherence is attainable at room temperature. Furthermore, the exchange interaction between the hole and AQ radical anion should have a minor impact, given a very weak QD-size dependence of the hole spin lifetime (Supplementary Fig. 10 ). A thorough investigation of the hole spin relaxation mechanisms is beyond the scope of the current study and will be pursued in our future work. We note that previous time-resolved Faraday rotation measurements have observed long-lived spin precession of resident carriers in nominally neutral QDs 41 , 42 , probably resulting from unintentional photocharging. However, the extent of photocharging should depend sensitively on the excitation laser power. In our experiments with minimized pump powers (see above), photocharging should be negligible. Otherwise, we would expect a long-lived component of ~40 ps on the spin relaxation kinetic traces of untreated QDs, which is not seen in Supplementary Fig. 9 .
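The damped-cosine fit just described is straightforward to reproduce. The Python sketch below fits synthetic noisy data generated with parameters close to those reported for QD1–AQ and then recovers T 2 * and the g factor; it is illustrative only and is not the authors' fitting code.

```python
import numpy as np
from scipy.optimize import curve_fit

# Fit spin precession kinetics to S(t) = A * exp(-t/T2s) * cos(w*t + phi),
# using synthetic data with parameters near those reported for QD1-AQ
# (T2* = 44.3 ps, 5.66 deg/ps, phi ~ 0).
def damped_cos(t, A, T2s, w, phi):
    return A * np.exp(-t / T2s) * np.cos(w * t + phi)

t = np.linspace(0, 150, 300)                       # delay, ps
truth = damped_cos(t, 1.0, 44.3, np.deg2rad(5.66), 0.0)
rng = np.random.default_rng(0)
data = truth + 0.02 * rng.standard_normal(t.size)  # add measurement noise

p0 = [1.0, 40.0, 0.1, 0.0]                         # initial guesses
(A, T2s, w, phi), _ = curve_fit(damped_cos, t, data, p0=p0)

# |g_h| = hbar * omega / (mu_B * B_z), with omega converted from rad/ps to rad/s.
g = 1.054571817e-34 * w * 1e12 / (9.2740100783e-24 * 0.65)
print(f"T2* = {T2s:.1f} ps, |g_h| = {g:.2f}")      # ~44 ps, ~1.73
```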
Ultrafast rotation using OSE We then studied the spin-selective OSE as a tool for coherent spin manipulation, in an experimental configuration with the pump pulse and B z off and the tipping pulse on (Fig. 1d ). This measured the OSE of neutral exciton states instead of single-hole states in the sample, but it served as a good starting point to illustrate coherent manipulation 47 , 48 . As shown in Fig. 3a , when the system was coherently driven by, for example, a σ + tipping photon, under a quasiparticle representation, the Floquet states hybridized with only |−1/2〉 h and |+1/2〉 e states but not the other two states coupled to σ − . As a result, a blueshift of the transition can be detected with a σ + photon, but not a σ − photon. Fig. 3: Ultrafast rotation using the OSE. a , Scheme of the spin-selective OSE in CsPbBr 3 QDs. Δ is the detuning of the driving photon ( σ + ) energy compared with the optical transition energy ( E g ), and δ OSE is the OSE-induced blue-shift of the transition energy. For simplicity, only the Floquet state of dress-up from |−1/2〉 h is shown, while the other state associated with dress-down from |+1/2〉 e is omitted. b , Two-dimensional pseudo-colour transient absorption spectra of QD1–AQ measured with co-polarized ( σ + / σ + ) (left) and counter-polarized ( σ − / σ + ) (right) tipping/probe beams with a tipping power density of 0.38 GW cm − 2 . A lobe-shaped spectrum near time zero is observed in the left panel, but not in the right one, as indicated by the dotted grey line. c , The time-zero OSE spectra at varying tipping power densities. d , δ OSE as a function of tipping power density (blue circles) with a linear fit (grey solid line). The horizontal error bars within the data points represent the errors in power densities propagated from the errors in the measured beam sizes. e , Bloch sphere representation of coherent rotations of the exciton state (left) and hole state (right) under the OSE-induced effective magnetic field ( B eff ) along the x axis (tipping beam). We assume that electron and hole states equally share δ OSE of the exciton state, and therefore the hole rotation angle is half of the exciton rotation angle. Indeed, we observed a lobe-shaped Δ A spectrum for QD1 with co-circularly polarized tipping-probe beams, which is almost absent when the beams have counter-circular polarizations (Fig. 3b ); see Supplementary Fig. 11 for representative spectra of QD2. The signal is present only during the tipping-probe cross-correlation time. These observations are consistent with a spin-selective, circularly polarized OSE. By contrast, OSE measurements using linearly polarized laser beams of co- and cross-polarizations generate almost identical signal sizes, which are about half of that measured with co-circularly polarized beams (Supplementary Fig. 12 ). A possible reason is that at room temperature the anisotropic exchange splitting in these perovskite QDs (<1 meV) 37 , 38 , 39 is orders of magnitude weaker than their homogeneous exciton linewidth (~50–100 meV) 31 , 32 , thus allowing us to ignore the effect of symmetry breaking in our experiments 49 . The lobe-shaped Δ A spectral intensity increases with the tipping power P tip (Fig. 3c ), which can be quantified as OSE-induced splitting ( δ OSE ) between the | σ + 〉 and | σ − 〉 excitons. The splitting grows approximately linearly with P tip and reaches 9.65 meV at 9.72 GW cm − 2 (Fig. 3d ).
From the linear slope, we derived transition dipoles ( μ ) of 21 and 24 D for QD1 and QD2, respectively; see Supplementary Text 2 for the calculation details 27 . The large μ is indicative of intrinsically strong light–matter interaction for CsPbBr 3 QDs, in contrast to a previous study on CdSe-based colloidal QDs that relied on a resonant plasmon enhancement effect to achieve sizable δ OSE (ref. 10 ). The strong light–matter interaction of CsPbBr 3 QDs should facilitate coherent spin manipulation using the OSE. To illustrate this, we can interpret the OSE that lifts the degeneracy of | σ + 〉 and | σ − 〉 excitons as an effective pseudo-magnetic field ( B eff ) 8 , 9 , 10 , 47 , 48 . The direction of this field is along the tipping beam ( x axis; left diagram in Fig. 3e ). If a coherent exciton state of (| σ + 〉 + | σ − 〉)/√2 is prepared, the OSE is able to rotate the exciton state in QD1 by an angle of up to 275° (4.8 rad) around B eff (Supplementary Fig. 13 and Supplementary Text 3 ). Note that this estimation is made under the assumption of a linear increase of B eff with δ OSE (and hence P tip ) 48 . For a hole state, we estimate the rotation angle to be up to 137.5° (2.4 rad) by assuming that electron and hole states equally share δ OSE of the exciton state 8 (right diagram in Fig. 3e ). An alternate interpretation of spin rotation using strong pulses is the stimulated Raman transition theory 12 , 15 , 50 . Room-temperature coherent hole spin manipulation With the tools of spin precession and manipulation at hand, we explored complete quantum-state control of hole spins in QD1–AQ with both the pump (chopped) and tipping (unchopped) pulses, as well as B z , on (Fig. 1d ). The pump powers were identical to those used in spin precession measurements, ensuring excitation in the single-exciton regime. The pump-tipping delay ( t tip ) is controlled to investigate the tipping effects at various positions on the Bloch sphere ( z axis aligned with B z ). Figure 4a (top) is the spin precession kinetics tipped at t tip = 17.2 ps, that is, when the state is (| ↑ 〉 + i | ↓ 〉)/√2 on the y axis of the Bloch sphere, with a tipping power of 9.72 GW cm − 2 . A large amplitude change and sign-switch from the untipped kinetics are achieved; see the corresponding Bloch sphere representation in Fig. 4b . The difference in tipping with σ + or σ − pulses is negligible (Supplementary Fig. 14 ). By contrast, when the tipping acts at t tip = 31.7 ps (that is, (| ↑ 〉 − | ↓ 〉)/√2; x axis) with the same power, a negligible amplitude change is observed (Fig. 4a , middle). Overall, tipping the states at the y axis and x axis represents the most and least obvious spin manipulations, respectively. Figure 4a (bottom) presents an intermediate case with the tipping acting at ωt + φ ≈ 4.4 rad between the y axis and x axis. Similar results for QD2−AQ are presented in Supplementary Fig. 15 . All the tipped kinetics can be well fitted using Bloch sphere analysis; see Supplementary Text 4 and Supplementary Tables 3 – 6 for details. Fig. 4: Room-temperature hole spin manipulation in AQ-functionalized CsPbBr 3 QDs. a , The untipped spin precession kinetics (grey circles) and the tipped kinetics (coloured circles) are shown for QD1−AQ, with the tipping pulse acting at different times (top, 17.2 ps; middle, 31.7 ps; bottom, 42.0 ps) indicated by black arrows. The pump power is identical to that in Fig. 2 . The tipping power density is 9.72 GW cm − 2 . The effect of the tipping pulse depends sensitively on the tipping time.
b , Corresponding Bloch sphere representations of coherent hole spin manipulation using B eff of the tipping pulse. c , Spin precession kinetics of QD1−AQ for different tipping power densities with the tipping time fixed at 17.2 ps (coloured dots). The grey solid lines are their fits. The effect of the tipping pulse also depends sensitively on the tipping power density. d , Tipping angle as a function of tipping power density for QD1−AQ (blue circles) and QD2−AQ (red triangles). The horizontal error bars represent the errors in power densities propagated from the errors in the measured beam sizes. We examined the tipping power ( P tip ) dependence with t tip fixed at 17.2 ps (Fig. 4c ), that is, tipping at the y axis. The tipping angle was calculated as θ tip = arccos( A t / A u ), where A t and A u are the tipped and untipped signal amplitudes, respectively 8 , 10 . As presented in Fig. 4d , θ tip of QD1−AQ increased sublinearly with P tip ( θ tip ∝ P tip 0.636 ) until it reached ~2π/3 rad at ~10 GW cm − 2 . The maximum θ tip was solely limited by the laser power in our set-up as no sign of sample damage was observed at the largest P tip . For QD2−AQ with a larger transition dipole and a smaller tipping detuning, θ tip also increased with P tip sublinearly ( ∝ P tip 0.634 ), but more steeply than QD1−AQ, reaching 0.56π rad at 3.58 GW cm − 2 . A further increase of P tip , however, resulted in real excitation of trion states because the absorption onset of QD2 was closer to the tipping pulse than that of QD1 (Fig. 1a ). We expect that π-rad tipping (that is, full quantum-state control) should be achievable for both QD1−AQ and QD2−AQ by tailoring the tipping photon energy and bandwidth, as well as by increasing the tipping power. The sublinear scaling of θ tip with P tip observed herein contradicts the simple assumption of B eff scaling linearly with P tip made previously 48 , but it is similar to a previous study on epitaxial QDs 15 . As explained in ref. 15 , when the Rabi energy ( ℏ Ω R ) of the interaction between the electric field of the tipping pulse and the QD transition dipole becomes comparable to the detuning energy ( Δ ), the so-called adiabatic elimination approximation breaks down and the excitation of virtual population must be considered. For example, at P tip of 9.72 GW cm − 2 , ℏ Ω R for QD1−AQ has reached 67.3 meV, which is indeed comparable with Δ ≈ 230 meV in our experiment. A four-level master-equation simulation in ref. 15 produced θ tip ∝ P tip 0.65 , which is strikingly close to our experimental results. Conclusions The complete set of initialization, manipulation and readout of hole spins in an ensemble of CsPbBr 3 perovskite QDs achieved here at room temperature is a very promising result. It establishes the feasibility of quantum information processing using low-cost, solution-grown samples under ambient conditions. Moreover, we found that the strategy of surface modification with charge acceptors could represent a general method to initialize long-lived carrier spins in perovskite QDs. As shown in Supplementary Fig. 16 , by anchoring pyrene molecules onto the surfaces of CsPbCl 3 perovskite QDs to scavenge photogenerated holes, we observed electron spin precession beyond 100 ps. Further prolonging these room-temperature spin coherence times to nanoseconds, thus enabling 10 4 –10 5 operations (for quantum error correction) using femtosecond pulses, is the next step in the roadmap.
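The tipping-angle extraction and the sublinear power-law scaling reported above can be illustrated with a few lines of Python. The amplitudes and powers below are hypothetical, chosen only to mimic the reported trend (θ tip ∝ P tip 0.636 , reaching ~2π/3 rad near 10 GW cm −2 ); this is not the authors' analysis code.

```python
import numpy as np
from scipy.optimize import curve_fit

# Extract tipping angles from tipped/untipped precession amplitudes,
# theta = arccos(A_t / A_u), then fit theta vs tipping power to a power law
# theta = c * P**k. Amplitudes and powers are hypothetical.
P = np.array([0.5, 1.0, 2.0, 4.0, 9.7])          # tipping power, GW/cm^2
A_u = 1.0                                         # untipped amplitude
A_t = np.array([0.95, 0.88, 0.72, 0.38, -0.49])   # tipped amplitudes
theta = np.arccos(A_t / A_u)                      # tipping angle, rad

power_law = lambda P, c, k: c * P**k
(c, k), _ = curve_fit(power_law, P, theta, p0=[0.4, 0.6])
print(f"theta(9.7 GW/cm^2) = {theta[-1]:.2f} rad, exponent k = {k:.2f}")
```

Note that a negative tipped amplitude (the sign-switch seen when tipping on the y axis) simply corresponds to a tipping angle beyond π/2, which arccos handles directly.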
Methods Chemicals Caesium carbonate (Cs 2 CO 3 , Sigma-Aldrich, 99.9%), lead( ii ) bromide (PbBr 2 , Alfa Aesar, 98%), zinc bromide (ZnBr 2 , Alfa Aesar, 99.9%), oleic acid (OA, Sigma-Aldrich, 90%), oleylamine (OAm, Acros Organics, 80–90%), 1-octadecene (ODE, Sigma-Aldrich, 90%), methyl acetate (Energy Chemical, 99% extra dry) and anthraquinone-2-carboxylic acid (AQ, Alfa Aesar, 98%) were used directly without any further purification. Synthesis of CsPbBr 3 QDs CsPbBr 3 QDs were synthesized following previously reported procedures 24 , 51 , 52 . The synthesis started with the preparation of the Cs oleate precursor. 0.25 g Cs 2 CO 3 , 0.8 g OA and 7 g ODE were loaded into a 50 ml three-neck flask and vacuum-dried for 1 h at 120 °C using a Schlenk line. The mixture was heated under an N 2 atmosphere to 150 °C until all the Cs 2 CO 3 powder was dissolved. The Cs oleate precursor solution was kept at 100 °C to prevent the precipitation of Cs oleate out of ODE. In another 50 ml three-neck flask, the precursor solution of Pb and Br was prepared by dissolving 225 mg PbBr 2 and 552 mg ZnBr 2 in a mixture of ODE (18 ml), OA (9 ml), and OAm (9 ml). After the precursor solution of Pb and Br was vacuum-dried for 1 h at 120 °C and the temperature reached 140 °C under an N 2 atmosphere, 1.2 ml of the Cs precursor solution was injected to initiate the reaction. The reaction was quenched after 30 s by cooling the flask in an ice bath. After the crude solution was cooled to room temperature, the product was centrifuged at 1,300 g for 20 min to remove the unreacted salts as the precipitate, and the perovskite QDs dispersed in the supernatant were collected. The QDs were precipitated by adding methyl acetate drop-wise, until the mixture just turned turbid, to avoid decomposition of the QDs. After being centrifuged at 3,820 g for 5 min, the precipitate was dried and dissolved in hexane. Adding extra methyl acetate to the second supernatant precipitated smaller QDs. This process could be repeated to obtain target-size QDs. Preparation of QD–AQ complexes The CsPbBr 3 QD–AQ complexes were prepared by adding anthraquinone-2-carboxylic acid powder into the CsPbBr 3 QD solution, followed by stirring for 30 min and filtration. On the basis of the reported extinction coefficients of QDs 53 and that measured for AQ (~5,500 M −1 cm −1 at 325 nm), there are (on average) ~280 and 220 AQ molecules for each QD1 and QD2, respectively. The solubility of AQ molecules in hexane is negligible, but in the QD–hexane solution, which contained OA and OAm ligands in excess, AQ becomes slightly soluble. Thus, the molecular numbers above are slightly overestimated. Nevertheless, previous nuclear magnetic resonance measurements for similar CsPbBr 3 QD–molecule systems 54 suggest that the majority of acceptor molecules are indeed bound to QD surfaces. Femtosecond transient absorption Femtosecond transient absorption measurements were carried out using a Pharos femtosecond laser system (Light Conversion; 1,030 nm, full-width at half-maximum 230 fs, 20 W) and an Orpheus-HP optical parametric amplifier (Light Conversion) 40 . The repetition frequency of the Pharos femtosecond laser system is tunable from 1 kHz to 100 kHz and was set at 10 kHz for the current experiments. The 1,030 nm output laser was split into two beams with an 80/20 ratio. The 80% part was used to pump the optical parametric amplifier to generate a wavelength-tunable pump beam. 
The remaining 1,030 nm beam from the optical parametric amplifier was focused into a 2-mm-thick BBO crystal to generate a 515 nm tipping beam. A notch filter with a centre wavelength of 514 ± 2 nm and a full-width at half-maximum of 17 nm was used to remove 1,030 nm photons from the tipping pulses. The 20% part was further split into two parts with a 75/25 ratio. The 75% part was attenuated with a neutral-density filter and focused into a BBO crystal to generate a 515 nm beam, which was further focused into a sapphire crystal to generate a white-light continuum used as the probe beam. The probe beam was focused with an Al parabolic reflector onto the sample. The probe beam was then collimated and focused into a fibre-coupled spectrometer with a line scan camera and detected at a frequency of 10 kHz. The intensity of the pump and tipping pulses used in the experiment was controlled by a variable neutral-density filter wheel. The delay between the pump and probe pulses was controlled by a motorized delay line, and the delay between the pump and tipping pulses was controlled by a homemade delay line. The pump or the tipping beam was chopped by a synchronized chopper at 5 kHz, and the absorbance change was calculated using adjacent probe pulses. The probe, pump and tipping beams were focused and spatially overlapped on the sample with spot sizes of 240 μm, 270 μm and 355 μm, respectively (at 1/ e 2 intensity), with the spot sizes measured using the knife-edge method. The pulse durations were ~540 fs at 1/ e 2 intensity. Circular polarizations of the pump, tipping and probe beams were controlled by quarter waveplates. The magnetic field direction was perpendicular to the laser beams and was provided by an electromagnet (EM3; Beijing Jinzhengmao Technology Co.). Using the pump beam size, the pump power and the photon energy, we calculated that in our experiments the pump photon fluences ( j p ) were 1.3 × 10 13 and 3.3 × 10 13 photons per cm 2 per pulse for QD1 and QD2, respectively. The size-dependent absorption cross-sections ( σ a ) of CsPbBr 3 QDs were reported in ref. 55 : ~3.8 × 10 −15 and 5.0 × 10 −15 cm 2 for QD1 and QD2 at their respective pump wavelengths. The average number of excitons (〈 N 〉) was estimated as 〈 N 〉 = σ a × j p , which is 0.05 and 0.16 for QD1 and QD2, respectively. However, it is important to note that we used circularly polarized photons to address only one of the twofold-degenerate quasiparticle transitions. Therefore, 〈 N 〉 should be further reduced by half, that is, 〈 N 〉 ≈ 0.025 and 0.08 for QD1 and QD2, respectively. Data availability All data are available in the main text or Supplementary Information and can be obtained upon request from the corresponding author. The data are also available via Figshare at . Source data are provided with this paper.
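As a quick cross-check of the exciton-occupancy estimate above, the short sketch below reproduces 〈 N 〉 = σ a × j p from the quoted cross-sections and fluences, including the factor-of-two reduction for circular polarization (a minimal sketch; only the values quoted in the text are used).

```python
# Reproduce the average exciton number <N> = sigma_a * j_p quoted in the text.
sigma_a = {"QD1": 3.8e-15, "QD2": 5.0e-15}  # absorption cross-section, cm^2
j_p     = {"QD1": 1.3e13,  "QD2": 3.3e13}   # pump fluence, photons cm^-2 pulse^-1

for qd in ("QD1", "QD2"):
    n_avg = sigma_a[qd] * j_p[qd]  # ~0.05 (QD1) and ~0.16 (QD2)
    # Circularly polarized photons address only one of the two degenerate
    # transitions, so the effective occupancy is halved.
    print(f"{qd}: <N> = {n_avg:.3f}, effective <N> = {n_avg / 2:.3f}")
```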
A research group led by Prof. Wu Kaifeng from the Dalian Institute of Chemical Physics (DICP), Chinese Academy of Sciences recently reported the successful initialization, coherent quantum-state control, and readout of spins at room temperature using solution-grown quantum dots, which represents an important advance in quantum information science. The study was published in Nature Nanotechnology on Dec 19th. Quantum information science is concerned with the manipulation of the quantum version of information bits (called qubits). When people talk about materials for quantum information processing, they usually think of those manufactured using the most cutting-edge technologies and operating at very cold temperatures (below a few Kelvin), not the "warm and messy" materials synthesized in solution by chemists. Recent years have witnessed the discovery of isolated defects in solid-state materials (such as NV centers) that have made room-temperature spin-qubit manipulation possible, but scaled-up production of these "point defects" will eventually become a challenge. Colloidal quantum dots (QDs), which are tiny semiconductor nanoparticles made in solution, could be a game changer. They can be synthesized in large quantities in solution at low cost, yet with high finesse in size and shape control. Further, they are usually strongly quantum-confined, so their carriers are well isolated from the phonon bath, which could enable long-lived spin coherence at room temperature. But room-temperature coherent manipulation of spins in colloidal QDs had never been reported, because a QD system whose spins can be simultaneously initialized, rotated, and read out at room temperature remained to be demonstrated. Here the authors show that solution-grown CsPbBr3 perovskite QDs can accomplish this challenging goal. Polarized hole spins are obtained by sub-picosecond electron scavenging to surface-anchored molecular acceptors following circularly polarized femtosecond pulse excitation. A transverse magnetic field induces coherent Larmor precession of the hole spins. A second off-resonance femtosecond pulse coherently rotates the spins through the optical Stark effect, which is enabled by the exceptionally strong light-matter interaction of the perovskite QDs. These results represent full quantum-state control of single-hole spins at room temperature, holding great promise for a scalable and sustainable future of spin-based quantum information processing. "Our success here is enabled by a very rare combination of knowledge in materials, chemistry and physics," said Prof. Wu. "We fabricated strongly- and uniformly-confined CsPbBr3 QDs as the unique system for the study, and identified appropriate surface-ligand molecules to rapidly extract the electrons via charge-transfer chemistry for hole-spin initialization at room temperature. Meanwhile, we were able to utilize the strong light-matter interaction of these QDs to perform coherent spin manipulation."
10.1038/s41565-022-01279-x
Physics
Fibre-optic transmission of 4000 km made possible by ultra-low-noise optical amplifiers
Samuel L.I. Olsson et al. Long-haul optical transmission link using low-noise phase-sensitive amplifiers, Nature Communications (2018). DOI: 10.1038/s41467-018-04956-5 Journal information: Nature Communications
http://dx.doi.org/10.1038/s41467-018-04956-5
https://phys.org/news/2018-07-fibre-optic-transmission-km-ultra-low-noise-optical.html
Abstract The capacity and reach of long-haul fiber optical communication systems are limited by in-line amplifier noise and fiber nonlinearities. Phase-sensitive amplifiers add 6 dB less noise than conventional phase-insensitive amplifiers, such as erbium-doped fiber amplifiers, and they can provide nonlinearity mitigation after each span. Realizing a long-haul transmission link with in-line phase-sensitive amplifiers providing simultaneous low-noise amplification and nonlinearity mitigation is challenging, and to date no such transmission link has been demonstrated. Here, we demonstrate a multi-channel-compatible and modulation-format-independent long-haul transmission link with in-line phase-sensitive amplifiers. Compared to a link amplified by conventional erbium-doped fiber amplifiers, we demonstrate a reach improvement of 5.6 times at optimal launch powers, with the phase-sensitively amplified link operating at a total accumulated nonlinear phase shift of 6.2 rad. The phase-sensitively amplified link transmits two data-carrying waves, thus occupying twice the bandwidth and propagating twice the total power compared to the phase-insensitively amplified link. Introduction The achievable transmission performance of fiber optical transmission systems is limited by amplifier noise and fiber nonlinearities degrading the signal 1 , 2 , 3 . Phase-sensitive amplifiers (PSAs) can provide low-noise amplification because, at high gains, their noise figure (NF) is 3 dB lower than that of even ideal phase-insensitive amplifiers (PIAs) 4 , 5 . Using an alternative NF definition in which only the signal power is accounted for (idler power is neglected), the NF difference between PSAs and PIAs increases to 6 dB 5 . PSAs are also capable of all-optical mitigation of nonlinear transmission distortions 6 , 7 , 8 . By exploiting their low-noise amplification and nonlinearity mitigation capabilities, PSAs can potentially improve the transmission performance of fiber optical transmission systems 9 , 10 . PSAs can be realized, for example, using parametric gain in χ (2) nonlinear materials through three-wave mixing (TWM) 11 , or in χ (3) nonlinear materials through four-wave mixing (FWM) 12 . Typically, two weak waves, called signal and idler, are amplified by one or two high-power waves, called pumps. Depending on how the frequencies of the interacting waves are chosen, different amplification schemes are possible. Two common schemes are the one-mode PSA, in which signal and idler are frequency degenerate, and the two-mode PSA, in which signal and idler are frequency non-degenerate. In one-mode PSAs, one quadrature is amplified while the other quadrature is deamplified, squeezing the signal phase along the direction of the amplified quadrature 4 . If the PSA is operated in the unsaturated regime, phase noise in the squeezed quadrature will be converted into amplitude noise in the amplified quadrature. If, however, the PSA is operated in saturation, both phase and amplitude noise can be suppressed, thus making this scheme suitable for simultaneous phase and amplitude regeneration of binary phase-shift keying (BPSK) signals 13 , 14 , 15 . Using this scheme, a twofold reach extension, originating from phase and amplitude regeneration rather than low-noise amplification, has been demonstrated 16 , 17 . Two severe drawbacks of the one-mode PSA scheme are that it is inherently single-channel and that it is only suitable for BPSK signals. 
Using other PSA-based schemes, regeneration of more advanced modulation formats, such as quadrature phase-shift keying (QPSK) 18 , 19 and star 8-quadrature amplitude modulation (QAM) 20 , has been demonstrated, as has simultaneous regeneration of more than one channel 21 , 22 . Another way to benefit from PSAs is to utilize their capabilities of low-noise amplification and nonlinearity mitigation. This can be done using two-mode PSAs implemented with the so-called copier-PSA scheme 23 . Using the copier-PSA scheme, all signal phase states experience low-noise amplification, thus providing modulation-format transparency 24 . Moreover, two-mode PSAs are multi-channel compatible and can be used for amplification of wavelength division multiplexing (WDM) signals 25 . In ref. 26 , it was shown that two-mode PSAs can potentially be combined with multi-channel amplitude regenerators for multi-channel regeneration of advanced modulation formats. For details on the requirements regarding the tracking and alignment of polarization in PSA links, see ref. 27 . Mitigation of fiber nonlinearities to extend transmission reach is currently a vibrant research area 28 , and many different schemes have been proposed, e.g., phase-conjugated twin waves 29 or conjugate data repetition 30 , which are based on the idea that the signal and the conjugate signal are co-propagated through the same medium and coherently superposed to suppress the nonlinearity-induced phase distortion. Cancellation of nonlinear distortion by digital signal processing 31 in the receiver 32 or transmitter 33 has also been demonstrated, as has optical phase conjugation (OPC) 34 . Typically, a doubling or at most a tripling of the system reach has been reported by these schemes, at the expense of spectral efficiency and/or complexity. A way to further enhance performance could be to distribute the compensation, which is attractive for all-optical schemes such as PSAs or OPC; this was recently demonstrated for OPCs 35 , 36 , although relatively moderate Q-factor improvements over single OPCs were reported. Here we present experimental evidence that in-line PSAs can provide an unprecedented nonlinear tolerance and transmission reach extension 9 , 10 . In this demonstration of a recirculating loop (i.e., long-haul) transmission experiment with in-line PSAs, we benefit from their inherent simultaneous low-noise amplification and nonlinearity mitigation. This scheme, which is both modulation-format-independent and multi-channel compatible 5 , is shown experimentally to provide a 5.6 times reach improvement compared to a transmission link using conventional in-line erbium-doped fiber amplifiers (EDFAs) when transmitting a 10 GBd QPSK signal. The accumulated nonlinear phase shift in the PSA link is 6.2 rad, which we believe is the highest nonlinear tolerance ever reported in a lumped-amplifier system. These results demonstrate not only the feasibility of realizing long-haul transmission links using low-noise PSAs but also a significant improvement over conventional approaches. The concept of amplification using cascaded PSAs might also find applications in the field of quantum information science, where generation and processing of quantum states are of interest. Results Basic principle The amplifier implementation we consider in this work is the degenerate-pump two-mode PSA. It involves three waves: an intense pump surrounded by a signal and an idler. 
The input–output relation for the signal and idler is given by $$\left( {\begin{array}{*{20}{c}} {u_{\mathrm{s}}} \\ {u_{\mathrm{i}}^ \ast } \end{array}} \right)_{{\mathrm{out}}} = \left( {\begin{array}{*{20}{c}} \mu & \nu \\ {\nu ^ \ast } & {\mu ^ \ast } \end{array}} \right)\left( {\begin{array}{*{20}{c}} {u_{\mathrm{s}} + n_{\mathrm{s}}} \\ {u_{\mathrm{i}}^ \ast + n_{\mathrm{i}}^ \ast } \end{array}} \right)_{{\mathrm{in}}}$$ (1) where u s,i are the signal and idler wave amplitudes, n s,i represents vacuum noise present at the input, and the amplifier is characterized via the scalar coefficients μ and ν , where \(\left| \mu \right|^2 - \left| \nu \right|^2 = 1\) ensures photon-number conservation, i.e., two pump photons are converted into one signal and one idler photon. If the input idler wave is absent ( u i,in = 0), then the output signal is amplified phase-insensitively with gain \(G_{{\mathrm{PIA}}} = \left| \mu \right|^2 \approx \left| \nu \right|^2\) , where the approximate equality holds in the limit of high gain. In our experiment, we employ a sequence of these amplifiers with intermediate fiber losses that are compensated for by the provided gain. The first amplifier has u i,in = 0, so it copies the conjugate of the incoming signal to the output idler wave. The generated signal-idler pair then propagates through all subsequent amplifiers, achieving a phase-sensitive gain G PSA of approximately 4 G PIA due to coherent addition of the signal and the idler conjugate. In contrast to the signal, for which the gain is 6 dB higher in phase-sensitive (PS) mode than in phase-insensitive (PI) mode, the gain for the vacuum noise is always 2 G PIA , since the noise is uncorrelated between signal and idler and will thus not add coherently. By comparing PI- and PS-operation at the same signal gain, the difference between PIA and PSA amplification can be stated as follows: a PSA will add 6 dB less noise than a PIA. The first 3 dB of this improvement comes from the phase-sensitive nature of the gain, which releases the PSA from being constrained by the 3 dB quantum limit on PIA NF 4 , at the expense of using half of the available bandwidth for propagating the idler 37 . This NF improvement has been characterized in detail in refs. 38 , 39 . The second 3 dB of the improvement comes from the fact that the data in the two-mode-PSA-amplified link are carried by two beams (signal and idler) of equal powers, which makes the effective total data-carrying power in the PSA link twice that of the PIA link 37 . Described above is the so-called copier-PSA scheme; its linear link properties were analyzed in refs. 40 , 41 and experimentally verified for a single-span link in refs. 5 , 42 . A conceptual schematic of a multi-span implementation of the copier-PSA scheme is shown in Fig. 1 . Fig. 1 Long-haul PSA-amplified link. Conceptual schematic of a long-haul PSA-amplified link implemented using the copier-PSA scheme Full size image Experimental set-up The experimental set-up used to demonstrate long-haul transmission with in-line PSAs is illustrated in Fig. 2 . A signal modulated with 10 GBd QPSK data was launched into a recirculating loop. During the first round trip ( N = 1), only one wave, the signal, was present at the input of the polarization tuning and pump recovery stage, and a pump wave was generated using a laser. After combining the signal with the pump using a WDM coupler, the two waves were launched into a fiber optical parametric amplifier (FOPA) where a conjugated copy of the signal, the idler, was generated. 
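The gain relations implied by Eq. (1) can be checked numerically. The sketch below takes μ and ν real with an assumed 20 dB phase-insensitive gain and verifies that a conjugate idler yields G PSA ≈ 4 G PIA while uncorrelated vacuum noise sees only ≈ 2 G PIA ; it is an illustration of the algebra, not a model of the experiment.

```python
import numpy as np

G_PIA = 100.0              # |mu|^2: assumed 20 dB phase-insensitive power gain
mu = np.sqrt(G_PIA)
nu = np.sqrt(G_PIA - 1.0)  # photon-number conservation: |mu|^2 - |nu|^2 = 1

u_s = 1.0 + 0.5j           # arbitrary input signal amplitude
u_i = np.conj(u_s)         # idler = conjugate copy produced by the copier

# Eq. (1), signal row: u_s_out = mu * u_s_in + nu * conj(u_i_in)
u_s_out = mu * u_s + nu * np.conj(u_i)
G_PSA = abs(u_s_out) ** 2 / abs(u_s) ** 2
print(f"G_PSA = {G_PSA:.0f}  (4*G_PIA = {4 * G_PIA:.0f})")

# Vacuum noise is uncorrelated between the signal and idler inputs, so its
# power gain is |mu|^2 + |nu|^2 ~ 2*G_PIA: 3 dB of the PSA's 6 dB advantage.
G_noise = mu ** 2 + nu ** 2
print(f"noise gain = {G_noise:.0f}  (2*G_PIA = {2 * G_PIA:.0f}), "
      f"SNR advantage = {10 * np.log10(G_PSA / G_noise):.2f} dB")
```

The remaining 3 dB of the advantage comes from the idler carrying half of the data power, as explained in the text.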
During the first round trip, the FOPA thus operated as a copier. The three waves were then passed through a power tuning stage where an optical processor (OP) was used to filter the signal and idler as well as adjust their powers. Following the OP, the signal and idler were passed through an EDFA followed by a variable optical attenuator (VOA) for launch power tuning. The pump was passed through a separate path and was attenuated using a VOA. The transmission link consisted of two tunable fiber Bragg-grating dispersion compensating modules (DCMs) and an 80 km standard single-mode fiber (SSMF) transmission span. The combined loss of the SSMF span and the second DCM was 21.5 dB. Fig. 2 Experimental set-up. Recirculating loop set-up used to demonstrate long-haul transmission with in-line PSA-based amplification. Option with in-line EDFA- and PIA-based amplification used for benchmarking is also shown. Colored arrows indicating waves represent PSA case for the second and the following round trips. IQ modulator in phase/quadrature modulator, AOM acousto-optic modulator, VDL variable delay line, PC polarization controller, HNLF highly nonlinear fiber, PLL phase-locked loop, DSP digital signal processing, LO local oscillator Full size image During the second and the following round trips ( N ≥ 2), the pump was regenerated in the polarization tuning and pump recovery stage by injection-locking it to the pump laser and subsequently amplifying it with an EDFA. The process of self-injection-locking enabled stable injection-locking over many circulations. The signal and idler were split into two separate paths and the delay between them introduced by the SSMF was compensated for. A phase-locked loop based on a piezoelectric transducer (PZT) fiber stretcher was used to compensate for any dynamic phase drifts between the arms introduced by temperature and acoustic influence. After the polarization tuning and pump recovery stage, the waves were launched into the FOPA, which now operated as a PSA with 22 dB net gain providing low-noise amplification and nonlinearity mitigation. To compensate for the fact that the copier operated as a PIA, adding 6 dB more noise to the signal than the following PSAs, the signal power launched into the loop was 6 dB higher than the power present at the point of the loop input after the first round trip. The received power was measured at point P rec and the loss from point P in to point P rec was 39 dB. In each round trip, part of the light was coupled out of the recirculating loop and detected using a coherent receiver. A more detailed description of the experimental set-up is presented in the Methods section. Constellation diagrams To benchmark the performance of the PSA-amplified link, measurements were also performed on an EDFA-amplified link and a FOPA-PIA-amplified link. The FOPA-PIA-amplified link was obtained by blocking the idler in the OP and fully attenuating the pump in the power tuning stage. The EDFA link was obtained by replacing the FOPA with an EDFA and turning off the pump laser as well as the pump booster EDFA. The three cases were compared both by studying constellation diagrams and by measuring bit error rate (BER). Figure 3 shows constellation diagrams at various launch powers (measured at point P in ) for the three investigated amplification schemes. 
The constellations in columns one, two, and four correspond to a measured BER of 10 −3 while the third column shows the constellations for the FOPA-PSA case at the closest available number of round trips to the EDFA case. The variable N indicates the number of round trips. The accumulated nonlinear phase shift was calculated using ϕ NL = γP in L eff , where γ is the nonlinear coefficient, P in is the launch power, and L eff is the effective length defined as L eff = [1 − exp(− αL )]/ α with α being the fiber attenuation and L the link length, and is shown in parentheses above each constellation. When calculating ϕ NL we used γ = 1.5 W −1 km −1 , α = 0.2 dB km −1 , and L = 80 km. Fig. 3 Signal constellation diagrams. Constellation diagrams at various launch powers using EDFA-based amplification, FOPA-PIA-based amplification, and FOPA-PSA-based amplification. Constellations in columns one, two, and four correspond to a BER of 10 −3 . The variable N indicates the number of round trips, where each round trip includes an 80 km dispersion-compensated SSMF span, and ϕ NL denotes the accumulated nonlinear phase shift Full size image It is clear from Fig. 3 that EDFA- and FOPA-PIA-based amplification provide similar performance from −2 dBm launch power, where the reach is limited by amplifier noise, up to 8 dBm launch power, where the reach is limited by nonlinear distortions. We can also see that PSA-based amplification significantly reduces the accumulated amplifier noise as well as the impact of fiber nonlinearities, thus allowing for improved reach at all launch powers. Bit error rate measurements Measured BER versus number of round trips and transmission distance at various launch powers is presented in Fig. 4a . It can be seen that the EDFA case and the FOPA-PIA case are close to indistinguishable, while the PSA case shows significantly improved reach. The reach improvement as well as the maximum number of round trips (for a BER of 10 −3 ) versus launch power is presented in Fig. 4b . From the figure we note that the optimal launch power for the PSA case is 6 dBm, while the optimal launch power in the EDFA and FOPA-PIA cases is 4 dBm. At 6 dBm launch power, the reach improvement using PSA-based amplification is about six times, while if the comparison is made at optimal launch powers, the reach improvement is 5.6 times. Fig. 4 BER characterization. a BER curves at various launch powers using in-line EDFA- (filled circles), FOPA-PIA- (open circles), and FOPA-PSA-based (filled squares) amplification. b Number of round trips giving a BER of 10 −3 versus launch power and reach improvement comparing EDFA- (filled circles) and FOPA-PIA-based (open circles) amplification to PSA-based (filled squares) amplification Full size image Discussion Our demonstration of long-haul PSA-amplified transmission was performed using a signal with 10 GBd QPSK data. However, in principle any modulation format and symbol rate can be used with the copier-PSA scheme. Increasing the symbol rate will make it more challenging to achieve good enough temporal alignment of the waves, but by using high-precision delay lines this should be possible. The copier-PSA performance in the linear regime is not expected to depend on either symbol rate or modulation format. However, the ability to mitigate nonlinearities might depend on both symbol rate and modulation format and will require further investigation. 
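As a sanity check on the numbers above, the following minimal sketch evaluates ϕ NL = γP in L eff with the stated parameters; the round-trip count of 49 is an assumed value chosen to reproduce the ~6.2 rad total reported for the PSA link.

```python
import numpy as np

gamma    = 1.5    # nonlinear coefficient, W^-1 km^-1 (from the text)
alpha_db = 0.2    # fiber attenuation, dB km^-1
L        = 80.0   # span length, km

alpha = alpha_db / (10 * np.log10(np.e))  # ~0.046 km^-1 (linear units)
L_eff = (1 - np.exp(-alpha * L)) / alpha  # effective length, ~21 km

P_in = 1e-3 * 10 ** (6.0 / 10)            # 6 dBm launch power in watts
phi_span = gamma * P_in * L_eff           # nonlinear phase shift per span, rad

N = 49                                    # assumed number of round trips
print(f"L_eff = {L_eff:.1f} km, phi_NL per span = {phi_span:.3f} rad, "
      f"total after {N} spans = {N * phi_span:.1f} rad")  # ~6.2 rad
```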
Approaches that have been suggested as means to improve nonlinearity mitigation at, e.g., higher symbol rates in PSA-amplified links are the addition of distributed Raman amplification 43 or multi-span dispersion map optimization 44 . In our demonstration, we transmitted a single channel, but with the copier-PSA scheme multi-channel transmission is possible, although with increased complexity due to the required polarization, delay, and phase alignment of each channel. Using a recirculating loop to demonstrate long-haul PSA-amplified transmission is a good approach for demonstrating the possible performance improvements that can be gained using in-line PSAs. However, using a recirculating loop simplifies certain aspects of the implementation, and in order to realize a real transmission link with in-line PSAs a few challenges remain to be solved. One such challenge is the pump recovery. The injection-locking-based pump recovery is sensitive to frequency differences between the incoming pump wave and the free-running pump laser frequency. In our recirculating loop set-up, this frequency difference could be kept small since it was dictated by the pump laser wavelength drift during the measurement time, which was 24 ms for 60 round trips. In a real transmission link, a feedback system would be required to tune the frequency of the slave lasers to match the frequency of the incoming pump wave. Another aspect that will be more challenging in a real transmission link is the polarization alignment of the involved waves. In our recirculating loop set-up, this alignment could be done manually. However, in a real transmission link, polarization tracking would be required to align the waves and to keep them aligned over time. We have demonstrated the possibilities and potential of using cascaded PSAs in the context of high-speed optical communications. However, our results might also find applications in quantum informatics and related fields where generation and processing of quantum states are of interest. Methods Recirculating loop experiment A continuous wave (CW) laser (Keysight N7711A) at 1550.104 nm with <100 kHz linewidth, −145 dB Hz −1 relative intensity noise (RIN), and 30 mW output power was modulated with 10 GBd QPSK data (pseudorandom bit sequence (PRBS) of length 2 15 –1) using a LiNbO 3 -based single-polarization I/Q modulator. The electrical signals driving the I/Q modulator were generated using a bit pattern generator (SHF 12103A) followed by electrical amplifiers (SHF 804 TL). After passing through a VOA (VOA1) for loop launch-power tuning, the signal was passed into a recirculating loop that was controlled using a loop controller (Brimrose AMM-55-8-70-C-RLS(nfs)-RM) containing two acousto-optic modulators (AOMs). During the first round trip, only the signal was present at the input of the polarization tuning and pump recovery stage, and a CW pump wave at 1554.096 nm was generated using a distributed feedback (DFB) laser without isolator (EM4 AA1406-192900-100) with <1 MHz linewidth, −150 dB Hz −1 RIN, and 100 mW output power. The pump wave was subsequently amplified using a 3 W fanless high-power EDFA (IPG EUA-3K-C-CHM) and attenuated to obtain 1 W at the FOPA input. The signal was combined with the pump before the FOPA using a WDM coupler, and the signal and pump states of polarization (SOPs) were aligned using PC1 and PC2 for maximum FOPA gain. With only signal and pump present at the FOPA input, the FOPA operated as a PIA with 16 dB net gain. 
The FOPA consisted of four cascaded spools of strained highly nonlinear fiber (HNLF) (OFS HNLF-SPINE with zero-dispersion wavelength (ZDW) at 1543 nm) of lengths 101, 124, 156, and 205 m, with in-line isolators placed between the individual spools for stimulated Brillouin scattering (SBS) suppression 45 . During the first round trip, the FOPA generated a conjugated copy of the signal, frequency- and phase-locked to the signal and pump, at the idler wavelength through FWM. After the in-line amplifier, the waves were led to a power tuning stage where the high-power pump was separated from the signal and idler waves. The signal and idler were amplified using an EDFA (Nortel) and then passed into an OP (Finisar WaveShaper 1000S) for filtering (0.4 nm bandpass filters) and power tuning such that they were balanced in power at point P in , just before the transmission fiber. The two waves were then led into a custom-built EDFA with 3.1 dB NF and 25 dBm output power followed by VOA2 for launch power tuning. PC4 was tuned so that the polarization-dependent loss (PDL) experienced by the signal over the transmission stage was minimized. The pump was attenuated using VOA3 to obtain −5 dBm at point P in , and PC5 was tuned such that the SOP of the pump launched into the pump laser in the second round trip was aligned with the free-running pump laser SOP. The transmission link consisted of two 100 GHz channel grid tunable fiber Bragg-grating DCMs (TeraXion TDCMX-C100-(−80 km/+5 km)), DCM1 for dispersion pre-compensation and DCM2 for post-compensation, and an 80 km SSMF transmission span. The dispersion map was experimentally optimized for longest reach in a strongly nonlinear regime (6 dBm launch power). In the PSA case, the optimum dispersion map was 289 ps nm −1 pre-compensation and 986 ps nm −1 post-compensation. In both the EDFA and the FOPA-PIA cases, the optimum dispersion map was 68 ps nm −1 pre-compensation and 1207 ps nm −1 post-compensation. The amount of per-span residual dispersion was experimentally optimized for longest reach in a nonlinear transmission regime for the PSA case and was <35 ps nm −1 . This amount of residual dispersion had a negligible impact on the performance in the linear transmission regime, both for the PSA and PIA cases, as well as for the PIA cases in the nonlinear transmission regime, due to the low symbol rate and few round trips. The launch power was measured as the signal power at point P in . PC6 was adjusted so that the signal SOP at the beginning of the second round trip was the same as the SOP of the signal launched into the transmission loop. The round trip time was 0.4 ms. During the second and the following round trips, the signal, idler, and pump were all present at the input of the polarization tuning and pump recovery stage. The pump was separated from the signal and idler and injection-locked to the pump laser. This process of self-injection-locking enabled stable locking over many circulations. The signal and idler were also separated, and the delay between them introduced by the transmission fiber was compensated for using a variable delay line (VDL) with a 1 dB insertion loss. The idler was attenuated such that the signal and idler had equal power going into the FOPA, and their SOPs were aligned using PC1 and PC3 so that the FOPA gain was maximized. A phase-locked loop (PLL) based on a PZT fiber stretcher was used to compensate for any dynamic phase drifts between the arms introduced by temperature and acoustic influence. The FOPA-PSA net gain was 22 dB. 
For simplicity, the PSA-amplified link was implemented such that the same FOPA was used for both the copier and the PSA. As a consequence of this, the first and last in-line amplifiers in the PSA-amplified link were PIAs. In order to compensate for the extra signal degradation caused by the first in-line PIA, the signal power launched into the loop was 6 dB higher than the power present at the point of the loop input after the first round trip. The absence of nonlinearity mitigation in the last in-line amplifier in the link was not compensated for. Due to the placement of the loop output coupler, the loss of the final span was lower than that of the other spans by ~2 dB. For the EDFA case and the FOPA-PIA case, this resulted in slightly better performance compared to what would have been achieved in a link in which all spans had the same loss. For the PSA case, the performance was still worse than it would have been if the last amplifier in the link had been a PSA. Note, however, that the impact of having slightly lower loss in the last span is negligible after many circulations. In each round trip, part of the light was coupled out of the recirculating loop and amplified by an EDFA (JDS Uniphase OAB optical amplifier) followed by an optical filter (OTF-30M-12S2) with a 3 dB bandwidth of 0.9 nm centered at the signal wavelength. The amplified and filtered signal was then coupled into a coherent receiver (NeoPhotonics Integrated PBS ICR) along with a local oscillator wave generated by a CW laser (IDPhotonics CBDX1-1-C-H01-FA) at the signal wavelength with <100 kHz linewidth, −145 dB Hz −1 RIN, and 40 mW output power. The signal was sampled at 50 GS s −1 using a real-time sampling oscilloscope (Tektronix DPO73304SX) with 33 GHz analog bandwidth. For each round trip, 2.5 × 10 6 samples (corresponding to 50 μs at 50 GS s −1 ) were taken in the middle of the 0.4 ms long burst and then post-processed off-line using conventional DD-LMS-based digital signal processing (DSP). The back-to-back signal-to-noise ratio (SNR) penalty of the transmitter and receiver was 0.5 dB at a BER of 10 −3 . For the EDFA case, the pump laser was turned off and the FOPA was substituted with a custom-built EDFA with 3.1 dB NF and 25 dBm output power followed by a VOA tuned such that the net gain of the EDFA and VOA was 22 dB. For the FOPA-PIA case, the idler was blocked in the OP and the pump was fully attenuated before the transmission stage using VOA3. The in-line FOPA-PIA net gain was 16 dB. Data availability The data that support the findings of this study are available from the corresponding author upon reasonable request. Change history 31 July 2018 The original version of this Article incorrectly listed an affiliation of Samuel L.I. Olsson as ‘Thomas Johann Seebeck Department of Electronics, Tallinn University of Technology, Tallinn 19086, Estonia’, instead of the correct ‘Present address: Nokia Bell Labs, 791 Holmdel Road, Holmdel, NJ 07733, USA’. Similarly, Egon Astra had an incorrect affiliation of ‘Present address: Nokia Bell Labs, 791 Holmdel Road, Holmdel, NJ 07733, USA’, instead of the correct ‘Thomas Johann Seebeck Department of Electronics, Tallinn University of Technology, Tallinn 19086, Estonia’. This has been corrected in both the PDF and HTML versions of the Article.
Researchers from Chalmers University of Technology, Sweden, and Tallinn University of Technology, Estonia, have demonstrated a 4000 kilometre fibre-optical transmission link using ultra low-noise, phase-sensitive optical amplifiers. This is a reach improvement of almost six times what is possible when using conventional optical amplifiers. The results are published in Nature Communications. Video streaming, cloud storage and other online services have created an insatiable demand for higher transmission capacity. To meet this demand, new technologies capable of significant improvements over existing solutions are being explored worldwide. The reach and capacity in today's fibre optical transmission links are both limited by the accumulation of noise, originating from optical amplifiers in the link, and by the signal distortion from nonlinear effects in the transmission fibre. In this ground-breaking demonstration, the researchers showed that the use of phase-sensitive amplifiers can significantly, and simultaneously, reduce the impact of both of these effects. "While there remain several engineering challenges before these results can be implemented commercially, the results show, for the first time, in a very clear way, the great benefits of using these amplifiers in optical communication," says Professor Peter Andrekson, who leads the research on optical communication at Chalmers University of Technology. The amplifiers can provide a very significant reach improvement over conventional approaches, and could potentially improve the performance of future fibre-optical communication systems. "Such amplifiers may also find applications in quantum informatics and related fields, where generation and processing of quantum states are of interest, as well as in spectroscopy or any other application which could benefit from ultra-low-noise amplification," says Professor Peter Andrekson.
10.1038/s41467-018-04956-5
Earth
Microbes deep beneath seafloor survive on byproducts of radioactive process
Justine F. Sauvage et al, The contribution of water radiolysis to marine sedimentary life, Nature Communications (2021). DOI: 10.1038/s41467-021-21218-z Journal information: Nature Communications
http://dx.doi.org/10.1038/s41467-021-21218-z
https://phys.org/news/2021-02-microbes-deep-beneath-seafloor-survive.html
Abstract Water radiolysis continuously produces H 2 and oxidized chemicals in wet sediment and rock. Radiolytic H 2 has been identified as the primary electron donor (food) for microorganisms in continental aquifers kilometers below Earth’s surface. Radiolytic products may also be significant for sustaining life in subseafloor sediment and subsurface environments of other planets. However, the extent to which most subsurface ecosystems rely on radiolytic products has been poorly constrained, due to incomplete understanding of radiolytic chemical yields in natural environments. Here we show that all common marine sediment types catalyze radiolytic H 2 production, amplifying yields by up to 27× relative to pure water. In electron equivalents, the global rate of radiolytic H 2 production in marine sediment appears to be 1–2% of the global organic flux to the seafloor. However, most organic matter is consumed at or near the seafloor, whereas radiolytic H 2 is produced at all sediment depths. Comparison of radiolytic H 2 consumption rates to organic oxidation rates suggests that water radiolysis is the principal source of biologically accessible energy for microbial communities in marine sediment older than a few million years. Where water permeates similarly catalytic material on other worlds, life may also be sustained by water radiolysis. Introduction Radionuclides are ubiquitous in sediment and rock, where their decay leads to hydrogen (H 2 ) and oxidant production via radiolysis of water 1 , 2 , 3 , 4 . Radiolytic yields in pure water are well constrained 5 , 6 and some monominerals (pyrite, various oxides, mordenite, calcite) are known to amplify water-radiolytic H 2 yields when irradiated by γ rays 7 , 8 , 9 . Similarly, other oxides and calcite enhance water-radiolytic H 2 production following exposure to α particles 9 , 10 , 11 . The effect of mineralogically complex natural materials on H 2 yields has not previously been explored. Hydrogen (H 2 ) and oxidants generated by natural radiolysis of water provide a continuous source of chemical energy for subsurface ecosystems 2 , 3 , 4 , 12 , 13 . Microbial life persists deep beneath Earth’s surface 14 , 15 and constitutes a significant fraction of Earth’s total biomass 16 , 17 . Radiolytic H 2 is now recognized as the primary electron donor for microbial communities kilometers below the surface in Precambrian regions of continental lithosphere 14 . However, the extent to which most subsurface ecosystems rely on radiolytic products has been unclear because (i) radiolytic chemical yields in natural environments have been poorly constrained and (ii) organic matter and oxidants from the surface photosynthetic world are pervasive in many subsurface environments. Results and discussion We experimentally quantified H 2 yields for α- and γ-irradiation of pure water, seawater, and seawater-saturated marine sediment with a typical abyssal clay porosity (80–85%) for all abundant marine sediment types (abyssal clay, nannofossil-bearing clay (calcareous marl), clay-bearing siliceous ooze, calcareous ooze, and lithogenous sediment), which collectively cover ~70% of Earth’s surface. Our results show that for pure water, seawater, and marine sediment slurries, H 2 production increases linearly with absorbed α- and γ-ray dose. Energy-normalized radiolytic H 2 yields, denoted by G(H 2 ) (molecules H 2 per 100 eV absorbed) 1 , in seawater are indistinguishable from those in pure water, within the 90% confidence limit of our experiments. 
In contrast, G(H 2 ) values of marine sediment slurries are consistently higher than values for pure water (Fig. 1 ). The catalytic effect of marine sediment on radiolytic yield is significant for both α- and γ-irradiation, but much larger for α-irradiation. Alpha-irradiation G(H 2 ) values for abyssal clay slurries are more than an order of magnitude higher than for pure water. On average, clay-bearing siliceous ooze and calcareous marl increase G(H 2 ) for α-irradiation by factors of 15 and 12, respectively. Calcareous ooze increases yields by a factor of 5 for α-irradiation. For γ-irradiation, clay-bearing siliceous ooze and abyssal clay amplify G(H 2 ) by factors of 8 and 4, respectively. Calcareous ooze and marl slurries doubled G(H 2 ) for γ-irradiation. These results demonstrate that (i) all common marine sediment types catalyze radiolytic H 2 production, and (ii) the magnitude of this catalysis depends on sediment composition and radiation type. Fig. 1: Radiolytic H 2 catalysis by marine sediment. Experimental H 2 yields for α irradiation ( A ) and γ irradiation ( B ). Reported yields are averages of a minimum of four replicate experiments. Vertical dashed lines represent multiples of production in pure water. Site locations ( C ) color-coded to indicate origins of samples in A and B . Full size image Previous experiments with oxides suggest that the primary cause of increased yield in all of our sediment types is energy transfer from sediment particles to the water via excitons 18 , 19 , 20 . H 2 yield exceeds the pure-water yield for γ-irradiated oxides characterized by a band gap equal to the 5.1 eV energy of the H–OH bond in water 18 . This result is consistent with irradiation of the oxide generating excitons that propagate to the oxide–water interface, where they lyse the water 18 , 19 , 20 . With excitons as the primary mechanism for transferring irradiation energy from sediment particles to particle–water interfaces 18 , 19 , factors that may cause variation in radiolytic H 2 catalysis from one sediment type to another include mineral composition of the sediment (which affects band gap), particle size (which affects exciton migration distance), water adsorption form (physisorbed vs. chemisorbed), and surface density of hydroxyl groups 20 . In addition to H 2 , water radiolysis generates diverse oxidized products in wet sediment. In pure water, production of H 2 from radiolysis is stoichiometrically balanced by production of H 2 O 2 [2H 2 O → H 2 + H 2 O 2 ] 12 , 21 . In the presence of reduced chemicals, such as reduced sulfur and/or reduced iron, H 2 production is balanced by production of H 2 O 2 plus oxidation of iron and sulfur 7 , 22 . H 2 O 2 spontaneously decomposes to ½O 2 + H 2 O 23 ; its rate of spontaneous decomposition is catalytically increased by common minerals 24 , 25 , and it readily oxidizes sulfide and iron 26 . Many microorganisms can directly or indirectly utilize radiolytic H 2 and/or oxidized radiolytic products as an electron donor and terminal electron acceptor, respectively 3 , 27 . Oxidizing power from water radiolysis both benefits and challenges microbes. Oxygen, oxidized iron, and oxidized sulfur are used as terminal electron acceptors by diverse microorganisms 3 , 28 , 29 and some bacteria can use H 2 O 2 directly as a terminal electron acceptor 30 . 
However, the H 2 O 2 , its degradation product O 2 , and the short-lived reactive oxygen species (ROS) that react to produce the H 2 O 2 , oxidized metal, and oxidized sulfur, can oxidatively stress microbes 31 . Furthermore, ROS are critical for multiple biological processes (gene expression, intracellular signaling, cell defense) 32 . In short, subseafloor microbes need to manage these chemicals (particularly short-lived ROS) but can energetically benefit from reducing relatively stable radiolytic products (including H 2 O 2 , O 2 , oxidized iron, and sulfur species). Despite continual radiolytic production throughout the sediment column, dissolved H 2 concentrations are mostly below the detection limit (which ranges from 1–5 nM H 2 from site to site) in oxic subseafloor sediment 33 , 34 and low to below detection (1–30 nM H 2 ) in anoxic subseafloor sediment 35 (Fig. 2 ). Measured in situ H 2 concentrations are generally 2 to 5 orders of magnitude lower than expected from radiolytic production in the absence of H 2 -consuming reactions (Fig. 2 ). This discrepancy between measured and expected concentrations indicates that consumption of radiolytic H 2 is essentially equal to its production throughout the sediment. The simplest explanation is microbial H 2 oxidation at all depths, since enzymatic potential for H 2 oxidation is ubiquitous in marine sediment 36 and the in situ Gibbs energy of H 2 oxidation is energy-yielding at the H 2 detection limit throughout these sequences (Supplementary Information). Although this oxidation of radiolytic H 2 contributes to gross redox activity, it does not contribute to net activity (e.g., net oxidant reduction) because water radiolysis generates oxidants in stoichiometric balance with H 2 . Fig. 2: Sedimentary profiles of dissolved H 2 concentrations. Dissolved H 2 concentration profiles for North Pacific Site KN195-EQP11, South Pacific IODP Sites U1365, U1369, U1370, and U1371, North Atlantic Sites KN223-11, KN223-12 and KN223-15, Equatorial Pacific ODP Site 1225, and Peru Trench ODP Site 1230. Open symbols mark measured H 2 concentrations. Solid symbols represent H 2 concentrations expected from radiolytic H 2 production and diffusion in the absence of in situ H 2 consumption. Gray vertical lines mark detection limits for dissolved H 2 concentration measurements. The detection limit of the applied analytical protocol was defined as the mean of the repeated procedural blank measurements plus three times their standard deviation. Symbol colors match site locations in Fig. 4 . Full size image We assess the potential contribution of water radiolysis in marine sediment to global bioenergy fluxes by quantifying global production of radiolytic H 2 and radiolytic oxidants in marine sediment (Fig. 3 ). Our estimates of radiolytic H 2 and oxidant production are based on (i) spatial integration of a previously published model of sedimentary water radiolysis 2 , (ii) our dataset of experimentally constrained radiolytic H 2 yields for the principal marine sediment types, and (iii) the global distribution of sediment properties (details in “Methods”). Radiolytic production of H 2 and oxidants per unit area varies by five orders of magnitude from site to site, depending primarily on sediment column thickness (Fig. 3 ). Global production rates of radiolytic H 2 and oxidants in marine sediment are 2.7 × 10 13 mol electron equivalents per year (mol e − eq yr −1 ). Fig. 
3: Global distribution of production rates of radiolytic H 2 and radiolytic oxidants in marine sediment. Rates are expressed in mol electron equivalents/cm 2 /year. In electron equivalents, water radiolysis produces H 2 and oxidized chemicals at equal rates. Although abyssal clay has a volumetric radiolytic production rate ∼ an order of magnitude higher than the volumetric rate for continental-margin sediment types, the sediment layer that blankets open-ocean regions is much thinner and consequently has much lower vertically integrated radiolytic production than the sediment of continental margins. Full size image These rates are significantly higher than a recent estimate of the radiolytic H 2 production in Precambrian lithosphere (3.2–9.4 × 10 10 mol e − eq yr −1 ) 13 , which covers 72% of global continental area. Although consideration of mineral catalysis would increase the estimate for Precambrian lithosphere, it would probably not erase the difference between marine sediment and Precambrian lithosphere, because porosity is much higher and particle size is much smaller in marine sediment 2 than in Precambrian lithosphere 13 . Our estimated rates of radiolytic chemical production in marine sediment (2.7 × 10 13 mol e − eq yr −1 ) are almost two orders of magnitude lower than the flux of organic matter (organic carbon and organic nitrogen) to the seafloor (~1 × 10 15 mol e − eq yr −1 ) 27 and roughly an order of magnitude lower than the burial rate of organic matter in marine sediment (0.7–3.4 × 10 14 mol e − eq yr −1 ) 27 . At steady state, the rate of organic consumption in marine sediment is approximated by the difference between the organic flux to the seafloor and the organic burial rate. Based on this difference, global rates of radiolytic H 2 production and radiolytic electron acceptor production in marine sediment are only 1–2% of global organic consumption in marine sediment (in electron equivalents). However, most organic consumption in marine sediment occurs at the seafloor and organic consumption rate generally decreases exponentially with sediment depth 27 . In contrast, production of radiolytic H 2 and radiolytic oxidants is relatively constant throughout the sediment column and not focused near the seafloor. We assess the potential importance of radiolytic products as an energy source for oxic subseafloor sedimentary ecosystems, by comparing production rates of radiolytic chemicals to net O 2 reduction rates at nine sites with oxic subseafloor sediment, where organic matter concentrations are low but electron acceptors are abundant 33 . Redfield stoichiometry of dissolved NO 3 − to O 2 indicates that net O 2 consumption in these sequences is almost entirely due to organic oxidation 33 . In sediment deposited during the last few million years, the ratio of radiolytic H 2 production to net O 2 consumption is generally less than 1 (Fig. 4A ), indicating that microbial respiration is primarily based on oxidation of organic matter. In older sediment, this ratio is generally greater than 1, implying that radiolytic H 2 is the primary electron donor (Fig. 4A ). To the extent that oxidation of reduced metals in the sediment [e.g. Mn(II), Fe(II)] also contributes to net O 2 consumption, this ratio overestimates the importance of organic matter as an electron donor relative to radiolytic H 2 . Fig. 4: Metabolic contribution of radiolytic products in marine sediment. 
A Ratios of (i) radiolytic H 2 production to net O 2 reduction and (ii) radiolytic oxidant production to net O 2 reduction, plotted against sediment age for sites with oxic deep subseafloor sediment. Horizontal lines represent one standard deviation of the ratio of radiolytic H 2 production to net O 2 reduction. B Ratios of (i) radiolytic H 2 production to net dissolved inorganic carbon (DIC) production and (ii) radiolytic oxidant production to net DIC production for sites with anoxic deep subseafloor sediment. Horizontal lines represent one standard deviation of the ratio of radiolytic H 2 production to net DIC production. The transition from ratios below 1 (organic oxidation dominance) in young sediment to ratios above 1 (radiolytic dominance) in older sediment results from large decreases in organic oxidation rates with increasing sediment age; radiolytic production of H 2 and oxidants is relatively constant with sediment age, assuming no major changes in sediment composition. Full size image To evaluate the potential role of radiolytic products for sustaining subseafloor communities in anoxic sediment, we compare radiolytic production rates to dissolved inorganic carbon (DIC) production rates for seven sites from the Pacific Ocean and Bering Sea (Fig. 4B ). DIC is the primary oxidized product of organic-fueled catabolism. As with oxic sediment, in anoxic sediment younger than a few Ma, this ratio is generally less than 1.0, indicating that organic matter is the primary electron donor. However, the ratio is generally at or above 1 in older anoxic sediment (Fig. 4B ). Collectively, these findings suggest that radiolytic H 2 is the primary electron donor in marine sediment older than a few Ma. Given the stoichiometry of water radiolysis 12 , these results also suggest that radiolysis is the primary source of electron acceptors in marine sediment older than a few Ma (Fig. 4 A, B ). This continuous release of oxidants through sediment-catalyzed water radiolysis may sustain diverse redox processes in anoxic sediment, such as (i) NO 3 − reduction inferred from transcriptomic signatures 37 and (ii) SO 4 2− reduction inferred from radiotracer incubations 38 of samples taken from sediment deep beneath the last subseafloor occurrences of measurable dissolved NO 3 − and SO 4 2− . Decomposition of radiolytic H 2 O 2 to O 2 is also consistent with most bacterial isolates from anoxic subseafloor sediment being facultative aerobes 39 . Comparison to continental data 13 , 16 indicates that subseafloor sedimentary life constitutes one of Earth’s largest radiolysis-supported biomes. This study reveals the importance of abundant geological materials as catalysts of radiolytic chemical production. Explicit recognition of this catalytic effect is necessary to constrain habitable zones within subsurface environments on Earth and other planetary bodies. Although modern marine sediment typically contains biogenic components (carbonate and/or opal microfossils, fish teeth, etc.), its mineral composition directly overlaps with the mineral composition of early Earth and other planetary bodies. The known catalytic minerals abundant in marine sediment (zeolites and calcite) are inferred to have existed on early Earth 40 . Zeolites and other minerals dominant in marine sediment (smectite, chlorite, opaline silica) are present on Mars 41 , 42 , 43 . On modern Earth, naturally catalyzed radiolytic products appear to provide the dominant fuel for microbial activity in marine sediment older than a few Ma. 
Where catalytic materials were present, radiolytic products may also have been significant for pre-photosynthetic life on early Earth. Where water permeates similarly catalytic material on other planets and moons, such as the subsurface environments of Mars 42 , Europa, or Enceladus, life may similarly be sustained today. Methods Radiation experiments We experimentally quantified radiolytic hydrogen (H 2 ) production in (i) pure water, (ii) seawater, and (iii) seawater-saturated sediment. We irradiated these materials with α- or γ-radiation for fixed time intervals and then determined the concentrations of H 2 produced. Sediment samples were slurried with natural seawater to achieve a slurry porosity ( φ ) of ~0.83, which is the average porosity of abyssal clay in the South Pacific Gyre 34 . The seawater source is described below. To avoid microbiological uptake of radiolytic H 2 during the course of the experiment, seawater and marine sediment slurries were pre-treated with HgCl 2 (0.05% solution) or NaN 3 (0.1% wt/vol). To ensure that addition of these chemicals did not impact radiolytic H 2 yields, irradiation experiments with pure water plus HgCl 2 or NaN 3 were also conducted. HgCl 2 or NaN 3 addition had no statistically significant impact on H 2 yields 5 , 6 , 10 . Experimental samples were irradiated in 250 mL borosilicate vials. A solid-angle 137 Cs source (beam energy of 0.67 MeV) was used for the γ-irradiation experiments at the Rhode Island Nuclear Science Center (RINSC). The calculated dose rate for sediment slurries was 2.19E−02 Gy h −1 , accounting for (i) the source activity, (ii) the distance between the source and the samples, (iii) the sample vial geometry, and (iv) the attenuation coefficient of γ-radiation through air, borosilicate, and sediment slurry. 210 Po (5.3 MeV decay −1 )-plated silver strips with total activities of 250 μCi were used for the α-irradiation experiments. For α-irradiation of each sediment slurry, a 210 Po-plated strip was placed inside the borosilicate vial and immersed in the slurry. Calculated total absorbed doses were 4 Gy and 3 kGy for γ-irradiation and α-irradiation experiments, respectively. The settling time of sediment grains in the slurries (1 week) was long compared to the time span of each experiment (tens of minutes to an hour for α-experiments, hours to days for γ-experiments). Therefore, we assumed that the suspension was homogeneous during the course of each experiment. H 2 concentrations were measured by quantitative headspace analysis via gas chromatography. For headspace analysis, 30 mL of N 2 was first injected into the sample vial. To avoid over-pressurization of the sample during injection, an equivalent amount of water was allowed to escape through a separate needle. The vials were then vigorously shaken for 5 min to concentrate the H 2 into the headspace. Finally, a 500-μL-headspace subsample was injected into a reduced gas analyzer (Peak Performer 1, PP1). The reduced gas analyzer was calibrated using a 1077 ppmv H 2 primary standard (Scott-Marrin, Inc.). A gas mixer was used to dilute the H 2 standard with N 2 gas to obtain various H 2 concentrations and produce a five-point linear calibration curve (0.7, 2, 5, 20, and 45 ppm). H 2 concentrations of procedural blanks consisting of sample vials filled with non-irradiated deionized 18-MΩ water were also determined. The concentration detection limit obtained using this protocol was 0.8–1 nM H 2 . Relative error was less than 5%. Radiation experiments were performed at minimum in triplicate.
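The five-point calibration lends itself to a short sketch. This is a minimal illustration, assuming hypothetical instrument peak areas for the five standards; converting the resulting headspace mixing ratio to a dissolved concentration would additionally require the headspace and liquid volumes and the H 2 solubility, which are omitted here.

```python
import numpy as np

# Five-point linear calibration of the reduced gas analyzer
# (0.7, 2, 5, 20, and 45 ppmv H2). Peak areas are invented placeholders.
std_ppmv = np.array([0.7, 2.0, 5.0, 20.0, 45.0])
std_area = np.array([215.0, 605.0, 1520.0, 6010.0, 13480.0])

slope, intercept = np.polyfit(std_area, std_ppmv, 1)

def headspace_h2_ppmv(peak_area):
    """Convert a headspace-injection peak area to a ppmv H2 mixing ratio."""
    return slope * peak_area + intercept

print(f"{headspace_h2_ppmv(950.0):.2f} ppmv H2")
```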
Sample selection and experimental radiolytic H 2 yields, G(H 2 ) Millipore Milli-Q system water was used for our pure-water experiments. For seawater experiments, we used bottom water collected in the Hudson Canyon (water depth, 2136 m) by RV Endeavor expedition EN534. Salinity of North Atlantic bottom water in the vicinity of the Hudson Canyon (34.96 g kg −1 ) is similar to that of mean open-ocean bottom water (34.70 g kg −1 ) 44 , 45 . The 20 sediment samples used for the experiments were collected by scientific coring expeditions in three ocean basins (expedition KN223 to the North Atlantic 46 , expedition KN195-3 to the Equatorial and North Pacific 47 , International Ocean Discovery Program (IODP) Expedition 329 to the South Pacific Gyre 34 , MONA expedition to the Guaymas Basin 48 , expedition EN32 to the Gulf of Mexico 49 , and expedition EN20 to the Venezuela Basin 50 ). To capture the dominant sediment types present in the global ocean, we selected samples typical of five common sediment types [abyssal clay (11 samples), nannofossil-bearing clay or calcareous marl (2 samples), clay-bearing diatom ooze (3 samples), calcareous ooze (2 samples), and lithogenous sediment (2 samples)]. The locations, lithological descriptions, and mineral compositions of the samples are given in Supplementary Tables 1 , 2 , 3 , and Supplementary Fig. 1 . Additional chemical and physical descriptions of the sediment samples used in the radiation experiments can be found in the reports for the expeditions on which the samples were collected 34 , 46 . Energy-normalized radiolytic H 2 yields are commonly expressed as G(H 2 )-values (molecules H 2 per 100 eV absorbed) 1 . As shown in Supplementary Fig. 2 , for all irradiated samples (pure water, seawater, and marine sediment slurries), H 2 production increased linearly with absorbed α- and γ-ray dose. We calculated G(H 2 )-values for each sample and radiation type (α or γ) as the slope of the least-squares regression line of radiolytic H 2 concentration versus absorbed dose (Supplementary Fig. 2 ). The error on the yields is less than 10% for each sample. G(H 2 )-values for each sample and radiation type (α or γ) are reported in Supplementary Table 3 . Although radiolytic OH• is known to react with dissolved organic matter 51 , total organic content does not appear to significantly impact radiolytic H 2 production, since the most organic-rich sediment (e.g., Guaymas Basin and Gulf of Mexico sediment) did not yield particularly high H 2 (Supplementary Table 3 ). Calculated radiolytic production rates of H 2 and oxidants in the cored sediment of individual sites We calculated radiolytic H 2 production rates ( P H2 , in molecules H 2 cm −3 yr −1 ) for the cored sediment column at nine sites with oxic subseafloor sediment in the North Pacific, South Pacific, and North Atlantic; and seven sites with anoxic subseafloor sediment in the Bering Sea, South Pacific, Equatorial Pacific, and Peru Margin (see Supplementary Fig. 3 for site locations). For these calculations, we used the following equation from Blair et al. 2 : $$P_{\mathrm{H}_2} = \sum_i A_{\mathrm{m},i}\,\rho\,(1 - \varphi)\,E_i\,\mathrm{G}(\mathrm{H}_2)_i$$ (1) where i is alpha, beta, or gamma radiation; A m is radioactivity per unit mass of solid; φ is porosity; ρ is the density of the solid; \(E_i\) is decay energy; and \(\mathrm{G}(\mathrm{H}_2)_i\) is radiolytic yield.
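Eq. ( 1 ) can be evaluated directly once the per-radiation-type inputs are in consistent units. The sketch below is a minimal illustration; the function layout and all numerical inputs are assumptions for demonstration, not values from this study.

```python
# Minimal sketch of Eq. (1). Inputs, per radiation type (alpha, beta, gamma):
#   a_m:       activity, decays per g solid per yr
#   energy_ev: absorbed decay energy per decay, eV
#   g_h2:      radiolytic yield, molecules H2 per 100 eV absorbed
# rho is grain density (g cm^-3) and phi is porosity.
def radiolytic_h2_rate(a_m, energy_ev, g_h2, rho, phi):
    """Returns P_H2 in molecules H2 per cm^3 sediment per yr."""
    return sum(a * rho * (1.0 - phi) * e * (g / 100.0)
               for a, e, g in zip(a_m, energy_ev, g_h2))

# Hypothetical abyssal-clay-like inputs (alpha, beta, gamma):
p_h2 = radiolytic_h2_rate(a_m=[3.0e4, 1.5e4, 1.5e4],
                          energy_ev=[4.8e6, 0.5e6, 1.0e6],
                          g_h2=[1.3, 0.6, 0.6],
                          rho=2.6, phi=0.83)
print(f"{p_h2:.3e} molecules H2 cm^-3 yr^-1")
```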
We calculated radiolytic oxidant production rates for these sediment columns from the H 2 production rates. Because H 2 production and oxidant production are stoichiometrically balanced in water radiolysis [2H 2 O → H 2 + H 2 O 2 ], the calculated radiolytic H 2 production rates (in electron equivalents) are equal to radiolytic oxidant production rates (in electron equivalents). The in situ γ- and α-radiation dosages in marine sediment are, respectively, 13 and 15 orders of magnitude lower than the dosage used in our experiments. Because the measured G(H 2 ) for pure water in our γ-irradiation experiment (dose rate = 2.19E−02 Gy h −1 ) is statistically indistinguishable from previously published G(H 2 ) values at much higher dose rates (ca. 1.00E+3 Gy h −1 ) 5 , we infer that the γ-irradiation G(H 2 ) value is constant with dose rate over five orders of magnitude. Similarly, our experimental pure water H 2 yields following α-particle irradiation from a 210 Po source (dose rate of 2.55E+03 Gy h −1 ) are indistinguishable from the yield obtained by Crumière et al. 6 [G(H 2 ) = 1.30 ± 0.13] for air-saturated deionized water exposed to a cyclotron-generated He 2+ particle beam at higher dose rate (dose rate 1.62E+05 Gy h −1 ). The close similarity in H 2 yields obtained in both experiments implies that (i) radiolytic H 2 yield from α-particle irradiation is identical to that from cyclotron-generated He 2+ particle irradiation, and (ii) this yield is constant over a two-orders-of-magnitude range of dose rates. Therefore, we use our experimentally determined α- and γ-irradiation G(H 2 ) values for the low radiation dose rate found in the subseafloor. Because the G(H 2 ) of β irradiation has not been experimentally determined for water-saturated materials, we assume that the G(H 2 ) of β-radiation matches the G(H 2 ) of γ-radiation for the same sediment types. In pure water, their G(H 2 ) values differ by only 17% 1 . Because β radiation, on average, contributes only 11% of the total radiolytic H 2 production from the U, Th series and K decay in marine sediment, these estimates of total H 2 production differ by only 2–5% relative to estimates where the G(H 2 ) of β radiation is assumed equal to that for pure water or for α radiation of the same sediment types. To calculate H 2 production rates for the entire sediment column at seven South Pacific sites and two North Atlantic sites, we measured downcore sediment profiles of U, Th, and K in (i) 187 sediment samples from IODP Expedition 329 Sites U1365, U1366, U1367, U1368, U1369, U1370, and U1371 34 , 52 , and (ii) 40 samples from KN223 expedition Sites 11 and 12 (ref. 46 ). Total U and Th (ppm) and K 2 O (wt%) for these sites are reported in the EarthChem SedDB data repository. We measured U, Th, and K abundances using standard atomic emission and mass spectrometry techniques (i.e. ICP-ES and ICP-MS) in the Analytical Geochemistry Facilities at Boston University. Sample preparation, analytical protocol, and data are reported in Dunlea et al. 52 . The precision for each element is ~2% of the measured value, based on three separate digestions of a homogenized in-house standard of deep-sea sediment. To calculate H 2 production rates for the sediment columns at North Pacific coring Sites EQP10 and EQP11 (ref. 47 ), we used radioactive element content data from Kyte et al. 53 , who measured chemical concentrations at high resolution in bulk sediment in core LL44-GPC3. Because Site EQP11 was cored at the same location as LL44-GPC3 (ref.
53 ) and the sediment retrieved at all three sites is homogeneous abyssal clay, we assume the radioactive element abundances measured in core LL44-GPC3 to be representative of Sites EQP10 and EQP11 (ref. 47 ). Calculated radiolytic H 2 production rates for South Pacific sites are listed in Supplementary Table 4 and for North Atlantic and North Pacific sites in Supplementary Table 5 . For Bering Sea Sites U1343 and U1345 (ref. 54 ), sedimentary U, Th, and K content measurements are unavailable. Since sediment recovered at these two sites is primarily siliciclastic with a varying amount of diatom-rich clay, we use U, Th, and K concentration values reported for upper continental crust by Li and Schoonmaker for these Bering Sea sites 55 . Finally, we calculate downhole radiolytic H 2 production rates for ODP Leg 201 Sites 1225, 1226, 1227, and 1230 (ref. 35 ). Sediment compositions for these sites include nannofossil-rich calcareous ooze (Site 1225), alternation of nannofossil (calcareous) ooze and diatom ooze (Site 1226), and siliciclastic sediment with diatom-rich clay intervals (Sites 1227 and 1230). Because sedimentary U, Th, and K measurements are not available for Leg 201 sites, we used average U, Th, and K concentration values measured in North Atlantic 46 and South Pacific 34 , 52 sites with corresponding lithologies. We use isotopic abundance values reported in Erlank et al. 56 to calculate the abundance of 238 U, 235 U, 232 Th, and 40 K from the measured ICP-MS values of total U, Th, and K concentration. We then converted radionuclide concentrations to activities using Avogadro's number and each isotope's decay constant 2 . We refer to Blair et al. for a detailed explanation of activities and radiolytic yield calculations 2 . Calculation of global radiolytic H 2 and oxidant production rates in marine sediment We calculated global radiolytic H 2 production in ocean sediment by applying Eq. ( 1 ) (ref. 2 ) globally. As with the rates at individual sites, we calculated global radiolytic oxidant production (in electron equivalents) from global H 2 production and the stoichiometry of water radiolysis [2H 2 O → H 2 + H 2 O 2 ]. Our global radiolytic H 2 production calculation spatially integrates calculations of sedimentary porewater radiolysis rates that are based on (i) our experimentally constrained radiolytic H 2 yields for the principal marine sediment types, (ii) measured radioactive element content of sediment cores in three ocean basins (North Atlantic 46 , North Pacific 53 , and South Pacific 34 , 52 ), and (iii) global distributions of sediment lithology 57 , sediment porosity 58 , and sediment column length 59 , 60 . To generate the global map of radiolytic H 2 production, we created global maps of seafloor U, Th, and K concentrations, density, G(H 2 )-α values, and G(H 2 )-γ/β values by assigning each grid cell in our compiled seafloor lithology map (Supplementary Fig. 4 ) its lithology-specific set of input variables (Supplementary Table 6 ). Because our model assumes that lithology is constant with depth, U, Th, and K content, grain density, and G(H 2 )-values are constant with depth. The G(H 2 )-values (α, β, and γ radiation), radioactive element content (sedimentary U, Th, and K concentration), density, porosity, and sediment thickness are determined as follows. Radiolytic yield [G(H 2 )] for α, β, and γ radiation Radiolytic yields for the main seafloor lithologies are obtained by averaging experimentally derived yields for the respective lithologies (Supplementary Table 6 ).
We assume that G(H 2 )-β values equal G(H 2 )-γ values. Sediment lithology For these calculations of radiolytic chemical production, we generally used seafloor lithologies and assumed that sediment type is constant with sediment depth. For seafloor lithology, the geographic database of global bottom sediment types 57 was compiled into five lithologic categories: abyssal clay, calcareous ooze, siliceous ooze, calcareous marl, and lithogenous (Supplementary Fig. 4 ). Some areas of the seafloor are not described in the database 57 . These include (i) high-latitude regions (as the seafloor lithology database extends from 70°N to 50°S) 57 and (ii) some discrete areas located along continental margins (e.g., Mediterranean Sea, Timor Sea, South China Sea, Supplementary Fig. 4 ). We used other data sources to identify seafloor lithologies for these regions. We added an opal belt (siliceous ooze) in the Southern Ocean between 57°S and 66°S 61 , 62 . The geographic extent of this opal belt was based on DeMaster 62 and Dutkiewicz et al. 61 . We defined the areas of the seafloor from 50°S to 57°S, from 66°S to 90°S, and in the Arctic Ocean as mostly composed of lithogenous material, based on (i) drillsite lithologies in the Southern Ocean [ODP: Site 695 (ref. 63 ), Site 694 (ref. 63 ), Site 1165 (ref. 64 ), Site 739 (ref. 65 )], the Bering Sea and Arctic Ocean [International Ocean Discovery Program (IODP): Sites U1343 and U1345 (ref. 54 ), Site M0002 (ref. 66 ), ODP: Site 910 (ref. 67 ), Site 645 (ref. 68 )] and between 50°S and 57°S [Deep Sea Drilling Project (DSDP): Site 326 (ref. 69 ), Ocean Drilling Program (ODP): Site 1138 (ref. 70 ), Site 1121 (ref. 71 )], and (ii) Dutkiewicz et al. 61 . In the North and South Atlantic, sediment type can be very different at depth than at the seafloor. For these regions, we departed from our assumption that sediment lithology is the same at depth as at the seafloor. Subseafloor lithologies at ODP Sites [1063 (ref. 72 ), 951 (ref. 73 ), 925 (ref. 74 ), and 662 (ref. 75 )] and IODP Sites [U1403 (ref. 76 ) and U1312 (ref. 77 )] indicate that sediment in the Atlantic Ocean basin generally contains 30–90% biogenic carbonate mixed with detrital clay 78 , even where the seafloor lithology is abyssal clay 57 . Therefore, regions in the Atlantic Ocean described as abyssal clay in the seafloor lithology database 57 were characterized as calcareous marl for our calculations (Supplementary Fig. 4 ). Because abyssal clay catalyzes radiolytic H 2 production at a higher rate than calcareous marl, this characterization may underestimate production of radiolytic H 2 and radiolytic oxidants in these Atlantic regions. Radioactive element content For four of the five lithologic types in our global maps (abyssal clay, siliceous ooze, calcareous ooze, and calcareous marl), we average U, Th, and K concentrations from sites in the North Atlantic 46 , North Pacific 53 , and South Pacific 34 , 52 . The average U, Th, and K concentration values are consistent with data reported in Li and Schoonmaker 55 for the characteristic U, Th, and K content found in abyssal clay and calcareous ooze. For lithogenous sediment, we use U, Th, and K concentration values reported for upper continental crust by Li and Schoonmaker 55 . Lithology-specific radioactive element values are given in Supplementary Table 6 and used to calculate A m, i in Eq. ( 1 ); a short illustrative sketch of the concentration-to-activity conversion follows.
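As referenced above, this is a minimal sketch of the conversion from an element concentration to a specific activity; the sediment uranium concentration below is a hypothetical example value, while the 238 U isotopic abundance and half-life are standard constants.

```python
import math

AVOGADRO = 6.022e23  # atoms per mol

def specific_activity(isotope_ppm, molar_mass, half_life_yr):
    """A_m = N * lambda: atoms of the isotope per gram of solid times its
    decay constant (ln 2 / half-life), in decays per g solid per yr.
    isotope_ppm is the isotope's mass fraction of the bulk solid (ppm)."""
    atoms_per_g = (isotope_ppm * 1e-6 / molar_mass) * AVOGADRO
    return atoms_per_g * math.log(2) / half_life_yr

# Example: 2.6 ppm total U (hypothetical sediment); ~99.27% of U by mass
# is 238U, with a half-life of 4.468e9 yr.
print(f"{specific_activity(2.6 * 0.9927, 238.05, 4.468e9):.3e} decays/g/yr")
```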
Density Characteristic density values for calcite, quartz, terrigenous clay, and opal-rich sediment were extracted from the Proceedings of the Integrated Ocean Drilling Program Volume 320/321 and are assigned to calcareous ooze, lithogenous sediment, abyssal clay, and siliceous ooze, respectively 79 . Global porosity For global porosity, we use a seafloor porosity data set by Martin et al. 58 and accounted for sediment compaction with depth by using separate sediment compaction length scales for continental-shelf (0–200 m water depth; c 0 = 0.5 × 10 −3 ), continental-margin (200–2500 m; c 0 = 1.7 × 10 −3 ), and abyssal sediment (>3500 m; c 0 = 0.85 × 10 −3 ) 80 , 81 . Depth integration was halted once porosity decreased to 0.1%. Global sediment thickness We calculated global depth-integrated radiolytic H 2 production by summing the seafloor production rates over sediment depth in one-meter intervals (Fig. 3 in main text). Sediment thickness is from Whittaker et al., supplemented with Laske and Masters where needed 82 , 83 . Ocean depth For porosity calculations, water depths were determined using the General Bathymetric Chart of the Oceans 84 , resampled to a 5-arc-minute grid, i.e. the resolution of the Naval Oceanographic Office's Bottom Sediment Type (BTS) database "Enhanced dataset" 57 . Dissolved H 2 concentration profiles H 2 concentrations from South Pacific Sites U1365, U1369, U1370, and U1371, and the measurement protocol, are described in ref. 1 . H 2 concentrations from North Atlantic KN223 Sites 11, 12, and 15, and North Pacific Site EQP11 were determined using the same protocol and are posted on SedDB (see "Data availability"). The detection limit for H 2 ranged between 1 and 5 nM H 2 , depending on site, and is displayed as gray vertical lines in Fig. 2 of the main text. H 2 concentrations for Equatorial Pacific Site 1225 and Peru Trench Site 1230 were measured by the "headspace equilibration technique", which measures steady-state H 2 levels reached following laboratory incubation of the sediment samples 85 , 86 . For comparison to these measured H 2 concentrations, we use diffusion-reaction calculations to quantify what in situ H 2 concentrations would be in the absence of H 2 -consuming reactions. The results of these calculations are represented as solid circles (•) in Fig. 2 of the main text. Temporal changes in H 2 concentration due to diffusive processes and radiolytic H 2 production in situ are expressed by Eq. ( 2 ): $$\frac{\partial \mathrm{H}_2(x,t)}{\partial t} = \frac{D}{\varphi F}\frac{\partial^2 \mathrm{H}_2(x,t)}{\partial x^2} + P(x)$$ (2) where D is the diffusion coefficient of H 2 (aq) at in situ temperature; φ is porosity; F is the formation factor; x is depth; Z is sediment column thickness; H 2 is hydrogen concentration; P is the radiolytic H 2 production rate; and t is time. With constant radiolytic H 2 production, P ( x ) = P with depth, and at steady state, $$\frac{\partial^2 \mathrm{H}_2(x)}{\partial x^2} = -\frac{P\varphi F}{D}.$$ (3) We integrate Eq. ( 3 ) over the length x twice, $$\mathrm{H}_2(x) = -\frac{1}{2}\frac{P\varphi F}{D}x^2 + Ax + B$$ (4) where A and B in Eq. ( 4 ) are constants of integration. We use two boundary conditions to derive the value of these constants. Boundary condition 1: the concentration of H 2 at the sediment–water interface, x = 0, is zero due to diffusive loss to the overlying water column. Boundary condition 2: the concentration of H 2 at the sediment–basement interface, x = Z , is zero due to diffusive loss to the underlying basement. With these boundary conditions, \(A = \frac{1}{2}\frac{P\varphi F}{D}Z\) and B = 0, and $$\mathrm{H}_2(x) = \frac{1}{2}\frac{P\varphi F}{D}(xZ - x^2).$$ (5) In cases where we expect radiolytic H 2 production rates to significantly vary with depth due to changes in lithology, we adapted the boundary conditions and applied a two-layer diffusion model to account for this variation.
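Eq. ( 5 ) is straightforward to evaluate numerically. The sketch below uses illustrative, non-site-specific parameter values; only the functional form comes from the derivation above.

```python
import numpy as np

def h2_profile(x, Z, P, phi, F, D):
    """Eq. (5): steady-state dissolved H2 with zero-concentration
    boundaries at the seafloor (x = 0) and the basement (x = Z)."""
    return 0.5 * (P * phi * F / D) * (x * Z - x**2)

# Illustrative parameters (placeholders, not site data):
Z, phi, F = 70.0, 0.80, 2.0   # column length (m), porosity, formation factor
D = 0.03                      # H2 diffusivity, m^2 yr^-1 (~1e-9 m^2 s^-1)
P = 1.0e-9                    # radiolytic H2 production, mol m^-3 yr^-1

x = np.linspace(0.0, Z, 5)
print(h2_profile(x, Z, P, phi, F, D))  # parabolic profile peaking at x = Z/2
```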
Calculation of Gibbs energies for the Knallgas reaction For H 2 concentrations above the detection limits at South Pacific IODP Expedition 329 sites (Supplementary Fig. 5 ) 34 , we quantified in situ Gibbs energies (Δ G r ) of the Knallgas reaction (H 2 + ½O 2 → H 2 O). In situ Δ G r values depend on pressure ( P ), temperature ( T ), ionic strength, and chemical concentrations, all of which are explicitly accounted for in our calculations: $$\Delta G_{\mathrm{r}} = \Delta G^{\circ}_{\mathrm{r}}(T,P) + 2.3\,RT\,\log_{10}Q$$ (6) where Δ G r is the in situ Gibbs energy of reaction (kJ mol H 2 −1 ); Δ G ° r ( T , P ) is the Gibbs energy of reaction under in situ T and P conditions (kJ mol H 2 −1 ); R is the gas constant (8.314 J mol −1 K −1 ); and Q is the activity quotient of compounds involved in the reaction. We use the measured composition of the sedimentary pore fluid to determine values of Q . For a more complete overview of in situ Gibbs energy-of-reaction calculations in subseafloor sediment, see Wang et al. 87 . Calculation of organic oxidation rates (net rates of O 2 reduction and DIC production) We calculated the vertical distribution of net O 2 reduction rates at nine sites where the sediment is oxic from seafloor to basement and the vertical distribution of DIC production rates at seven sites where the subseafloor sediment is anoxic (see Supplementary Fig. 3 for site locations). Dissolved O 2 concentrations are from Røy et al. 47 and D'Hondt et al. 88 . DIC concentrations are from ODP Leg 201 (ref. 35 ), the Proceedings of IODP Expedition 323 (Sites U1343, U1345) 54 , and IODP Expedition 329 (Site U1371 (ref. 34 )). The net rates are calculated using the MATLAB program and numerical procedures of Wang et al. 89 , modified by using an Akima spline, rather than a 5-point running mean, to generate a best-fit line to the chemical concentration data. Details of the calculation protocol for O 2 reduction rates and DIC production rates are respectively described in the supplementary information of D'Hondt et al. 88 and in Walsh et al. 90 . The DIC reaction rates and their standard deviations calculated for the seven sites are given in Supplementary Table 7 . To facilitate comparisons of radiolytic chemical rates to net DIC production rates, rates are converted to electron equivalents (2 electrons per H 2 , 4 electrons per O 2 , 4 electrons per organic C oxidized). Estimation of sediment ages We estimated sediment ages for Sites U1343 and U1345 using the sediment-age model of Takahashi et al. 54 , which is based on biostratigraphic and magnetostratigraphic data.
Because detailed chronostratigraphic data are not available for the remaining sites (Equatorial Pacific sites (1225 and 1226), Peru Trench Site 1230 and Peru Basin Site 1231, South Pacific sites U1365, U1366, U1367, U1369, U1370, and U1371, North Pacific sites EQP9 and EQP10, and North Atlantic sites KN223-11 and KN223-12), we used the mean sediment accumulation rate for each of these sites (Supplementary Fig. 3 ) to convert its sediment depth (in meters below seafloor) to sediment age (in millions of years, Ma). Mean sediment accumulation rate was calculated by dividing sediment thickness by basement age 91 (Supplementary Table 8 ). For Sites 1225, 1226, 1230, 1231, U1365, U1366, U1367, U1369, U1370, and U1371, sediment thickness was determined by drilling to basement 34 , 35 . For Sites EQP9, EQP10, KN223-11, and KN223-12, sediment thicknesses were determined from acoustic basement reflection data. Data availability Sedimentary radioactive element content datasets [U (ppm); Th (ppm); K (reported as wt% K 2 O)] can be retrieved from the EarthChem SedDB data repository ( ). Code and datasets for the global estimate of radiolytic H 2 production are available at .
A team of researchers from the University of Rhode Island's Graduate School of Oceanography and their collaborators have revealed that the abundant microbes living in ancient sediment below the seafloor are sustained primarily by chemicals created by the natural irradiation of water molecules. The team discovered that the creation of these chemicals is amplified significantly by minerals in marine sediment. In contrast to the conventional view that life in sediment is fueled by products of photosynthesis, an ecosystem fueled by irradiation of water begins just meters below the seafloor in much of the open ocean. This radiation-fueled world is one of Earth's volumetrically largest ecosystems. The research was published today in the journal Nature Communications. "This work provides an important new perspective on the availability of resources that subsurface microbial communities can use to sustain themselves. This is fundamental to understand life on Earth and to constrain the habitability of other planetary bodies, such as Mars," said Justine Sauvage, the study's lead author and a postdoctoral fellow at the University of Gothenburg who conducted the research as a doctoral student at URI. The process driving the research team's findings is radiolysis of water—the splitting of water molecules into hydrogen and oxidants as a result of being exposed to naturally occurring radiation. Steven D'Hondt, URI professor of oceanography and a co-author of the study, said the resulting molecules become the primary source of food and energy for the microbes living in the sediment. "The marine sediment actually amplifies the production of these usable chemicals," he said. "If you have the same amount of irradiation in pure water and in wet sediment, you get a lot more hydrogen from wet sediment. The sediment makes the production of hydrogen much more effective." Justine Sauvage, lead author of the study, measures dissolved oxygen content in sediment cores collected in the North Atlantic. Photo courtesy of Justine Sauvage Why the process is amplified in wet sediment is unclear, but D'Hondt speculates that minerals in the sediment may "behave like a semiconductor, making the process more efficient." The discoveries resulted from a series of laboratory experiments conducted in the Rhode Island Nuclear Science Center. Sauvage irradiated vials of wet sediment from various locations in the Pacific and Atlantic Oceans, collected by the Integrated Ocean Drilling Program and by U.S. research vessels. She compared the production of hydrogen to similarly irradiated vials of seawater and distilled water. The sediment amplified hydrogen production by as much as a factor of 30. "This study is a unique combination of sophisticated laboratory experiments integrated into a global biological context," said co-author Arthur Spivack, URI professor of oceanography. The implications of the findings are significant. "If you can support life in subsurface marine sediment and other subsurface environments from natural radioactive splitting of water, then maybe you can support life the same way in other worlds," said D'Hondt. "Some of the same minerals are present on Mars, and as long as you have those wet catalytic minerals, you're going to have this process. If you can catalyze production of radiolytic chemicals at high rates in the wet Martian subsurface, you could potentially sustain life at the same levels that it's sustained in marine sediment."
Sauvage added, "This is especially relevant given that the Perseverance Rover has just landed on Mars, with its mission to collect Martian rocks and to characterize its habitable environments." D'Hondt said the research team's findings also have implications for the nuclear industry, including for how nuclear waste is stored and how nuclear accidents are managed. "If you store nuclear waste in sediment or rock, it may generate hydrogen and oxidants faster than in pure water. That natural catalysis may make those storage systems more corrosive than is generally realized," he said. The next steps for the research team will be to explore the effect of hydrogen production through radiolysis in other environments on Earth and beyond, including oceanic crust, continental crust and subsurface Mars. They also will seek to advance the understanding of how subsurface microbial communities live, interact and evolve when their primary energy source is derived from the natural radiolytic splitting of water.
10.1038/s41467-021-21218-z
Chemistry
New quantum material could warn of neurological disease
Hai-Tian Zhang et al, Perovskite nickelates as bio-electronic interfaces, Nature Communications (2019). DOI: 10.1038/s41467-019-09660-6 Journal information: Nature Communications
http://dx.doi.org/10.1038/s41467-019-09660-6
https://phys.org/news/2019-04-quantum-material-neurological-disease.html
Abstract Functional interfaces between electronics and biological matter are essential to diverse fields including health sciences and bio-engineering. Here, we report the discovery of spontaneous (no external energy input) hydrogen transfer from biological glucose reactions into SmNiO 3 , an archetypal perovskite quantum material. The enzymatic oxidation of glucose is monitored down to ~5 × 10 −16 M concentration via hydrogen transfer to the nickelate lattice. The hydrogen atoms donate electrons to the Ni d orbital and induce electron localization through strong electron correlations. By enzyme-specific modification, spontaneous transfer of hydrogen from the neurotransmitter dopamine can be monitored in physiological media. We then directly interface an acute mouse brain slice onto the nickelate devices and demonstrate measurement of neurotransmitter release upon electrical stimulation of the striatum region. These results open up avenues for use of emergent physics present in quantum materials in trace detection and conveyance of bio-matter, bio-chemical sciences, and brain-machine interfaces. Introduction Functional interfaces between biological and synthetic matter can greatly benefit from hydrogen transfer, which is of broad relevance to bio-sensing and bio-chemical sciences. Sensing media that respond to low concentrations of bio-markers are therefore relevant in this context; however, they must remain functional near room (or body) temperature while constantly exposed to complex biological media. As a promising candidate, the perovskite nickelate SmNiO 3 (SNO, space group Pbnm) 1 is water-stable and belongs to a class of strongly correlated quantum materials whose properties are highly sensitive to the occupancy of electrons in their partially filled orbitals 2 , 3 , 4 . When doped with charge carriers, SNO shows massive electronic structure changes: for one electron/unit cell of doping from hydrogen, the electrical resistance changes by ~10 orders of magnitude 5 . In previous work, perovskite nickelates have shown potential for electric field detection in salt water media 6 . Glucose is a sugar that is essential for energy production in organisms and widely serves as a model system for bio-chemical studies. In nature, glucose can be oxidized into gluconolactone by losing hydrogen in the presence of the glucose oxidase (GOx) enzyme 7 , and this reaction is seen across various organisms 8 , 9 . Utilizing an external electric field, perovskite oxide nano-particles have been used for glucose detection 10 , 11 . An important strategy for understanding such biological and bio-chemical reactions involves measurement of the hydrogen transfer processes. Here, we present enzyme-mediated spontaneous hydrogen transfer between the glucose reaction and SNO devices, as well as interfacing of the perovskite devices with acute mouse brain slices. Results Reaction mechanism Figure 1a shows the schematic pathway for spontaneous atomic hydrogen transfer between the glucose–GOx reaction and a perovskite, where the nickelate participates in the reaction by accepting the hydrogen in the glucose–enzyme–oxide transfer chain. The reaction mechanism is described in Fig. 1b . During the glucose–enzyme–SNO reaction, the hydrogen atoms from the glucose are first transferred to the GOx enzyme, as occurs in nature, and then into the SNO lattice. This process occurs spontaneously without the need for any external energy input.
The hydrogen then bonds with oxygen anions and occupies interstitial sites among the oxygen octahedra in SmNiO 3 , contributing an electron to the d orbitals of nickel 5 . The hydrogen acts as a donor dopant in the lattice. As a result, the singly occupied Ni e g orbitals in glucose-reacted SNO (GSNO) become doubly occupied, and the additional electron in the e g orbital imposes large on-site Mott–Hubbard electron–electron repulsion, leading to localization of the charge carriers and a resistivity increase 1 , 12 , as shown in Fig. 1c . Such a hydrogen-induced conduction suppression serves as a sensitive platform for chemical transduction at the interface between the nickelate films and the biological glucose reaction. Fig. 1 Spontaneous hydrogen transfer between perovskite and glucose–enzyme reaction. a Schematic figure of the atomic hydrogen transfer from the glucose to perovskite. The glucose oxidase (GOx) enzymes are anchored on the gold electrode via cystamine bonding (details are described in Supplementary Fig. 1 ). Figure not drawn to scale for clarity. b Reaction mechanism of glucose+SmNiO 3 transformation to gluconolactone+G-SmNiO 3 . The GOx enzyme serves as a catalyst and transfers hydrogen from glucose to SmNiO 3 , referred to as G-SmNiO 3 . The hydrogens bonded with carbons are omitted for figure clarity. c The electron filling configuration of the Ni 3 d orbitals in SmNiO 3 and G-SmNiO 3 . For the pristine SmNiO 3 , the e g orbitals are singly occupied. In the case of G-SmNiO 3 , the donors doped from the hydrogen occupy an e g orbital, resulting in large on-site coulombic repulsion energy U , and localizing the charge carriers, resulting in reduction of electronic conductivity Full size image Electrical characterization To demonstrate the hydrogen transfer from the glucose–GOx reaction to SNO, SNO devices with GOx-modified Au electrodes were first fabricated, as schematically shown in Fig. 2a (for details, see Supplementary Methods and Supplementary Fig. 1 ). Next, atomic force microscopy (AFM) and cyclic voltammetry (CV) measurements were performed to verify the successful decoration of GOx on the Au surface. As shown in Fig. 2b , bright GOx dots were observed on the Au surface. A line scan along AB indicates the height of the GOx is around 5 nm, which is consistent with the actual size of GOx 13 . The pristine Au surface is smooth with a roughness of ~0.7 nm (Supplementary Fig. 2 ). In the CV scan, a pair of reversible electron transfer peaks were observed at the position characteristic of the GOx enzyme (Fig. 2c ) 14 . No CV peak was found in this voltage region when the measurement is performed on a bare Au electrode surface (Supplementary Fig. 3 ). With these measurements, we can confirm the existence of GOx on the Au surface. The reaction between the enzyme–SNO device and glucose solution was initiated by applying a droplet (20 μL) of 0.5 M glucose solution (in deionized (DI) water) on top of the device, as schematically shown in Fig. 2a . After the glucose droplet was applied, a sharp increase of resistance of the enzyme–SNO device was observed; see the red curve in Fig. 2d . However, if there is no GOx decoration on the SNO device, no reaction occurs between the glucose solution and the nickelate device, as shown in the black curve of Fig. 2d . The reacted solution was subsequently characterized by Fourier-transform infrared (FTIR) spectroscopy measurement, and the formation of gluconolactone was observed (Supplementary Fig.
4 ), which is consistent with the reaction mechanism described in Fig. 1b . The reaction occurs spontaneously without any external electric fields (Supplementary Fig. 5 ). The resistance of the device can be reversed back to its original state by annealing, due to the room temperature metastable trapping of the hydrogen in the perovskite lattice 5 . After the recovery, new GOx enzyme can be decorated onto the same device and the entire process can be reproduced (Supplementary Fig. 5 ). Fig. 2 Electrical response of nickelate devices interfaced with glucose without external energy. a Schematic figure of the enzyme-SmNiO 3 (SNO) device, with glucose oxidase (GOx) decorated Au electrodes. Before the reaction, glucose solution was added on top of the device surface, as shown in the zoomed-in figure on the right. b The surface morphology of the GOx-modified Au surface measured by atomic force microscopy (AFM). The GOx molecules are the bright dots on the surface, and a line scan along AB shows the height of the GOx is around 4–5 nm. c Cyclic voltammetry (CV) measurements with the GOx-modified Au surface as a working electrode. Electrochemical reduction and oxidation peaks of GOx were observed as expected 14 . d Temporal resistance of the enzyme–SNO device after 0.5 M glucose solution is applied as shown in ( a ). A clear increase in resistance is observed after the glucose solution is applied (red curve). No change in resistance was observed for the control SNO sample without any GOx modification (black curve in the inset). R 0 is the resistance of the pristine enzyme–SNO device. e Resistance increase of the enzyme–SNO device after the device is soaked in glucose solution for 1 h at different concentrations. A monotonic increase of R / R 0 is observed with increasing glucose concentration. The enzyme–SNO device is responsive to glucose concentration down to 5 × 10 −16 M (signal to noise ratio >3). The error bar shown in the inset plot was determined from the standard deviation of 10 measurements Full size image To demonstrate the crucial role of SNO in this reaction, GOx was used to modify Au electrodes on control groups, including transparent oxide conductors such as tin-doped indium oxide (ITO) and fluorine-doped SnO 2 (FTO), and Pd, an elemental metal. No change in electrical behavior was observed (Supplementary Fig. 6 ). SrTiO 3 and Nb-doped SrTiO 3 with empty and partially filled d orbitals were also decorated with GOx, and no spontaneous hydrogen transfer was seen (Supplementary Fig. 7 ). The SNO devices were stable in water and the doping from glucose was non-volatile at room temperature (Supplementary Figs. 8 and 9 ). The enzyme–SNO devices were highly responsive to dilute glucose concentrations and showed good selectivity. For the responsivity test, the enzyme–SNO devices were soaked in glucose solutions of different concentrations for one hour, and the resistance ratio ( R / R 0 ) was plotted in Fig. 2e . In all cases, the device resistance increased after the reaction, and R / R 0 becomes larger with increasing glucose concentration. The R / R 0 at the dilute limit of the glucose concentration is shown in the inset of Fig. 2e , and the detection limit is determined as 5 × 10 −16 M (signal to noise ratio >3) (for comparison with glucose sensing literature, see Supplementary Fig. 10 ).
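The signal-to-noise criterion can be illustrated with a short, hedged sketch; the blank and response values below are invented stand-ins for the repeated measurements described above, not the study's data.

```python
import numpy as np

rng = np.random.default_rng(0)
blank = 1.0 + rng.normal(0.0, 0.002, size=10)   # hypothetical R/R0 of blanks

def detection_limit(concs_M, responses, snr=3.0):
    """Lowest tested concentration whose mean response (R/R0 - 1) exceeds
    snr times the standard deviation of the blank measurements."""
    noise = np.std(blank, ddof=1)
    for c, r in sorted(zip(concs_M, responses)):
        if np.mean(r) - 1.0 > snr * noise:
            return c
    return None

concs = [5e-17, 5e-16, 5e-15]
resp = [[1.003] * 10, [1.02] * 10, [1.08] * 10]  # hypothetical R/R0 readings
print(detection_limit(concs, resp))              # -> 5e-16 in this example
```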
The ultralow detection limit of our enzyme–SNO devices is a unique attribute of strong electron correlations, a quantum mechanical effect wherein minuscule perturbation to the electron occupancy of orbitals can result in giant modulation of the transport gap 1 . The detection of glucose is reproducible, as shown in Supplementary Fig. 11 . The GOx–SNO devices also function at body temperature (37 °C); see Supplementary Fig. 12 . To test the selectivity of the enzyme–SNO device, 20 μL of 0.5 M mannose, galactose, and glucose solutions were separately applied to the enzyme–SNO device; no reaction was observed for the mannose and galactose solutions, as seen from electrical characterization (Supplementary Fig. 13 ). Synchrotron X-ray-based characterization X-ray diffraction measurements were performed to study the structural evolution in glucose-reacted nickelates (GSNO) with scans around the LaAlO 3 substrate 002 peak (pseudocubic notation). The sample abbreviations and treatment conditions are summarized in Supplementary Table 1 . The pristine SNO 002 peak was observed at a lower Q z value compared to the LaAlO 3 substrate due to its larger out-of-plane lattice parameter; see Supplementary Fig. 14 . Figure 3a shows position-specific X-ray diffraction data of the reacted GSNO sample with patterned electrodes. The red solid curve is on top of a GOx-modified Au electrode, while the blue dashed curve is on a Pd electrode without any GOx modification. After the reaction, an extra peak with smaller Q z (arising from the hydrogen doping-induced lattice expansion 6 ) was found in the red curve besides the original SNO 002 peak, while no extra peak was seen for the sample without enzyme modification. The observation of both pristine and hydrogen-doped SNO peaks in the red curve suggests a two-layer structure, where the hydrogen-doped SNO layer is restricted to a thin near-surface layer on top of the pristine SNO due to the self-limiting kinetics at room temperature and the fact that there is no external energy supplied. Fig. 3 Mechanism of the spontaneous reaction between the enzyme–SNO device and glucose. a Synchrotron X-ray diffraction scans of glucose-reacted SmNiO 3 (SNO) devices with and without glucose oxidase (GOx) enzyme modification. The scans are along the Q z direction around the 002 peak of the LaAlO 3 substrate (pseudocubic notation). b Angle-dependent X-ray absorption near edge spectroscopy (XANES) spectra on glucose-reacted SNO devices with and without GOx enzyme modification (Ni K-edge). At a surface-sensitive incident angle of 0.05 o , XANES spectra acquired on the GOx-modified electrode on GSNO show pronounced reduction in the white line peak amplitude and the effective pre-edge humps, as compared to the electrode without any GOx, suggesting orbital filling at the SNO surface due to the hydrogen transfer. The blue dashed curve in the figure inset is shifted upward for clarity of data presentation. At an incidence angle of 5.05 o , XANES spectra acquired on the GOx-modified device show negligible difference with respect to that without enzyme modification, which indicates the majority of the film is still pristine SNO. The insets show the zoomed-in pre-edge feature in the XANES spectra. c Classical MD trajectory of a representative FADH2 molecule. Snapshots show the conformational changes that the FADH2 molecule undergoes over timescales of ~10 ns before approaching the SNO (001) surface (pseudocubic notation).
d Several tens of FADH2 near-surface conformations from ~500 ns of classical MD trajectories are sampled and used as starting configurations for AIMD simulations. Two representative samples are illustrated to demonstrate the spontaneous hydrogen transfer from an H site in FADH2 to surface oxygen of SNO (001). In both the depicted cases, one of the hydrogens from FADH2 is extracted and adsorbed onto the SNO (001) surface (zoomed-in view); the extraction process is spontaneous, with an energetic gain as large as 1.8 eV. Classical MD simulations suggest that the steric effects are important and can hinder the hydrogen transfer from FADH2 to SNO (001), as shown by detailed first principles calculations of representative trajectories (see Supplementary Fig. 21 ) Full size image A combination of angle-dependent X-ray absorption near edge spectroscopy (XANES) measurements and electron transport modeling was performed to investigate the depth profile of hydrogen in the GSNO. Figure 3b shows angle-dependent XANES spectra (Ni K-edge), presenting a significant contrast between the surface of GSNO and deeper layers in the film. At the incident angle (0.05°) below the total reflection critical angle that entails surface-sensitive measurements, the XANES spectra on the respective Au (GOx modified) and Pd (no GOx) electrodes show pronounced differences. Firstly, the white line peak amplitude is markedly weaker on the GOx-modified Au electrode as compared to that on the Pd electrode. Secondly, the effective integrated area underneath the XANES pre-edge hump exhibits significant reduction on the GOx-modified Au electrode. Both reductions indicate d -orbital filling due to the electron doping at the near-surface region. In sharp contrast, the XANES spectra overlap in all aspects between the electrodes with and without enzyme at the large angle of incidence (5.05°). The reduction in the pre-edge hump area at different incidence angles is quantified by the area ratio between pristine SNO (no enzyme) and GSNO (with enzyme). For the 0.05° incidence angle, the area ratio is 2.60, suggesting hydrogen doping at the surface. Almost no reduction was found for the 5.05° incidence angle, with a ratio of 1.04. The shallow X-ray probing depth at an incidence angle of 0.05° sets the maximum (upper bound) doping layer thickness to ~10 nm. Tunneling transport modeling of the glucose-treated SNO devices in fact indicates the doped layer to be of the order of 1 nm thickness (Supplementary Figs. 15 and 16 ). While the self-limiting kinetics at near room temperature eventually leads to a thin fully doped surface layer, the GOx–SNO devices can be used numerous times before the resistance saturation is reached, as shown in Supplementary Fig. 17 . Classical and quantum mechanical simulations We use a combination of classical molecular dynamics (MD) and quantum chemical simulations to understand the thermodynamics and kinetics of the spontaneous hydrogen transfer mechanism. There are two key steps involved: the first reductive half-reaction of β- d -glucose to gluconolactone happens spontaneously in the presence of the GOx enzyme, where the 1,3 hydroxyl groups of glucose donate hydrogen to the redox cofactor flavin adenine dinucleotide (FAD) of GOx, forming FADH2. This process has been studied in quantum chemical and docking simulations, with a heat of formation ~−600 kcal/mol 15 , 16 . The second step involves hydrogen transfer from FADH2 to the strongly correlated oxide SNO.
We evaluate the energetics of dehydrogenation of FADH2 using quantum chemical simulations. While the energetic cost to dehydrogenate FADH2 is high (~2.2–3.2 eV/hydrogen), the presence of SNO allows for spontaneous hydrogen transfer from FADH2 to SNO (see Supplementary Fig. 18 and Supplementary Methods for details). To simulate the dynamics of FADH2 interaction with SNO, we perform classical MD simulations to adequately sample sterically acceptable configurations of FADH2 at the active SNO sites, i.e. surface O (see the simulation box in Supplementary Fig. 19 ). The classical MD simulations suggest that the conformational dynamics of FADH2 is a slow process and the diffusion of FADH2 molecules to the SNO (001) (pseudocubic notation) surface occurs at timescales of tens of nanoseconds (see Fig. 3c for snapshots from a representative trajectory and Supplementary Movie 1 ). The FADH2 molecules undergo a series of conformational changes before adsorbing onto the SNO (001) surface. We sample several such energetically favorable near-surface configurations of FADH2 from the classical MD and use them as starting configurations for smaller ab initio MD (AIMD) models to study the effects of strong correlation and its role in FADH2 dehydrogenation (see Supplementary Fig. 20 ). Figure 3d shows snapshots from two representative AIMD trajectories that depict the temporal evolution of the FADH2 molecules near the SNO surface. The magnified images track the FADH2 and the NiO 6 octahedra near the SNO surface (top panel of Fig. 3d ). For both the cases shown, we observe spontaneous hydrogen transfer to surface oxygen of SNO within 2 ps of simulation; also see Supplementary Movie 2 . This picture is consistent with the enzyme-assisted hydrogen transfer mechanism depicted schematically in Fig. 1 . We find that the conformations of the FADH2 play a key role in dictating the hydrogen transfer: if the FADH2 conformations are sterically favorable, the process is spontaneous (see Supplementary Fig. 21 and Supplementary Movie 3 ). Interfacing with mouse brain slice We further extended the experimental studies to another important bio-marker, dopamine (DA), which is a neurotransmitter that plays a significant role in motivation and learning 17 . Low levels of DA are causally linked to the progression of Parkinson's disease (PD), and are hypothesized to be implicated in schizophrenia and attention deficit hyperactivity disorder (ADHD) 18 , 19 , 20 . Consequently, detection of low concentrations of DA is required for future studies of these diseases and for the development of pharmacological therapies 21 . DA can be monitored by our nickelate devices using the horseradish peroxidase (HRP) enzyme, as schematically shown in Supplementary Fig. 22a . The HRP–SNO device is responsive to DA in DI water down to 5 × 10 −17 M (Supplementary Fig. 22b ; see Supplementary Fig. 23 for comparison with literature). The HRP–SNO devices were also functional in biological media and responded to DA in artificial cerebrospinal fluid (ACSF) (see Fig. 4a ). As control experiments, the HRP–SNO device was found to be stable in both pure ACSF and DI water, and the HRP enzyme is essential for the hydrogen transfer process to the nickelate lattice (Supplementary Fig. 24 ). Enzymatic selectivity coupled with the spontaneous ion–electron transfer therefore ensures robustness of the nickelate quantum material in various biological and brain environments. Fig. 4 Direct interfacing of HRP–SNO device with acute mouse brain slice.
a Electrical response of the horseradish peroxidase–SmNiO 3 (HRP–SNO) devices to varying dopamine concentration in artificial cerebrospinal fluid. The device resistance change is presented as the ratio before and after the reaction ( R / R 0 ). The error bar was determined from the standard deviation of 10 measurements in each case. b A schematic (drawing not to scale for clarity) showing the process of interfacing an acute mouse brain slice with the HRP–SNO device. The black dashed lines in the brain anatomy map show where the striatum slice and primary visual cortex slice were cut. Under electrical stimulation, dopamine molecules are released from the striatum slice and dope the SNO device through the hydrogen transfer assisted by the HRP enzyme. The brain anatomy image is adapted with permission from an open data resource © 2015 Allen Institute for Brain Science. Allen Brain Atlas API 26 . c A photo of the experimental set-up during the interfacing between the striatum slice and the HRP–SNO device. The experiment was performed in an aqueous artificial cerebrospinal fluid (ACSF) environment, and the stimulation electrode was used to trigger dopamine release from the striatum slice. The striatal brain slice is ~10 × 5 mm and the HRP–SNO device region (red rectangle) is fully covered under the slice. d I – V characteristics of the HRP–SNO device interfaced with the striatal brain slice. When stimulated, the striatal brain slice releases dopamine, which can be monitored by the HRP–SNO devices as seen from the change in channel resistance. e The HRP–SNO device was interfaced with the striatum slice in the same way as described in Fig. 4c , but with no electrical stimulation (and thus no dopamine release). No resistance change was seen, and the device was stable in the spinal fluid environment. f The primary visual cortex part of the mouse brain, which releases little or no dopamine under electrical stimulation 24 , was interfaced with the HRP–SNO device. After the electrical stimulation, a much smaller response (only ~2% change in resistance) was observed compared to that of striatum slice stimulation Full size image We then directly interfaced an acute mouse brain slice onto the nickelate devices to monitor DA release triggered by electrical stimulation of the striatum, the brain area enriched with dopaminergic projections, as schematically shown in Fig. 4b and c . In this experiment, an acute mouse striatal slice was placed on a HRP–SNO device in a chamber continuously perfused with oxygenated ACSF solution (see the Supplementary Methods section for complete details), and electrical stimulation was applied to trigger the release of DA from the striatum 22 . Figure 4d shows the corresponding response of the HRP–SNO device to DA released from the stimulated striatal slice. The resistance increase of the HRP–SNO device (~23%) approximately corresponds to a DA concentration of 10 −10 –10 −9 M, based on the DA-concentration-dependent experiments shown in Fig. 4a . Such an estimate is consistent with stimulation experiments under similar conditions 23 , considering that only a small fraction of the DA molecules effuse out from the brain synapses and reach the HRP–SNO device surface. As a control experiment, the HRP–SNO device was interfaced with a striatal slice without electrical stimulation, and negligible response was observed (Fig. 4e ).
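A hedged sketch of how a measured resistance change could be mapped back to a dopamine concentration through a Fig. 4a-style calibration follows; the calibration pairs below are hypothetical stand-ins, not the measured dose-response data.

```python
import numpy as np

# Hypothetical calibration: dopamine concentration (M) vs. R/R0.
cal_conc_M = np.array([1e-12, 1e-11, 1e-10, 1e-9, 1e-8])
cal_r_ratio = np.array([1.05, 1.12, 1.20, 1.28, 1.35])  # monotonic response

def estimate_dopamine(r_ratio):
    """Invert the calibration by interpolating log10(concentration)
    against R/R0; assumes a monotonic calibration curve."""
    return 10.0 ** np.interp(r_ratio, cal_r_ratio, np.log10(cal_conc_M))

print(f"{estimate_dopamine(1.23):.1e} M")  # ~23% increase -> ~1e-10 M here
```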
In another control experiment, identical electrical stimulation was applied to a primary visual cortex (V1) slice, where there is expected to be little or no DA innervation and therefore minimal or no DA release 24 . A much smaller response (only ~2% resistance change) was found from the HRP–SNO device interfaced with the stimulated V1 slice compared to the case of the stimulated striatal slice, suggesting that the large response observed with striatum slice stimulation is from DA release (see Fig. 4f ). The much smaller response observed with V1 stimulation is likely from small amounts of DA-like species such as serotonin 25 . Also, the HRP enzyme was found to be critical in transferring the hydrogen from DA to SNO. No change in resistance was found when an SNO device with only gold electrodes (without HRP enzyme) was interfaced with the striatal slices while the same electrical stimulation was applied (see Supplementary Fig. 25 ). Discussion We have presented the discovery of room temperature enzyme-mediated spontaneous hydrogen transfer from model biological reactions and brain matter into a perovskite quantum material. The hydrogen transfer from biological reactions at the nickelate interface triggers a unique response: strong Coulomb repulsion that localizes charge carriers and suppresses electrical conduction. Coupled with the ability to function at body temperature in brain and biological environments, this enables response to ultra-low concentrations of bio-markers. The results open up directions for exploring correlated quantum systems in health sciences, brain interfaces, and biological routes to dope emerging semiconductors. Reporting summary Further information on experimental design is available in the Nature Research Reporting Summary linked to this article. Data availability All data presented in the main text and the supplementary information are available.
What if the brain could detect its own disease? Researchers have been trying to create a material that "thinks" like the brain does, which would be more sensitive to early signs of neurological diseases such as Parkinson's. Thinking is a long way off, but Purdue University and Argonne National Laboratory researchers have engineered a new material that can at least "listen." The lingua franca is ionic currents, which help the brain perform a particular reaction, needed for something as basic as sending a signal to breathe. Detecting ions means also detecting the concentration of a molecule, which serves as an indicator of the brain's health. In a study published in Nature Communications, researchers demonstrate the ability of a quantum material to automatically receive hydrogen when placed beneath an animal model's brain slice. Quantum means that the material has electronic properties that both can't be explained by classical physics, and that give it a unique edge over other materials used in electronics, such as silicon. The edge, in this case, is strong, "correlated" electrons that make the material extra sensitive and extra tunable. "The goal is to bridge the gap between how electronics think, which is via electrons, and how the brain thinks, which is via ions. This material helped us find a potential bridge," said Hai-Tian Zhang, a Gilbreth postdoctoral fellow in Purdue's College of Engineering and first author on the paper. In the long run, this material might even bring the ability to "download" your brain, the researchers say. "Imagine putting an electronic device in the brain, so that when natural brain functions start deteriorating, a person could still retrieve memories from that device," said Shriram Ramanathan, a Purdue professor of materials engineering whose lab specializes in developing brain-inspired technology. "We can confidently say that this material is a potential pathway to building a computing device that would store and transfer memories," he said. The researchers tested this material on two molecules: Glucose, a sugar essential for energy production, and dopamine, a chemical messenger that regulates movement, emotional responses and memory. Because dopamine amounts are typically low in the brain, and even lower for people with Parkinson's disease, detecting this chemical has been notoriously difficult. But detecting dopamine levels early would mean sooner treatment of the disease. "This quantum material is about nine times more sensitive to dopamine than methods that we use currently in animal models," said Alexander Chubykin, an assistant professor of biological sciences in the Purdue Institute for Integrative Neuroscience, based in Discovery Park. The quantum material owes its sensitivity to strong interactions between so-called "correlated electrons." The researchers first found that when they placed the material in contact with glucose molecules, the oxides would spontaneously grab hydrogen from the glucose via an enzyme. The same happened with dopamine released from a mouse brain slice. The strong affinity to hydrogen, as shown when researchers at Argonne National Laboratory created simulations of the experiments, allowed the material to extract atoms on its own—without a power source. "The fact that we didn't provide power to the material for it to take in hydrogen means that it could bring very low-power electronics with high sensitivity," Ramanathan said. "This could be helpful for probing unexplored environments, as well." 
The researchers also say that this material could sense the atoms of a range of molecules, beyond just glucose and dopamine. The next step is creating a way for the material to "talk back" to the brain.
10.1038/s41467-019-09660-6
Medicine
Small molecule inhibitor prevents or impedes tooth cavities in a preclinical model
Qiong Zhang et al. Structure-Based Discovery of Small Molecule Inhibitors of Cariogenic Virulence, Scientific Reports (2017). DOI: 10.1038/s41598-017-06168-1 Journal information: Scientific Reports
http://dx.doi.org/10.1038/s41598-017-06168-1
https://medicalxpress.com/news/2017-08-small-molecule-inhibitor-impedes-tooth.html
Abstract Streptococcus mutans employs a key set of virulence factors, three glucosyltransferase (GtfBCD) enzymes, to establish cariogenic biofilms. Therefore, inhibition of GtfBCD would provide anti-virulence therapeutics. Here, a library of 500,000 small-molecule compounds was screened in silico against the available crystal structure of the GtfC catalytic domain. Based on the predicted binding affinities and drug-like properties, small molecules were selected and evaluated for their ability to reduce S. mutans biofilms, as well as to inhibit the activity of Gtfs. The most potent inhibitor was further characterized for Gtf binding using an OctetRed instrument, which yielded a low micromolar K D against GtfB and a nanomolar K D against GtfC, demonstrating selectivity towards GtfC. Additionally, the lead compound did not affect the overall growth of S. mutans and commensal oral bacteria, and it selectively inhibited biofilm formation by S. mutans , indicative of its selectivity and non-bactericidal nature. The lead compound also effectively reduced cariogenicity in vivo in a rat model of dental caries. An analog that docked poorly in the GtfC catalytic domain failed to inhibit the activity of Gtfs and S. mutans biofilms, signifying the specificity of the lead compound. This report illustrates the validity and potential of structure-based design of anti-S. mutans virulence inhibitors. Introduction Dental caries is a multifactorial disease of bacterial origin, characterized by the localized destruction of dental hard tissues 1 , 2 . Though the oral cavity harbors over 700 different bacterial species, Streptococcus mutans initiates the cariogenic process and remains the key etiological agent 3 . Using key matrix-producing enzymes, the glucosyltransferases (Gtfs), S. mutans produces sticky glucan polymers, which facilitate the attachment of the bacteria to the tooth surface. These glucans are a major component of the biofilm matrix that shields the microbial community from host defenses and from mechanical and oxidative stresses, and they orchestrate the formation of cariogenic biofilms 4 . Furthermore, copious amounts of lactic acid are produced as a byproduct of bacterial consumption of dietary sugars within the mature biofilm community, which ultimately leads to demineralization of the tooth surface and ensuing cariogenesis. Current practices to prevent dental caries remove oral bacteria indiscriminately through chemical and physical means such as mouthwash and tooth brushing 5 . Since biofilm assembly renders bacteria more resistant to antibiotics and other manipulations, these traditional approaches have had only limited success. Additionally, existing mouthwashes are often associated with adverse side effects because broad-spectrum antimicrobials are often detrimental to beneficial commensal species. Selectively targeting cariogenic pathogens such as S. mutans has been explored previously; however, the antimicrobial peptide used was found to alter the overall microbiota as well 6 . Our increasing understanding of bacterial virulence mechanisms provides new opportunities to target and interfere with crucial virulence factors such as Gtfs. This approach is not only selective but may also help preserve the natural microbial flora of the mouth 7 , avoiding the strong selective pressure that promotes the development of antibiotic resistance, a major public health issue in the antibiotic era.
It is well established that glucans produced by S. mutans Gtfs contribute significantly to the cariogenicity of dental biofilms. Therefore, inhibition of Gtf activity and the consequent glucan synthesis would impair S. mutans virulence, which could offer an alternative strategy to prevent and treat biofilm-related diseases 8 , 9 . S. mutans harbors three Gtfs: GtfB, GtfC, and GtfD. While GtfB synthesizes predominantly insoluble glucans, GtfD only produces water-soluble glucans, and GtfC can synthesize both soluble and insoluble glucans 10 , 11 , 12 . Previous studies have demonstrated that glucans produced by GtfB and GtfC are essential for the assembly of S. mutans biofilms 4 , while glucans produced by GtfD serve not only as a primer for GtfB, but also as a source of nutrients for S. mutans and other bacteria 13 , 14 . All Gtfs are composed of three functional regions: the N-terminal variable junction region, the C-terminal glucan-binding region, and the highly conserved catalytic region in the middle, which is essential for glucan synthesis. The crystal structure of GtfC from S. mutans has been determined 15 , which provides key molecular insights for the design and development of novel Gtf inhibitors. Polyphenolic compounds 16 , 17 , 18 , 19 , 20 , 21 , 22 , 23 that include catechins, flavonoids, proanthocyanidin oligomers, and other plant-derived analogs 24 , 25 and synthetic small molecules 26 have been studied extensively for years and were found to display modest anti-biofilm activities through modulating the expression of Gtfs of S. mutans . However, the selectivity of these bioactive compounds remains to be determined, and their potency against biofilms is not satisfactory. In the present study, novel inhibitors of S. mutans Gtfs were developed through in silico screening of commercial compound libraries against the active site of the catalytic domain of S. mutans GtfC. A lead compound targeting Gtfs was identified, synthesized, and shown to bind to Gtfs and inhibit S. mutans biofilm formation selectively in vitro . Furthermore, the lead compound possesses anti-virulence properties in vivo . Results Structure-based virtual screening to identify small-molecule compounds that target Gtfs and inhibit biofilm formation Taking advantage of the available crystal structure of the GtfC catalytic domain complexed with acarbose, we conducted a structure-based in silico screening of 500,000 drug-like compounds using the FlexX/LeadIT software. The top-ranked small molecules, as calculated using the binding energy scores in the FlexX software, were considered based on their binding pose, potential interactions with key residues, and ease of synthesis. Due to the abundance of polar residues in the GtfC active site, several of the top-scoring docking scaffolds contain aromatic rings, nitro groups, and polar functional groups such as amides, as well as heteroatoms such as sulfur. A total of 90 compounds with diverse scaffolds that vary in their functional groups, hydrophobicity, and H-bond accepting/donating capacity were then purchased and subjected to in vitro biofilm assays using cariogenic S. mutans . Seven potent, low-micromolar inhibitors were identified (Fig. 1A ). Two of these compounds (#G16 and #G43) were the most potent, as they inhibited more than 85% of S. mutans biofilms at 12.5 μM (Fig. 1B ).
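The funnel from 500,000 docked compounds down to the 90 purchased candidates amounts to a docking-score cutoff plus drug-likeness checks. The sketch below illustrates that kind of post-docking triage only; it assumes a hypothetical `flexx_scores.csv` export of FlexX scores, and the actual selection also weighed binding pose, key-residue interactions, and synthetic ease.

```python
# Hypothetical post-docking triage: keep compounds scoring better than
# -20 kJ/mol (the cutoff named in the Methods) that also pass Lipinski's
# rule of five, then shortlist the best-scoring 90.
import csv

from rdkit import Chem
from rdkit.Chem import Descriptors, Lipinski

def passes_lipinski(mol):
    """Standard rule-of-five check on an RDKit molecule."""
    return (Descriptors.MolWt(mol) <= 500
            and Descriptors.MolLogP(mol) <= 5
            and Lipinski.NumHDonors(mol) <= 5
            and Lipinski.NumHAcceptors(mol) <= 10)

hits = []
with open("flexx_scores.csv") as fh:            # columns: smiles, score_kj_mol (hypothetical file)
    for row in csv.DictReader(fh):
        score = float(row["score_kj_mol"])
        if score > -20.0:                       # more negative = better binding energy
            continue
        mol = Chem.MolFromSmiles(row["smiles"])
        if mol is not None and passes_lipinski(mol):
            hits.append((score, row["smiles"]))

hits.sort()                                     # best (most negative) scores first
for score, smiles in hits[:90]:                 # shortlist for purchase and biofilm assays
    print(f"{score:8.1f}  {smiles}")
```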
Compounds #G16 and #G43 share several functional groups, including a nitro group, heterocyclic rings, and polar carbonyl functionality. Figure 1 ( A ) Structures of the seven most potent Gtf inhibitors of S. mutans biofilms. ( B ) Biofilm inhibitory activities of the potent inhibitors at 12.5 µM as determined by the crystal violet assay. Inhibition of Gtfs by lead compounds A zymographic enzymatic assay was used to determine whether the lead compounds inhibited the activity of the Gtfs that are responsible for glucan production and biofilm formation. Supernatants containing Gtf proteins prepared from S. mutans bacterial cultures were subjected to SDS-PAGE analysis and the zymographic assay. Treatment of the SDS-PAGE gels with lead compounds #G16 and #G43 in the zymographic assay revealed that both compounds drastically reduced glucan production by the Gtfs, with #G43 being the more potent (Fig. 2A , bottom panels). The same amounts of the protein samples were used as controls and visualized by protein staining (Fig. 2A , top panels). The lead compounds were also tested against individual Gtfs using supernatant proteins harvested from cultures of various double mutants. Compound #G43 consistently inhibited the activity of both GtfB and GtfC (Fig. 2B and C ); ImageJ analysis of the band intensities suggests 80% inhibition of both enzymes, while compound #G16 had a smaller effect on the activity of GtfB (60% inhibition) (Fig. 2B ) and GtfC (70% inhibition) (Fig. 2C ). Overall, #G43 is more potent than #G16 in inhibiting Gtfs. Figure 2 Gtf patterns of S. mutans UA159 and its mutant variants. Culture supernatants were prepared from S. mutans UA159 wild type and gtf double mutants, and then subjected to SDS-PAGE analysis with equivalent amounts of protein in each lane. The upper panel was stained with Coomassie blue to monitor the total protein amounts, while the lower panel shows the enzymatic activities of Gtfs upon treatment with the lead compounds in the zymographic assay. The intensities of the bands were quantified using ImageJ in comparison to DMSO. ( A ) Effects of lead compounds #G16 and #G43 at 25 µM on the activity of Gtfs from wild-type S. mutans . ( B ) Effects of lead compounds #G16 and #G43 at 25 µM on the activity of GtfB from S. mutans GtfCD mutants. ( C ) Effects of lead compounds #G16 and #G43 at 25 µM on the activity of GtfC from S. mutans GtfBD mutants. Binding kinetics of the #G43 lead compound determined by OctetRed analysis Zymographic assays suggested that the lead compound #G43 inhibited the activity of both GtfB and GtfC. To determine whether the inhibition is attributable to binding of the lead compound to the enzymes, the OctetRed96 system was used to characterize protein–small molecule binding kinetics. The His-tagged catalytic domains of GtfB and GtfC were immobilized separately onto an anti-penta-HIS (HIS1K) biosensor, which consists of a high-affinity, high-specificity penta-His antibody pre-immobilized on a fiber-optic biosensor. This sensor was then exposed to varying concentrations of #G43. Assay data were fit to a 1:1 binding model with a fixed maximum response, which produced a K D value of 3.7 µM for GtfB; the K D value for GtfC was a more potent 46.9 nM (Fig. 3A and B ). These data suggest that the lead compound is selective toward GtfC, the protein used in the in silico analysis.
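The K D values above come from the instrument's 1:1 kinetic fit in the vendor software. As a rough, hedged illustration of how such a constant is extracted, a steady-state 1:1 Langmuir model, R(C) = Rmax·C/(K D + C), can be fit to equilibrium sensor responses with SciPy; the response values below are invented for illustration and are not the study's data.

```python
# Minimal steady-state 1:1 binding fit: R(C) = Rmax * C / (KD + C).
# Concentrations follow the 3-fold Octet dilution series used in the study;
# the sensor responses are hypothetical placeholders.
import numpy as np
from scipy.optimize import curve_fit

def langmuir(conc, rmax, kd):
    return rmax * conc / (kd + conc)

conc_uM = np.array([200.0, 66.6, 22.2, 7.4, 2.5])    # 3-fold series from 200 uM
resp_nm = np.array([0.95, 0.88, 0.72, 0.48, 0.22])   # hypothetical sensor shifts (nm)

(rmax, kd), pcov = curve_fit(langmuir, conc_uM, resp_nm, p0=(1.0, 10.0))
kd_err = np.sqrt(np.diag(pcov))[1]
print(f"Rmax = {rmax:.2f} nm, KD = {kd:.1f} +/- {kd_err:.1f} uM")
```

A full kinetic analysis, as performed by the Octet software, would instead fit the association and dissociation traces for k_on and k_off and report K D = k_off / k_on.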
It should be noted that the catalytic domain of GtfC is less soluble than that of GtfB, which may be responsible for the inherently higher off-rate of the His-tag from the sensor, leading to a weaker association curve compared with GtfB. Nevertheless, consistent nanomolar K D values were obtained from three independent experiments. Figure 3 Binding curves of compound #G43 at varying concentrations with ( A ) GtfB, and ( B ) GtfC catalytic domain. Expression of gtfs was not significantly affected by compound #G43 We also examined the effect of this potent small molecule on the gene expression of the gtfs . The relative expression level of the gtfs was evaluated by real-time RT-PCR. Compared to the DMSO control group, expression of the gtfs was marginally down-regulated after treatment with compound #G43 at different concentrations. However, no significant difference was observed between the treated and control groups, suggesting that compound #G43 inhibited Gtfs by binding to the targets rather than by altering the expression of its target genes, the gtfs (Fig. 4 ). Figure 4 Effect of compound #G43 on expression of gtfs in S. mutans. S. mutans UA159 cells treated with different concentrations of #G43 were harvested and used to prepare RNA. The expression of gtfs was examined by real-time RT-PCR. The mRNA expression levels were calibrated by 16S rRNA. Values represent the means ± standard deviations from three independent experiments. NS indicates no significant difference between DMSO control and compound-treated groups. A P value > 0.05 is considered not significant. The most potent compound is not bactericidal and did not inhibit the growth of commensal streptococcal species or other oral bacteria To determine the selectivity of the lead compound toward S. mutans biofilm formation versus bacterial growth, we evaluated the effects of the compound on bacterial growth and viability. No significant difference in S. mutans cell viability was observed between the control group and #G43-treated groups up to 200 µM (Fig. 5A ), suggesting that the compound is not bactericidal towards S. mutans . This compound was also evaluated for its ability to inhibit two oral commensal streptococci, S. sanguinis and S. gordonii , as the goal was to develop non-bactericidal and species-selective agents. The compound did not have any effect on the growth (Fig. 5B ) of either streptococcal species. In addition, we evaluated the effects of the compound on other oral bacteria, including Aggregatibacter actinomycetemcomitans VT1169, a Gram-negative facultative anaerobe, and Actinomyces naeslundii T14VJ1, a Gram-positive facultative anaerobe (Fig. 5C ). At 200 µM, the compound did not significantly inhibit Aggregatibacter actinomycetemcomitans . Slight inhibition (>20%) of Actinomyces naeslundii growth was observed at 200 µM, again suggesting selectivity towards S. mutans . Figure 5 Effects of the lead compound #G43 on cell viability. ( A ) Effects on S. mutans . S. mutans was treated with DMSO and a serial dilution of #G43. Cell viability was determined by the numbers of CFU on a logarithmic scale. ( B ) Effects on commensal species. S. gordonii , S. sanguinis , and S. mutans were treated with 200 µM of the compound or DMSO, and bacterial growth was measured at OD 470 and normalized to the DMSO control (100%). ( C ) Effects on Aggregatibacter actinomycetemcomitans and Actinomyces naeslundii . A. actinomycetemcomitans and A.
naeslundii were treated with the lead compound at 200 µM or 25 µM, or with the DMSO control; bacterial growth was measured at OD 470 and normalized to the DMSO control (100%). Values represent the means ± standard deviations from three independent experiments. NS indicates that the cell viability between DMSO control and compound-treated groups was not significantly different. A P value > 0.05 is considered not significant. #G43 did not inhibit biofilm formation by commensal streptococci but inhibited S. mutans in dual-species biofilms To determine the selectivity of the lead compound toward S. mutans biofilm formation over the biofilms of other species, we evaluated the effects of the compound on biofilm formation by two oral commensal bacteria: S. sanguinis and S. gordonii . No significant difference in S. sanguinis biofilm formation was observed between the control group and #G43-treated groups up to 200 µM (Fig. 6A ). A slight increase in biofilm formation by S. gordonii was observed upon treatment with the lead compound. Further, experiments using a dual-species model were conducted using S. mutans with either S. sanguinis (Fig. 6B ) or S. gordonii (Fig. 6C ). We observed a reduction in overall biofilm formation upon treatment with the compound. Moreover, the lead compound shifted the bacterial composition ratio of commensal streptococcus to S. mutans from an untreated 1:4 to either 4:1 for S. sanguinis (Fig. 6D ) or 3:2 for S. gordonii (Fig. 6E ). The increase in commensal bacteria upon treatment again suggests that the lead selectively affects S. mutans biofilms. Figure 6 Effects of compound #G43 on commensal single- and dual-species biofilms. ( A ) S. mutans , S. gordonii , and S. sanguinis were treated with DMSO or 25 µM of compound #G43, and the biomasses of each treated biofilm were quantitated by crystal violet staining and measured at OD 562 . ( B ) The cell viability of dual-species biofilms was determined by the numbers of CFU on a logarithmic scale using S. mutans and S. sanguinis . ( C ) The cell viability of dual-species biofilms was determined by the numbers of CFU on a logarithmic scale using S. mutans and S. gordonii . ( D ) Species distribution in dual-species biofilms with S. mutans and S. sanguinis . Bars represent the means and standard deviations of three independent experiments. ( E ) Species distribution in dual-species biofilms with S. mutans and S. gordonii . Bars represent the means and standard deviations of three independent experiments. *P < 0.05. Docking analysis and facile synthesis of #G43 and an inactive analog establish that the ortho primary benzamide moiety is crucial for potency To explore the underlying mechanism of #G43’s bioactivity, the compound was docked into the active site of GtfC to elucidate plausible interactions. The top docking pose of #G43 within the GtfC active site revealed several key interactions. The nitro group on the benzothiophene ring interacts with Arg540, the amide linker is in close proximity to Gln592, and pi-pi stacking interactions are observed between Trp517 and the benzene ring. Of particular importance is the interaction of the primary ortho amide group on the benzene ring with Glu515, Asp477, and Asp588. While the mechanism of glucan formation is not fully understood, Glu515, Asp477, and Asp588 are assumed to function as a nucleophile, a general acid/base catalyst, and a stabilizer of the glucosyl intermediate, respectively 15 .
Thus, we hypothesized that this functional ortho amide group is crucial for the binding of the compound to the protein. To test this, we designed an analog (#G43-D; Fig. 7A ) that does not contain the primary amide group and subjected it to docking analysis as a theoretical design. This scaffold failed to produce a good docking score in FlexX (greater than −25 kJ/mol) and yielded a weak binding pose (Fig. 7C ). Due to the absence of the primary amide group, the scaffold takes on a different orientation and makes poor interactions with the active site. Figure 7 Effects of the lead compound #G43 and its inactive analog #G43-D. ( A ) Chemical structures of the lead and its inactive analog. Docking poses of ( B ) compound #G43 in blue skeleton and ( C ) compound #G43-D in pink skeleton. Three key residue interactions are depicted by displaying residue chains. ( D ) Effects of the active and inactive compounds on the activity of Gtfs by zymographic assays. Glucan zymographic assays (bottom panel) were performed using SDS-PAGE analysis of Gtfs from culture supernatants of S. mutans UA159 incubated with the vehicle control DMSO, the synthesized active #G43, and its derivative at 50 µM. SDS-PAGE analysis of Gtfs (top panel) was used as a loading control. ( E ) Fluorescence microscopy images of S. mutans UA159 biofilms treated with the DMSO control, the synthesized #G43, and its derivative #G43-D at 100 µM. Viable bacterial cells were stained with 2.5 µM Syto9 (green). The lead compound was re-synthesized in one step from commercially available reagents, anthranilamide and 5-nitro-1-benzothiophene-2-carboxylic acid, in excellent yield and was fully characterized (see supplemental data). We also synthesized the “inactive” analog (#G43-D) in one step by replacing the anthranilamide with aniline in the EDAC coupling synthesis. Zymographic analysis consistently showed that the lead compound #G43 drastically reduced glucan production, especially by GtfC. In contrast, the designed “inactive” compound #G43-D had a markedly reduced ability to inhibit glucan production (Fig. 7D ). Additionally, in vitro biofilm assays and fluorescence microscopy revealed that the inactive analog #G43-D did not inhibit S. mutans biofilms at concentrations up to 200 µM (Fig. 7E ). Binding studies of this analog against GtfB yielded a K D value of 68 µM, compared with a K D value of 3.7 µM for the active compound (see supplemental data). Our data demonstrate not only that inhibition of biofilms by selectively targeting Gtfs is plausible, but also that inclusion of the primary ortho amide group is crucial to maintaining potent anti-biofilm activity. Further structure–activity relationship studies are ongoing to improve the potency of #G43. #G43 reduced S. mutans virulence in vivo To evaluate the in vivo efficacy of the lead compound #G43, we tested the compound using a rat model of dental caries 27 (Table 1 ). All rats from the two experimental groups were colonized with S. mutans . Bacterial colonization appeared to be reduced in #G43-treated rats; however, the reduction did not reach statistical significance compared with the control group. The buccal, sulcal, and proximal surface caries scores of the treated animals were significantly reduced. These data suggest that the lead small molecule selectively targets the virulence factors, the Gtfs, and Gtf-mediated biofilm formation, rather than simply inhibiting bacterial growth.
Furthermore, the #G43-treated rats did not lose weight over the course of the study in comparison with the control group, suggesting that the compound is not toxic. Table 1 Effects of the lead compound on bacterial colonization and mean caries scores in vivo . Discussion Dental caries is a multifactorial disease in which S. mutans and other cariogenic species interact with dietary sugars to promote virulence. The currently marketed therapies for dental caries and other infectious diseases are non-selective and broad-spectrum in nature, which compromises the benefits of commensal bacteria in the oral flora. Thus, we conducted this study to develop novel small-molecule inhibitors selective for key virulence factors of S. mutans . As Gtfs are crucial for biofilm formation and the cariogenicity of S. mutans , we conducted an in silico screening of 500,000 drug-like small-molecule compounds targeting GtfC and identified top-scoring scaffolds for in vitro biofilm assays. Seven potent biofilm inhibitors emerged from this study; the lead compound, #G43, was further characterized and shown to have anti-biofilm activity through binding to GtfBC and inhibiting GtfBC activity. The lead compound drastically reduced bacterial virulence in the rat model of dental caries. In addition, protein–small molecule binding kinetic analysis of #G43 and GtfBC revealed that the lead compound has strong selectivity: low micromolar affinity for GtfB and a more potent nanomolar affinity for GtfC. Furthermore, compound #G43 selectively inhibited S. mutans biofilms in both single-species and dual-species settings. As the catalytic domains of GtfB and GtfC share 96% similarity at the amino acid sequence level, the selectivity of the compound is remarkable. Since the crystal structure of the GtfC/acarbose complex was used for the screening and identification of potent lead compounds, this result further demonstrates the validity of this structure-based drug design approach for precision drug discovery. Numerous studies have claimed the identification of natural and synthetic small molecules that inhibit S. mutans biofilm formation by affecting the expression of a variety of biofilm regulatory genes, including the gtfs 28 , 29 , 30 , 31 . Many compounds may have indirect effects on the expression of the gtfs , as they can target different signaling and metabolic pathways. None has been shown to have a direct effect on the activity of Gtfs. Further investigation through docking analysis of this lead compound identified critical interactions of its ortho primary amide group with key active-site residues of GtfC. An analog that does not contain this functional group lost the ability to inhibit the activity of Gtfs and in vitro biofilm formation, demonstrating that these effects are directly related and that inclusion of the primary ortho amide group is crucial to maintaining potent anti-biofilm activity. The lead compound contains a nitro group, and typically, nitro groups are not amenable to drug development owing to the potentially hazardous production of the nitroanion radical, nitroso intermediate, and N-hydroxy derivative 32 . However, this is a concern only for systemic drugs and not for the topical applications we intend to pursue. Nevertheless, efforts are underway to optimize the activity and explore the removal of such predicted groups. Further, we were encouraged to find that #G43 did not affect the survival rates of S.
mutans and two commensal streptococcal species up to 200 µM, and did not significantly affect other common oral bacteria such as Actinomyces naeslundii and Aggregatibacter actinomycetemcomitans . The non-toxic nature of #G43 was also evident in the rodent caries model, as no weight loss was observed in the rats. A recent study also reported the development of a Gtf inhibitor through a similar approach. The observed potency of our lead compound #G43 is slightly better than that of the previously reported scaffold 26 . Further, #G43 drastically inhibited cariogenicity in vivo but did not significantly inhibit S. mutans colonization. This is an interesting finding, as the compound effectively inhibited biofilm formation by S. mutans in vitro . It is possible that the sampling method skewed our results toward the total numbers of S. mutans recovered from the oral cavity rather than only the biofilm bacteria. In addition, in vivo inhibition of S. mutans glucan production may not be sufficient to inhibit in vivo biofilm formation and thus bacterial colonization. This would be a desirable outcome, as we can inhibit virulence while minimally affecting bacterial colonization, demonstrating a virulence-selective therapeutic approach. Moreover, in contrast to the reported compound, #G43 did not significantly affect the expression of Gtfs. We also demonstrate that the lead compound selectively binds to GtfC and GtfB, suggesting that the impact on GtfBC activity arises from direct interaction rather than from downregulation of gtfBC gene expression. In conclusion, using structure-based design, we have developed a unique low-micromolar biofilm inhibitor that targets S. mutans through binding to the key virulence factors, the Gtfs. Our compound is drug-like, non-bactericidal, easy to synthesize, and exhibits very potent efficacy in vivo . This report discloses an excellent candidate that can be developed into therapeutic drugs to prevent and treat dental caries. Methods Structure-Based 3D Database Search The crystal structure of the complex of GtfC and acarbose (PDB code: 3AIC) 15 was used for in silico screening. The GtfC active site was prepared by selecting residues and cofactors (water and MES) within 6.5 Å of acarbose; a pharmacophore consisting of Asp588 (H-acceptor) and Gln960 (H-donor) was then generated. The reliability of the FlexX/LeadIT package was assessed by virtually generating a 3D structure of acarbose using VEGA-Z and then docking the structure into the prepared GtfC active site. The resulting docking generated a binding mode comparable to the experimental data. A large library of about 500,000 small molecules obtained in 3D mol2 format from the free-access ZINC database was used for the in silico screening. Docking runs were performed with a maximum allowed number of 2000 poses for each compound. The resulting binding energies were ranked according to the highest-scoring conformation. Compounds with binding energies better than −20 kJ/mol were selected for further investigation. The structures of the top-scoring compounds were examined for their binding inside the GtfC pocket, drug-like properties based on Lipinski’s rules, and synthetic feasibility. Bacterial strains, culture conditions, and chemicals Bacterial strains, including S. mutans UA159 and the various Gtf mutants described below, S. sanguinis SK36, and S. gordonii , were grown statically at 37 °C with 5% CO 2 in Todd-Hewitt (TH) broth, on THB agar plates, or in chemically defined biofilm medium supplemented with 1% sucrose 33 .
Aggregatibacter actinomycetemcomitans VT1169 and Actinomyces naeslundii T14VJ1 were grown in Tryptic soy broth with yeast extract (TYE). Small molecule candidates were purchased from either ChemBridge Corporation or Enamine Ltd in the USA. Stock solutions were prepared in dimethyl sulfoxide (DMSO) at 10 mM and arrayed in a 96-well format for biological screening. S. mutans biofilm formation and inhibition assays Biofilm assays using 96-well flat-bottom polystyrene microtiter plates were performed to evaluate S. mutans biofilm formation under various concentrations of the small-molecule inhibitors, as described 34 , 35 . Each assay was replicated three times. The minimum biofilm inhibitory concentration (MBIC) of each compound was determined by serial dilution. The most active compounds identified from the tested candidates were selected for further examination. Construction of S. mutans Gtf mutants The GtfB and GtfC single mutants and the GtfBC double mutant, in which each gtf was replaced with a kanamycin resistance cassette, aphA3 (encoding an aminoglycoside phosphotransferase), were gifts from Dr. Robert Burne’s laboratory, University of Florida, Gainesville, FL. The GtfD mutant was constructed by an overlapping PCR ligation strategy using an erythromycin resistance cassette isolated from the IFDC2 cassette 36 . In brief, a 1-kb DNA fragment upstream of gtfD was PCR amplified with the primer pair GtfD-UpF1 and GtfD-UpR-ldh, while a 1-kb DNA fragment downstream of gtfD was PCR amplified with the primer pair GtfD-DnF-erm and GtfD-DnR1. The erythromycin cassette was PCR amplified with the primer pair ldhF and ermR. With the primer pair GtfD-UpF and GtfD-DnR, overlapping PCR was then used to join the three fragments through their overlapping regions (primers listed in the supplemental data). The resulting 2.8-kb ΔgtfD / erm amplicon was transformed into S. mutans UA159, and transformants were selected on THB plates containing erythromycin after 48 h of incubation. The GtfBD and GtfCD double mutants were constructed by transforming the GtfB and GtfC single mutants with the ΔgtfD / erm amplicon, followed by selection of kanamycin- and erythromycin-resistant colonies. The in-frame insertion of erm in place of gtfD for each mutant allele was verified by DNA sequencing analyses. The mutants were further validated by the production of the respective Gtfs. Inhibition of the activity of Gtfs determined by zymographic assays A well-established zymographic assay was used to determine the enzymatic activity of Gtfs 37 . In brief, overnight S. mutans UA159 cultures were diluted 1:100 in fresh 5 mL THB. Bacteria were grown to an OD 470 of 1.0 and spun down by centrifugation at 4 °C; culture supernatants were collected, filtered through a 0.22-μm-pore-size membrane to remove residual bacterial cells, and dialyzed at 4 °C against 0.02 M sodium phosphate buffer (pH 6.8) with 10 μM phenylmethylsulfonyl fluoride (PMSF), followed by a second dialysis against 0.2 mM sodium phosphate containing 10 μM PMSF. After dialysis, 4 mL of each sample was concentrated to 40 μL with a 100 K Amicon Ultra-4 centrifugal filter (Merck Millipore Ltd.). For electrophoresis and zymographic analysis, 10 μL of each concentrated culture supernatant was applied to 8% SDS-PAGE in duplicate. One gel was used for protein staining with Coomassie blue dye, while the other was subjected to the zymographic assay as described 37 .
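The percent-inhibition values quoted for the zymograms (Fig. 2) reduce to simple arithmetic on the ImageJ band intensities described in the analysis steps that follow; here is a minimal sketch, with hypothetical intensity readouts in place of the real measurements.

```python
# Percent inhibition of glucan production from zymogram band intensities,
# each treated lane referenced to its DMSO control lane (values hypothetical).
def percent_inhibition(treated, dmso_control):
    return 100.0 * (1.0 - treated / dmso_control)

bands = {
    "#G16 vs GtfB": (40.0, 100.0),   # (treated intensity, DMSO intensity)
    "#G43 vs GtfB": (20.0, 100.0),
    "#G16 vs GtfC": (30.0, 100.0),
    "#G43 vs GtfC": (20.0, 100.0),
}

for label, (treated, control) in bands.items():
    print(f"{label}: {percent_inhibition(treated, control):.0f}% inhibition")
```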
For zymogram analysis, following electrophoretic separation, gels were washed twice for 15 min each with renaturing buffer containing 2.5% Triton X-100. Gels were then incubated for 18 h at 37 °C with 0.2 M sodium phosphate buffer (pH 6.5) containing 0.2% dextran T70, 5% sucrose, and varying concentrations of the small-molecule inhibitors. The reactions were stopped by washing the gels with distilled water at 4 °C for 10 min, and digital images of the resultant white opaque glucan bands were visualized against a black background and captured using a digital camera. ImageJ software was used to analyze the intensity of each band, and the percent inhibition by the lead compounds was calculated by comparing band intensities between compound- and DMSO-treated groups. Expression and purification of GtfB and GtfC catalytic domains The DNA fragments coding for the GtfB catalytic domain (residues 268–1074) and the GtfC catalytic domain (residues 295–1103) were PCR amplified with the primer pairs GtfB-BamH1-F/GtfB-Xho1-R and GtfC-BamH1-F/GtfC-Xhol1-R, respectively (Supplemental Table 1 ), using S. mutans genomic DNA as a template. Each amplified fragment was cloned into the pET-sumo vector and transformed into Escherichia coli BL21(DE3). The recombinant strains were grown to OD 600 = 0.8 in LB medium and induced with 0.1 mM IPTG at 18 °C overnight. Cell lysates prepared from the overnight-grown E. coli cells were subjected to protein purification using a HiTrap TM column (Ni 2+ affinity) followed by gel filtration as described 38 , 39 . Octet Red analysis Full kinetic binding analysis was performed for the binding of #G43 to GtfB and GtfC. The equilibrium dissociation constant, K D , was determined using the Octet® Red96 system (ForteBio, Menlo Park, CA). Phosphate buffer with 3.5% (w/v) DMSO was used. The proteins were captured on dip-and-read Anti-Penta-HIS (HIS1K) Biosensors, which consist of a high-affinity, high-specificity Penta-His antibody from Qiagen pre-immobilized on a fiber-optic biosensor. The binding of #G43 at 3-fold serial dilutions in phosphate buffer from 200, 66.6, 22.2, 7.4, and 2.5 to 0 µM was assessed. The ForteBio Octet analysis software (ForteBio, Menlo Park, CA) was used to generate the sensorgrams and monitor the accuracy of the analysis. Cell viability of S. mutans, S. gordonii, S. sanguinis, Aggregatibacter actinomycetemcomitans and Actinomyces naeslundii The effects of the lead small molecules on cell viability were examined as described 34 . The number of colony-forming units (CFU) per milliliter of each sample treated with selected compounds at different concentrations was enumerated after incubation for 24 h at 37 °C and compared to the values obtained from the DMSO control group. Overnight broth cultures were transferred by 1:50 dilution into fresh THB medium and allowed to grow to mid-exponential phase (OD 470 = 0.6) before transfer to 96-well plates containing the desired concentrations of the test compounds. After 16 h of incubation, bacterial growth was measured at OD 470 and normalized to the DMSO control (100%). Growth of commensal and dual-species biofilms Overnight broth cultures were transferred by 1:50 dilution into fresh THB medium and allowed to grow to mid-exponential phase (OD 470 = 0.6) before transfer to 96-well plates. For mono-species biofilms, a 1:100 dilution of the individual cultures was added to the 96-well plate containing the desired concentrations of compounds or DMSO.
After incubation for 16 h, the biofilms were gently washed three times with PBS and quantified by crystal violet staining. For dual-species biofilms, a 1:100 dilution of S. mutans and a 1:10 dilution of the commensal species ( S. sanguinis or S. gordonii ) were used as the inoculum to seed the 96-well plate containing the desired concentrations of compounds or DMSO. After incubation for 16 h, the biofilms were scraped off with a sterile spatula, suspended in 100 µL of PBS, and vortexed. To determine the total number of viable bacterial cells (CFU), 100 μL of the dispersed 16-h biofilms was serially diluted in potassium phosphate buffer and plated in duplicate on blood agar plates. The commensal species could be differentiated from S. mutans by the characteristic green rings formed around their colonies. Rat model of dental caries S. mutans in vivo colonization and virulence were evaluated using a rat model of dental caries as previously described 40 , 41 , 42 . Fischer 344 rats were bred and maintained in Trexler isolators. Rat pups were removed from the isolators at 20 days of age and randomly assigned to two groups of six animals, with or without treatment with the potent inhibitor #G43. Rats were then infected with S. mutans UA159 for three consecutive days and provided a caries-promoting Teklad Diet 305 containing 5% sucrose (Harlan Laboratories, Inc., Indianapolis, IN) and sterile drinking water ad libitum. One group of rats was then treated with the vehicle control, while the other group was topically treated with the lead compound at 100 µM twice daily for 4 weeks beginning 10 days post infection. Following each treatment, drinking water was withheld for 60 min. Animals were weighed at weaning and at the termination of the experiment. The animals were euthanized, and their mandibles were excised for microbiological analysis of plaque samples on MS and blood agar plates and for scoring of caries by the method of Keyes 43 . All experimental protocols were approved by the University of Alabama at Birmingham Institutional Animal Care and Use Committee. The methods were carried out in accordance with the relevant guidelines and regulations. Synthesis of small molecules Protocols used to synthesize the lead compound and its derivatives are described in the supplementary information. Statistical Analysis The analysis of the in vitro experimental data was performed by ANOVA and Student’s t test using SPSS 11.0 software (SPSS Inc., Chicago, IL). Statistical significance in mean caries scores, colony-forming units (CFU) per mandible, and body weights between the two groups of rats was determined by one-way ANOVA with the Tukey–Kramer multiple comparison test using the InStat program (GraphPad Software). Differences were considered significant when a value of P ≤ 0.05 was obtained.
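The significance testing just described ran in SPSS and InStat; an equivalent check is straightforward in Python. A minimal sketch with hypothetical triplicate readings, not the study's data:

```python
# One-way ANOVA across groups, then a two-sample t test between the DMSO
# control and one treatment; all OD562 values below are invented.
from scipy import stats

dmso     = [1.02, 0.98, 1.05]   # vehicle control
g43_25uM = [0.31, 0.28, 0.35]   # 25 uM #G43
g43_12uM = [0.55, 0.60, 0.52]   # 12.5 uM #G43

f_stat, p_anova = stats.f_oneway(dmso, g43_25uM, g43_12uM)
t_stat, p_ttest = stats.ttest_ind(dmso, g43_25uM)

print(f"ANOVA : F = {f_stat:.1f}, P = {p_anova:.4f}")
print(f"t test: t = {t_stat:.1f}, P = {p_ttest:.4f}")
print("significant" if p_ttest <= 0.05 else "not significant")   # P <= 0.05 criterion
```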
University of Alabama at Birmingham researchers have created a small molecule that prevents or impedes tooth cavities in a preclinical model. The inhibitor blocks the function of a key virulence enzyme in an oral bacterium, a molecular sabotage that is akin to throwing a monkey wrench into machinery to jam the gears. In the presence of the molecule, Streptococcus mutans—the prime bacterial cause of the tooth decay called dental caries—is unable to make the protective and sticky biofilm that allows it to glue to the tooth surface, where it eats away tooth enamel by producing lactic acid. This selective inhibition of the sticky biofilm appears to act specifically against S. mutans, and the inhibitor drastically reduced dental caries in rats fed a caries-promoting diet. "Our compound is drug-like, non-bactericidal and easy to synthesize, and it exhibits very potent efficacy in vivo," the researchers explained in an article in Scientific Reports. It is "an excellent candidate that can be developed into therapeutic drugs that prevent and treat dental caries." About 2.3 billion people worldwide have dental caries in their permanent teeth, according to a 2015 Global Burden of Disease study. Current practices to prevent cavities, such as mouthwash and tooth brushing, indiscriminately remove oral bacteria through chemical and physical means, and have limited success. Caries is the Latin word for rottenness. "If we have something that can selectively take away the bacteria's ability to form biofilms, that would be a tremendous advance," said Sadanandan Velu, Ph.D., associate professor of chemistry in the UAB College of Arts and Sciences, and a lead researcher in the study. "This is particularly exciting in the broad sense of targeting microbiota using chemical probes tailored to the specific pathogen within a complex microbial community," said Hui Wu, Ph.D., professor of pediatric dentistry, UAB School of Dentistry, director of UAB Microbiome Center, and a lead investigator in the study. "Successful development of this selective lead inhibitor in the dental setting offers a proof of concept that selective targeting of keystone bacteria is promising for the design of new treatments," Wu said. "This is relevant for many elusive human diseases as the microbiome is being linked to overall health and disease." Wu's expertise is bacteriology and biochemistry, and Velu's is structure-based drug design. Their interdisciplinary study also included researchers from the Department of Microbiology in the UAB School of Medicine. Research details The glucan biofilm is made by three S. mutans glucosyltransferase, or Gtf, enzymes. The crystal structure of the GtfC glucosyltransferase is known, and the UAB researchers used that structure to screen—via computer simulations—500,000 drug-like compounds for binding at the enzyme's active site. Ninety compounds with diverse scaffolds showing promise in the computer screening were purchased and tested for their ability to block biofilm formation by S. mutans in culture. Seven showed potent, low-micromolar inhibition, and one, #G43, was tested more extensively. #G43 inhibited the activity of enzymes GtfB and GtfC, with micromolar affinity for GtfB and nanomolar affinity for GtfC. #G43 did not inhibit the expression of the gtfC gene, and it did not affect growth or viability of S. mutans and several other oral bacteria tested. Also, #G43 did not inhibit biofilm production by several other oral streptococcal species. 
In the rat model of dental caries, animals on a low-sucrose diet were infected with S. mutans and their teeth were treated topically with #G43 twice a day for four weeks. The #G43 treatment caused very significant reductions in enamel and dentinal caries. "In conclusion," Wu and Velu wrote in their paper, "using structure-based design, we have developed a unique low-micromolar biofilm inhibitor that targets S. mutans Gtfs through binding to key virulence factors, Gtfs."
10.1038/s41598-017-06168-1
Nano
Nanoscale view of energy storage
Tarun C. Narayan et al. Direct visualization of hydrogen absorption dynamics in individual palladium nanoparticles, Nature Communications (2017). DOI: 10.1038/ncomms14020 , www.nature.com/articles/ncomms14020 Journal information: Nature Communications
http://www.nature.com/articles/ncomms14020
https://phys.org/news/2017-01-nanoscale-view-energy-storage.html
Abstract Many energy storage materials undergo large volume changes during charging and discharging. The resulting stresses often lead to defect formation in the bulk, but less so in nanosized systems. Here, we capture in real time the mechanism of one such transformation—the hydrogenation of single-crystalline palladium nanocubes from 15 to 80 nm—to better understand the reason for this durability. First, using environmental scanning transmission electron microscopy, we monitor the hydrogen absorption process in real time with 3 nm resolution. Then, using dark-field imaging, we structurally examine the reaction intermediates with 1 nm resolution. The reaction proceeds through nucleation and growth of the new phase in corners of the nanocubes. As the hydrogenated phase propagates across the particles, portions of the lattice misorient by 1.5°, diminishing crystal quality. Once transformed, all the particles explored return to a pristine state. The nanoparticles’ ability to remove crystallographic imperfections renders them more durable than their bulk counterparts. Introduction The development of improved energy storage technologies is crucial for the advancement of a number of industries including large-scale alternative energy, clean transport and portable electronics 1 . Two promising strategies—electrical energy storage in batteries and chemical storage of hydrogen in metals—often rely on solute-induced phase transformations 2 , 3 , 4 . These transformations are generally accompanied by large structural changes from incorporation of the solute atom 5 , 6 . In bulk samples, the large stresses resulting from volume changes cause the formation of several misfit dislocations and eventual fracture, which reduce the cyclability of the system 7 , 8 , 9 , 10 , 11 , 12 . To address these problems, there has been a push towards nanoscale systems, as they have proven to have faster transformation kinetics and are more robust upon repeated charge/discharge cycles 13 , 14 . Recent work suggests that conducting solute introduction and removal at high rates suppresses phase separation and thus causes the reaction to proceed through a continuous solid solution 15 . The resulting lack of phase coexistence and, accordingly, the lack of interfacial strains or defects, helps to explain the increased durability. At lower rates, however, many systems undergo phase separation and thus sustain large stresses 10 , 16 , 17 , 18 , 19 , 20 , 21 . How nanoparticles exhibiting solute-induced phase transformations sustain such high stresses yet remain durable is unclear; notably, in typical systems, solute uptake induces significant volume changes on the order of 7–10% (refs 5 , 6 , 22 ). Palladium hydride serves as an excellent model to understand solute-induced phase transitions. This system is characterized by one of the oldest and most well-studied solute-driven phase transitions, with physics that closely parallel those of Li intercalation and deintercalation compounds, such as LiNiMnO 4 (refs 23 , 24 , 25 ). Moreover, the palladium–hydrogen system shows fairly fast kinetics at readily attainable temperatures and pressures, allowing more accessible probing of the phase transformation thermodynamics 26 . PdH x generally exists in two face-centered cubic phases: a hydrogen-poor α phase existing at lower H 2 pressures and a hydrogen-rich β phase existing at higher H 2 pressures.
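For a cubic lattice such as PdH x , the volume change follows directly from the linear lattice expansion: the 3.7% increase in lattice constant noted below gives ΔV/V = (1.037)³ − 1 ≈ 11.5%, the same order as the 7–10% quoted above for typical solute-uptake systems. A one-line check:

```python
# Volume change of a cubic lattice from its linear lattice-constant expansion.
da_over_a = 0.037                            # alpha -> beta lattice expansion in Pd
dv_over_v = (1.0 + da_over_a) ** 3 - 1.0     # exact cubic relation, ~3x da/a to first order
print(f"Volume change: {100 * dv_over_v:.1f}%")   # prints ~11.5%
```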
The phase transformation behaviour of PdH x is well known in the bulk, but the changes at the level of individual nanoparticles are only now starting to be addressed, thanks to the development of several single-particle techniques, including in situ transmission electron microscopy (TEM) 23 , 27 , 28 , plasmonic nanospectroscopy 29 , 30 and coherent X-ray diffractive imaging 21 . These studies have demonstrated that single-crystalline particles do not exhibit phase coexistence at equilibrium 23 , 29 , in contrast to multiply twinned particles 27 . Recent X-ray diffraction experiments have captured intermediates during the hydrogenation reaction, revealing the arrangement of the α and β phases and their corresponding strain profiles at one step during the hydrogenation reaction 21 . However, none of these studies reveals, in real time, how the α phase transforms into the β phase. Here we conduct high-resolution dynamic studies of the α to β transformation on the subparticle level. Our results not only give structural insights into the reaction intermediates but help explain the high durability of such nanoparticles in energy storage devices. An environmental TEM serves as an effective tool to study the hydrogenation of palladium in situ 31 , 32 . The ability to flow in hydrogen gas at pressures up to 600 Pa with a variable temperature stage allows us to study both structural and spectroscopic properties as a reaction occurs. For example, in palladium, the lattice constant increases by 3.7% and the bulk plasmon resonance shifts by 2 eV upon transformation from the α to the β phase 6 , 23 , 27 , 33 , 34 . Techniques such as selected area electron diffraction (SAED), dark field (DF) imaging, and scanning TEM (STEM) allow insight into the particle structure and crystallography. Electron energy loss spectroscopy (EELS) quantifies the energy lost by the electron beam as it excites a variety of processes, characterizing electronic changes in Pd 35 . Combined, these techniques allow us to image particles with sub-nanometre resolution and permit thorough structure–function correlation. Here, we first use a combination of STEM and EELS to image the hydrogen absorption process in single-crystalline Pd cubes in real time. We find that the reaction proceeds through a nucleation-and-growth pathway where the β phase nucleates in one or more corners of the cube before establishing a (100) phase front. We then study nanocubes using DF imaging and SAED after freezing the reaction while it is in progress, to examine the various reaction intermediates in greater depth (see Supplementary Methods and Supplementary Discussion for details). This analysis suggests the development of a lattice misorientation, which disappears upon completion of the transformation. SAED patterns of representative particles that have been loaded and unloaded twice show that the diffraction spots sharpen upon loading, further underscoring that the completion of the solute absorption process can reverse the crystal quality degradation induced during the α to β transformation. Results Real-time monitoring of the phase transformation We start by preparing single-crystalline palladium nanocubes using previously published procedures 36 , 37 . By using the smaller nanocubes as seeds, we prepare cubes ranging from about 15 to 80 nm in size. We deposit the particles on an amorphous SiO 2 grid for analysis in the electron microscope and clean them of organic contaminants using a 25% oxygen in argon plasma.
A similar cleaning procedure has been shown to leave the surface ligand-free 29 . The SiO 2 substrate does not contaminate in the presence of hydrogen and offers a featureless background for EELS in the relevant energy range 38 , making it ideal for in situ TEM experiments. The resulting sample is shown in Supplementary Fig. 1 . After introducing the sample into the microscope, we examine it with STEM-EELS in a H 2 atmosphere to understand the real-time progression of the hydrogenation reaction. At the voltage and camera length used in this experiment, STEM images show notable diffraction contrast, as elaborated in the Supplementary Methods and Supplementary Fig. 2 . Depending on the precise orientation of the particle, either the α or β phase can appear brighter. The difference likely stems from the difference in the excitation error for the spots captured by the annular dark-field STEM detector. By temporarily positioning the beam at different locations in the particle, we collect the EEL spectra and thus assign phases to the regions of differing contrast. An example of this procedure is shown in Supplementary Fig. 3 . STEM time series from three representative particles with edge lengths of 20, 36 and 43 nm are shown in Fig. 1 . Regardless of particle size, the reaction begins with the formation of a β-phase nucleus in one or more corners of the particle. While the orientation of the interface between the β nuclei and the α-phase matrix cannot be deduced from the STEM images, the diagonal nature of the phase boundary is consistent with a (111)-type interface. This interface has also been shown in a prior study that demonstrates a coherent (111)-oriented phase boundary between the α and β phases during early stages of the transformation in an ∼ 100 nm palladium cube 21 . In these cubic particles, the observed morphology of phase nucleation and growth does not follow the spherical shell mechanism previously suggested for palladium nanoparticles 23 , but rather resembles the spherical cap model proposed for olivine-based cathodes in lithium-ion batteries 39 . Compared with the spherical shell model, which predicts coherent phase transitions for particles smaller than 35 nm (ref. 23 ), the spherical cap model leads to a lower elastic penalty for the coherent existence of an interface between the α and β phases. It is therefore reasonable to assume that cubic nanoparticles larger than 35 nm will still maintain coherency during hydrogen loading and unloading. Figure 1: Snapshots of scanning transmission electron microscopy movies of the phase transformation. Still frames of the phase transformation as followed with scanning transmission electron microscopy (STEM) for three different particles of sizes ( a ) 20 nm, ( b ) 36 nm and ( c ) 43 nm accompanied by their respective bright field transmission electron microscope images. The correspondence between each region and its respective phase was verified using electron energy loss spectroscopy. The dotted lines represent the approximate locations of the phase boundaries. In some images, the cube drifts out of the field of view briefly, thus resulting in the image being cut off. The scale bar is 25 nm in each image. The particles shown in a , b contain a single nucleus of the β phase, whereas the particle in c contains two nuclei. The preferential nucleation at the corners of the cubes could have thermodynamic or kinetic origins.
In a previous study on the hydrogenation of palladium nanocrystals, a phase field model suggested that the corners of a cuboidal particle are under tensile strain 21 . Such strain lowers the enthalpy of the α- to β-phase transition and promotes preferential hydrogen uptake in the corners. Prior work has also shown that hydrogen absorption into 111-terminated octahedra occurs more quickly than into 100-terminated cubes 40 . It is therefore likely that hydrogen absorption will be faster through the ‘111-like’ surfaces at the rounded corners of the cubes, which would again facilitate hydrogen uptake through the particle corners. In the two smaller cubes shown in Fig. 1a,b , only one β-phase nucleus appears to form. The nucleus in the 36 nm cube shown in Fig. 1b in particular seems to have a (111) type interface at the corner, but spills out across the bottom edge of the cube to nearly establish a (100) phase front across the particle. In the first 26 s, it appears that the β-phase nucleus does not grow horizontally, but grows diagonally, suggesting that the phase front moves faster along 〈111〉 or 〈110〉 than it does along 〈100〉. Once the phase front is established, as in the image taken after 77 s, it is very stable and does not suffer from reorientations. An interface oriented along (100) is consistent with a coherent process, as a (100) phase boundary minimizes the elastic energy penalty required to establish a coherent interface in an infinite medium (as elaborated in the Supplementary Discussion and Supplementary Fig. 4 ) 41 . This prediction corroborates earlier TEM work on β phase coherent precipitates in a palladium foil 42 , suggesting that the α to β transition proceeds coherently. Once the phase front nears the edge of the particle, the α phase is confined to the corners of the nanocube and eventually squeezed out. The smaller cube in Fig. 1a exhibits many similar tendencies. Since the particle is smaller, the β phase spreads across one face of the cube more quickly and establishes a {100} phase front before the {111} phase front has finished growing, as suggested by the curved interface. The final steps closely resemble those of the 36 nm cube. In the larger particle shown in Fig. 1c , the β-phase nuclei in the opposite corners of the cube connect to form the (100) interface. The phase front initially reorients itself continuously but eventually stabilizes in one of the 〈100〉 directions. The β-phase front then begins to propagate across the particle slowly. For example, from 0 to 76 s, approximately 20% of the particle has converted from the α to the β phase. In the next 200 s, approximately 50% of the particle has transformed, meaning that 30% of the particle has been hydrogenated in that time period; the transformation rate during this time has reduced by more than a factor of two. As such, we see that growth is faster at earlier stages, so the phase transition is unlikely to be surface limited as the (100) phase front moves across the particle. In addition, we note that the diffusion constants of hydrogen in the α and β phases at about 250 K are 7 × 10 −8 cm 2 s −1 and 2 × 10 −8 cm 2 s −1 , respectively, meaning that hydrogen could travel 100 nm in under 1 ms (refs 26 , 43 ). The rate of the α-to-β phase transformation is thus not limited by hydrogen diffusion or surface reactions, but by the slow motion of the phase boundary. After a slow progression through the bulk of the particle, the later stages of the phase transformation occur very quickly. 
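The conclusion that the transformation is not diffusion limited can be sanity-checked with the standard random-walk estimate t ≈ L²/(2D) (the exact prefactor depends on geometry), using the diffusivities cited above:

```python
# Time for hydrogen to diffuse L = 100 nm, using t ~ L^2 / (2 D).
L_cm = 100e-7                                        # 100 nm in cm
for phase, D in [("alpha", 7e-8), ("beta", 2e-8)]:   # cm^2/s at ~250 K
    t_ms = 1e3 * L_cm**2 / (2.0 * D)
    print(f"{phase}: ~{t_ms:.1f} ms")                # ~0.7 ms and ~2.5 ms
```

Either way the result is a few milliseconds or less, orders of magnitude shorter than the hundreds of seconds the phase front takes to cross a particle, consistent with boundary-limited kinetics.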
Upon nearing the edge of the particle, the α phase is restricted to a corner of the particle, as seen in the 329 and 334 s snapshots in Fig. 1c . The morphologies of the two phases in this stage resemble those seen during the initial stages of the reaction, implying that the beginning and end of the reaction occur by a similar mechanism. An interesting feature of the STEM movies is that, in some cases, the contrast inverts between the α and β phases; the correspondence between regions of different contrast and their EEL signatures has been verified as in Supplementary Fig. 3 . This effect can be seen between the 46 and 77 s time steps of Fig. 1b and the 272 and 324 s time steps of Fig. 1c . In both cases, the contrast inversion occurs as the β phase propagates across the sample. Since the features in our STEM images arise from diffraction contrast, the contrast inversion could arise from a lattice reorientation and/or deformation. This hypothesis is further supported by high-resolution electron diffraction, as described below. Structural analysis of reaction intermediates With this general understanding of the dynamic phase transformation, we then investigated the structure of the reaction intermediates to gain more detailed insight into the mechanism. To do so, we used a combination of displaced-aperture dark field (DADF) imaging and SAED. We first rapidly increased the hydrogen partial pressure to 500 Pa at −35 °C, which is slightly above the pressure required for the α to β transformation. After about 2 min, the nanoparticles were rapidly cooled to 100 K, while slowly purging the H 2 gas from the sample compartment. At 100 K, the palladium surface is catalytically inactive towards H 2 splitting and recombination, and we therefore effectively trap the hydrogen atoms inside the particles 44 . Since the diffraction spots shift inward by approximately 3.5% upon transitioning to the β phase, spots arising from the two phases are easily resolvable. Introducing an objective aperture allows for the selection of electrons diffracting from a single phase; these electrons can then be used to generate a DADF image, as seen in Fig. 2a . Overlaid dark-field images collected from the outside (blue) and inside (red) diffraction spots of 24 different particles are shown in Fig. 2b–g . The particles are arranged in order of increasing hydrogen content within groups of similar size. The dark-field image constructed using both the α- and β-phase spots is shown in Supplementary Fig. 5 . We see immediately that the DF images closely parallel those seen in the dynamic STEM measurement, and that cubes of different sizes transform by a similar mechanism. Nucleation of the β phase appears to occur at the corners of the cube as seen in columns i and ii. The cubes we observed largely had two or more nuclei, especially in the case of larger cubes, which suggests that nucleation and movement of the phase boundary occur at comparable rates. We also note that the striping seen in the β-phase nuclei, most prominently in the second column of Fig. 2c , is consistent with intensity fringes due to a varying thickness of these crystallographic domains 45 . A varying thickness is consistent with the spherical cap model, where the β-phase cap thickness progressively decreases away from the cube corners along 〈110〉.
The nucleation step is followed by growth and coalescence of nuclei into a (100) phase front seen in columns ii and iii, which then propagates across the particle and, in some cases, leaves a small α-phase region in the corner of the particle as seen in the fourth column of Fig. 2c,d . Figure 2: Dark field images of reaction intermediates. ( a ) Representative electron diffraction pattern. The red and blue circles correspond to the positions of the objective apertures that give rise to the images shown to the right. ( b – g ) Overlaid dark field images of 24 different particles grouped together by size range obtained from the outside spot (blue) and the inside spot (red). Within each row, the particles are approximately arranged by increasing transformed fraction to attempt to recreate the phase transformation. The scale bar is 50 nm in each image. Full size image Figure 3 displays representative diffraction patterns corresponding to the dark field images shown in Fig. 2d . The first column of Fig. 3b shows the diffraction pattern for a cube with a (111)-type interface. A zoomed-in image of the 020 spot is shown in the first column of Fig. 3c . We can see that there are only two spots—one corresponding to the β phase and one corresponding to the α phase—that essentially lie on the same radial line. Figure 3: Diffraction patterns of selected reaction intermediates. ( a ) Overlaid dark field images shown in Fig. 2d . The scale bar is 50 nm in each image. ( b ) Diffraction patterns corresponding to the dark field images are shown. ( c ) The diffraction patterns shown in b zoomed in on the region highlighted with the red square. The white dotted lines correspond to arcs of constant lattice parameter. Full size image The dark-field image in the second column of Fig. 3a corresponds to the growth of the β-phase nuclei to the point at which they begin to form the (100) phase front. The two growing β-phase nuclei are similar in appearance to the β-phase nucleus in Fig. 1b at 46 s. In this case, the diffraction pattern in the second column of Fig. 3b , and the corresponding zoomed-in spot in Fig. 3c , is similar to that seen in the first panel, suggesting that coherency is likely maintained at this stage. The third column of Fig. 3b shows a representative diffraction pattern of a particle that is roughly 50% transformed into the β phase. Each spot appears to have smeared out into a streak. Closer examination of an individual spot as in the third column of Fig. 3c shows that each spot has split into two α–β pairs. A fit of the two sets of lattices shows that one is rotated by approximately 1.5° with respect to the other. This secondary set of spots implies that a portion of the lattice exists in a slightly different orientation from the rest of the particle. Our findings are consistent with an earlier report of the hydrogenation of palladium films, in which the authors find that a majority of the crystallites are rotated 1.5–3° with respect to their original orientation 46 . The rotation in the diffraction pattern corresponds to a rotation in the plane of the substrate. A slight additional out-of-plane component, which is difficult to detect using this pattern, could alter the diffraction condition and thus contribute to the change in contrast that is observed in the STEM images. There is not enough evidence to determine the nature of this rotation, for example to distinguish between phenomena such as dislocations or strained lattice rotations.
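The ~1.5° figure above comes from fitting the two sets of lattice spots against each other. As an illustration of how such an angle can be extracted, the sketch below averages the signed angle between matched spot vectors; the spot coordinates are invented for the example, not measured values from the paper.

```python
import numpy as np

# Estimate the in-plane rotation between two lattices from matched
# diffraction-spot coordinates (positions relative to the central beam).
# The coordinates below are synthetic, generated with a 1.5 degree
# rotation to show that the estimator recovers it.

spots_main = np.array([[100.0, 0.0], [0.0, 100.0], [100.0, 100.0]])

theta = np.deg2rad(1.5)
rot = np.array([[np.cos(theta), -np.sin(theta)],
                [np.sin(theta),  np.cos(theta)]])
spots_second = spots_main @ rot.T  # spots of the rotated lattice

angles = [np.degrees(np.arctan2(b[1], b[0]) - np.arctan2(a[1], a[0]))
          for a, b in zip(spots_main, spots_second)]
print(f"estimated rotation: {np.mean(angles):.2f} degrees")  # ~1.50
```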
Recent calculations and experiments in highly mechanically stressed nanoscale systems have shown that the barrier to creating partial dislocations decreases with decreasing particle size, although the spherical cap morphology seen here suggests that there may not be enough of a driving force to nucleate a dislocation 39 , 47 , 48 . As the α phase becomes isolated in a corner of the particle as in the fourth column of Fig. 3a , the diffraction spot in Fig. 3c now resembles that taken at the early stage of the transformation; the only difference is that now the β-phase spots are more intense than the α-phase spots. There are no longer two sets of lattices in the particle that show different orientations. This absence, along with the close resemblance of the frozen-in intermediates to the real-time images, suggests that the particles remove crystallographic imperfections during the loading process, as the misoriented part of the nanoparticle either realigns with the rest of the particle or the phase front pushes the rotated region out of the particle. The diffraction patterns of other particles with similar morphologies lead to the same conclusions. Crystallographic ‘healing’ in nanoparticulate phase transformations has also been observed in the LiNi 0.5 Co 0.5 O 2 system 10 . In this case, X-ray imaging shows that edge dislocations formed by the coexistence of phases appear to migrate to the surface of a particle during lithiation. Unlike the bulk, where defects remain trapped inside, nanoparticles have a nearby surface to which defects can migrate, reducing the energy penalty required to annihilate dislocations. Equilibrium analysis of hydriding and dehydriding To check that the continual probing of particles with the electron beam does not severely alter the reaction process, we investigate nanocubes after they have reached equilibrium. Pressure-composition isotherms are collected using a series of SAED patterns at different hydrogen pressures for particles with edge lengths of 19, 33, 48 and 74 nm, as shown in Fig. 4a . At each pressure, the system is allowed to settle for 30 min, consistent with the equilibration times shown in Supplementary Fig. 6 . These isotherms confirm both that the particle is in a single phase at equilibrium and that it remains single-crystalline through the α to β and β to α phase transitions. We see no evidence of a secondary rotated lattice. The retention of crystallinity is consistent with our earlier findings 23 . Figure 4: Diffraction-based pressure-composition isotherms. ( a ) Representative pressure-composition isotherms collected using electron diffraction for four different particle sizes. The transmission electron microscopy images in the background of each isotherm were taken after two cycles of hydrogenation and dehydrogenation. The separations between pairs of spots are represented as percentage changes from the separation in the reference pattern of the unloaded state. The error bar corresponds to the standard error of the distribution of changes in separation. The scale bar is 40 nm in each image. ( b ) Change in the width of diffraction spots upon cycling. The width of each spot is referenced to the original spot in the room temperature diffraction pattern. The error bar corresponds to the standard deviation of this distribution of width changes. The circular points correspond to the percentage change in spot width and are thus referenced to the left axis.
The triangular points correspond to the hydrogen loading and are thus referenced to the right axis. Note that the average peak width decreases upon loading and increases upon unloading, suggesting that the hydrogen absorption process increases crystallinity. Full size image The isotherms, however, do not capture crystal quality during the course of the reaction. To better understand the impact of the phase transition on this parameter, we cycle the particles with hydrogen and monitor the average change in the peak width of the observable spots as referenced to the diffraction pattern of the pristine particle. A representative data set is shown in Fig. 4b for a 31 nm particle (similar plots for 19 and 47 nm particles are shown in Supplementary Fig. 7 ). The spots become noticeably sharper after the α to β phase transformation; this behaviour is seen consistently upon cycling. Since the reciprocal of the spot width is approximately proportional to the crystallite size, crystal quality increases during the loading process. The cycling data thus confirm that the hydrogen absorption process serves as a mechanism to remove crystallographic imperfections in the particle. Our results corroborate recent work proposing a coherent loading process but an incoherent unloading process 49 . The morphologies we observe are consistent with coherency at the beginning and end of the reaction; furthermore, the spot size at the end of the transformation indicates that the reaction proceeds without formation of many dislocations. We see such results for all particle sizes observed, suggesting that bulk-like incoherent transitions are not present at least up through sizes of ∼ 80 nm for the loading process. Upon desorption, however, the broadening of the diffraction spots shows the process occurred with significant defect formation, potentially indicative of an incoherent mechanism. Using a combination of SAED, DF imaging, STEM and EELS in an environmental TEM, we are able to follow the hydrogenation of palladium in situ and discern the mechanism of the process. Regardless of size, the particles exhibit similar dynamic behaviour. The β phase initially nucleates at one or more corners of the cube. The nucleus then grows across the cube after establishing a (100) phase front. During this process, a portion of the transforming particle reorients, thus giving rise to four or more crystalline domains in the initially single-crystalline particle. After moving across the particle, the β phase segregates the α phase to a corner of the cube, at which point the particle no longer contains reoriented regions. Cycling experiments consisting of two loading and unloading processes confirm that the particle improves its crystal quality during the α to β phase transformation. The ability of a nanoparticle to remove prior imperfections allows it to be an effective medium for energy storage, as it is able to maintain a high degree of crystallinity during the cycling process. Our results demonstrate the utility of nanoscaling for solute-based phase transformations—the small size allows for defects to be pushed out of the particle, which is not possible in the bulk. Data availability All relevant data are available from the authors. Additional information How to cite this article: Narayan, T. C. et al . Direct visualization of hydrogen absorption dynamics in individual palladium nanoparticles. Nat. Commun. 8, 14020 doi: 10.1038/ncomms14020 (2017). 
Publisher's note: Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
In a lab 18 feet below the Engineering Quad of Stanford University, researchers in the Dionne lab camped out with one of the most advanced microscopes in the world to capture an unimaginably small reaction. The lab members conducted arduous experiments, sometimes requiring a continuous 30 hours of work, to capture real-time, dynamic visualizations of atoms that could someday help our phone batteries last longer and our electric vehicles go farther on a single charge. Toiling underground in the tunneled labs, they recorded atoms moving in and out of nanoparticles less than 100 nanometers in size, with a resolution approaching 1 nanometer. "The ability to directly visualize reactions in real time with such high resolution will allow us to explore many unanswered questions in the chemical and physical sciences," said Jen Dionne, associate professor of materials science and engineering at Stanford and senior author of the paper detailing this work, published Jan. 16 in Nature Communications. "While the experiments are not easy, they would not be possible without the remarkable advances in electron microscopy from the past decade." Their experiments focused on hydrogen moving into palladium, a class of reactions known as intercalation-driven phase transitions. This reaction is physically analogous to how ions flow through a battery or fuel cell during charging and discharging. Observing this process in real time provides insight into why nanoparticles make better electrodes than bulk materials and fits into Dionne's larger interest in energy storage devices that can charge faster, hold more energy and stave off permanent failure. Technical complexity and ghosts For these experiments, the Dionne lab created palladium nanocubes, a form of nanoparticle, ranging in size from about 15 to 80 nanometers, and then placed them in a hydrogen gas environment within an electron microscope. The researchers knew that hydrogen would change both the dimensions of the lattice and the electronic properties of the nanoparticle. They thought that, with the appropriate microscope lens and aperture configuration, techniques called scanning transmission electron microscopy and electron energy loss spectroscopy might show hydrogen uptake in real time. After months of trial and error, the results were extremely detailed, real-time videos of the changes in the particle as hydrogen was introduced. The entire process was so complicated and novel that the first time it worked, the lab didn't even have the video software running, leading them to capture their first movie success on a smartphone. Following these videos, they examined the nanocubes during intermediate stages of hydrogenation using a second technique in the microscope, called dark-field imaging, which relies on scattered electrons. In order to pause the hydrogenation process, the researchers plunged the nanocubes into a bath of liquid nitrogen mid-reaction, dropping their temperature to 100 kelvin (minus 280 degrees Fahrenheit). These dark-field images served as a way to check that the application of the electron beam hadn't influenced the previous observations and allowed the researchers to see detailed structural changes during the reaction.
"With the average experiment spanning about 24 hours at this low temperature, we faced many instrument problems and called Ai Leen Koh [co-author and research scientist at Stanford's Nano Shared Facilities] at the weirdest hours of the night," recalled Fariah Hayee, lead co-author of the study and graduate student in the Dionne lab. "We even encountered a 'ghost-of-the-joystick problem,' where the joystick seemed to move the sample uncontrollably for some time." While most electron microscopes operate with the specimen held in a vacuum, the microscope used for this research has the advanced ability to allow the researchers to introduce liquids or gases to their specimen. "We benefit tremendously from having access to one of the best microscope facilities in the world," said Tarun Narayan, lead co-author of this study and recent doctoral graduate from the Dionne lab. "Without these specific tools, we wouldn't be able to introduce hydrogen gas or cool down our samples enough to see these processes take place." Pushing out imperfections Aside from being a widely applicable proof of concept for this suite of visualization techniques, watching the atoms move provides greater validation for the high hopes many scientists have for nanoparticle energy storage technologies. The researchers saw the atoms move in through the corners of the nanocube and observed the formation of various imperfections within the particle as hydrogen moved within it. This sounds like an argument against the promise of nanoparticles but that's because it's not the whole story. "The nanoparticle has the ability to self-heal," said Dionne. "When you first introduce hydrogen, the particle deforms and loses its perfect crystallinity. But once the particle has absorbed as much hydrogen as it can, it transforms itself back to a perfect crystal again." The researchers describe this as imperfections being "pushed out" of the nanoparticle. This ability of the nanocube to self-heal makes it more durable, a key property needed for energy storage materials that can sustain many charge and discharge cycles. Looking toward the future As the efficiency of renewable energy generation increases, the need for higher quality energy storage is more pressing than ever. It's likely that the future of storage will rely on new chemistries and the findings of this research, including the microscopy techniques the researchers refined along the way, will apply to nearly any solution in those categories. For its part, the Dionne lab has many directions it can go from here. The team could look at a variety of material compositions, or compare how the sizes and shapes of nanoparticles affect the way they work, and, soon, take advantage of new upgrades to their microscope to study light-driven reactions. At present, Hayee has moved on to experimenting with nanorods, which have more surface area for the ions to move through, promising potentially even faster kinetics.
www.nature.com/articles/ncomms14020
Nano
Invention opens the door to safer and less expensive X-ray imaging
Qiushui Chen et al. All-inorganic perovskite nanocrystal scintillators, Nature (2018). DOI: 10.1038/s41586-018-0451-1 Journal information: Nature
http://dx.doi.org/10.1038/s41586-018-0451-1
https://phys.org/news/2018-10-door-safer-expensive-x-ray-imaging.html
Abstract The rising demand for radiation detection materials in many applications has led to extensive research on scintillators 1 , 2 , 3 . The ability of a scintillator to absorb high-energy (kiloelectronvolt-scale) X-ray photons and convert the absorbed energy into low-energy visible photons is critical for applications in radiation exposure monitoring, security inspection, X-ray astronomy and medical radiography 4 , 5 . However, conventional scintillators are generally synthesized by crystallization at a high temperature and their radioluminescence is difficult to tune across the visible spectrum. Here we describe experimental investigations of a series of all-inorganic perovskite nanocrystals comprising caesium and lead atoms and their response to X-ray irradiation. These nanocrystal scintillators exhibit strong X-ray absorption and intense radioluminescence at visible wavelengths. Unlike bulk inorganic scintillators, these perovskite nanomaterials are solution-processable at a relatively low temperature and can generate X-ray-induced emissions that are easily tunable across the visible spectrum by tailoring the anionic component of colloidal precursors during their synthesis. These features allow the fabrication of flexible and highly sensitive X-ray detectors with a detection limit of 13 nanograys per second, which is about 400 times lower than typical medical imaging dose rates. We show that these colour-tunable perovskite nanocrystal scintillators can provide a convenient visualization tool for X-ray radiography, as the associated image can be directly recorded by standard digital cameras. We also demonstrate their direct integration with commercial flat-panel imagers and their utility in examining electronic circuit boards under low-dose X-ray illumination. Main The nature of the atomic constituents of a scintillator plays an important role in the radioluminescence process of the material because X-ray absorption increases steeply with atomic number (approximately as its fourth power) 6 . Although a wide range of scintillation materials containing heavy atoms have been characterized in detail for efficient X-ray scintillation, almost all of these materials are bulk crystals grown by the Czochralski method 7 at temperatures above 1,700 °C. For bulk-form scintillators, such as PbWO 4 and Bi 4 Ge 3 O 12 , a certain distance of exciton migration is typically needed to transport charge carriers for subsequent trapping by luminescence centres 8 . However, excessive exciton migration is detrimental because it can cause either radioluminescence afterglow or low-efficiency X-ray scintillation. In addition, conventional activator-doped scintillators, such as thallium-activated CsI (CsI:Tl) and cerium-activated YAlO 3 (YAlO 3 :Ce), cannot produce tunable scintillation because of their fixed transition energies 9 , 10 . Despite enormous efforts, the development of scintillating materials that are low-temperature- and solution-processable, highly sensitive to X-rays and integrable with flexible substrates remains a daunting challenge. Recently, bulk crystals of organic–inorganic hybrid perovskites have been found to exhibit large X-ray stopping power 11 , 12 , 13 , 14 and the ability to convert X-ray photons into charge carriers efficiently 15 , 16 , 17 , 18 . The direct photon-to-current conversion can be attributed to the heavy Pb atom 19 and large electron–hole diffusion lengths available in organic–inorganic hybrid perovskites 20 , 21 , 22 , 23 , 24 , 25 .
We reason that caesium lead halide perovskite nanocrystals 26 , which feature heavy constituent elements and tunable electronic bandgaps in the visible range, could be promising candidates for high-efficiency X-ray scintillation. An appealing aspect of these perovskite nanocrystals is that their unique electronic structures render highly emissive triplet excited states 27 and anomalously fast emission rates 28 . By virtue of the effect of quantum confinement and increased overlap of electron and hole wavefunctions, the spatial distribution of luminescence centres and X-ray-generated excitons can be confined within the Bohr radius of the nanocrystals. Here we report experimental investigations of multicolour X-ray scintillation from a series of all-inorganic perovskite nanocrystals and demonstrate their use for ultrasensitive X-ray sensing and low-dose digital X-ray technology. In a typical bulk scintillator material, incident X-ray photons can interact with heavy atoms (for example, Pb, Tl or Ce) to produce a large number of hot electrons through the photoelectric effect 8 . These charge carriers are quickly thermalized to form low-energy excitons, which can subsequently be transported to defect centres or activators for radiative luminescence (Extended Data Fig. 1a ). We thus predict that high-energy (kiloelectronvolt-scale) X-ray photons can be converted to numerous low-energy visible photons via direct bandgap emissions in lead halide perovskite nanocrystals (Fig. 1a ). To validate this hypothesis, we prepared a series of perovskite nanocrystals (CsPbX 3 , with X = Cl, Br or I) by controlling the reaction of Cs-oleate with different PbX 2 precursors via a hot-injection solution method 29 (Extended Data Fig. 2 ). Transmission electron microscopy reveals a cubic shape of the as-synthesized nanocrystals, with an average size of 9.6 nm (Fig. 1b ). Remarkably, under X-ray beam excitation the perovskite quantum dots (QDs) yield narrow and colour-tunable emissions (Fig. 1c , Extended Data Fig. 3 ). This unique property allows multicolour, high-efficiency X-ray scintillation to be realized (Fig. 1d, e , Extended Data Table 1 ). By contrast, the radioluminescence spectrum of conventional bulk scintillators (CsI:Tl, PbWO 4 , YAlO 3 :Ce and Bi 4 Ge 3 O 12 ) is almost invariable and exhibits a wide emission peak with a large full-width at half-maximum (Extended Data Fig. 1b ). This inherent limitation of conventional scintillators makes it difficult to achieve multicolour visualization of X-ray irradiation. Fig. 1: Full-colour radioluminescence from perovskite nanocrystal scintillators. a , Schematic representation of X-ray-induced luminescence of energy hν (where h is the Planck constant and ν is the frequency), generated in an all-inorganic perovskite lattice with a cubic crystal structure (see main text for details). b , Low-resolution transmission electron microscopy (TEM) image of the as-synthesized CsPbBr 3 nanocrystals. The inset shows a high-resolution TEM image of a single CsPbBr 3 nanocrystal and the corresponding electron diffraction pattern along the [100] zone axis. c , Tunable luminescence spectra of the perovskite QDs under X-ray illumination with a dose rate of 278 μGy s −1 at a voltage of 50 kV. The material compositions of samples 1–12 are CsPbCl 3 (1), CsPbCl 2 Br (2), CsPbCl 1.5 Br 1.5 (3), CsPbClBr 2 (4), CsPbCl 2.5 Br 0.5 (5), CsPbBr 3 (6), CsPbBr 2 I (7), CsPbBr 1.8 I 1.2 (8), CsPbBr 1.5 I 1.5 (9), CsPbBr 1.2 I 1.8 (10), CsPbBrI 2 (11) and CsPbI 3 (12).
The insets show photographs of the thin-film samples 3, 6 and 9, which emit blue, green and red colours, respectively, upon X-ray irradiation. d , Comparison of the optical sensitivity of various scintillator materials in response to exposure to X-rays produced at a voltage of 10 kV. e , CIE (Commission Internationale de l’Eclairage) chromaticity coordinates of the X-ray-induced visible emissions measured for samples 1–12. f , Multicolour X-ray scintillation (left, bright-field imaging; right, X-ray illumination at a voltage of 50 kV) from three types of perovskite nanocrystal scintillator (orange, CsPbBr 2 I; green, CsPbBr 3 ; blue, CsPbClBr 2 ). Full size image Inspired by the bandgap-tunable perovskite nanocrystal scintillators, we successfully developed a flexible prototype device for multicolour X-ray scintillation through a combination of solution processing and soft lithography (Fig. 1f , Extended Data Fig. 3 d, e ). The fabrication of the X-ray-sensing device was made possible by casting the oleate-capped perovskite nanocrystals onto the flexible substrate of interest. This flexible substrate allowed rapid X-ray multicolour visualization (Supplementary Video 1 ), which is inaccessible with current bulk scintillators. We then compared the sensitivity of the perovskite nanocrystals to X-ray illumination with that of several of the most widely used commercial bulk scintillators (CsI:Tl, PbWO 4 , YAlO 3 :Ce and Bi 4 Ge 3 O 12 ). We used low-dose irradiation of 5.0 μGy s −1 (all doses refer to doses in air) at 10 kV and 5 μA and found that the ability of CsPbBr 3 nanocrystal thin films (thickness of about 0.1 mm) to convert X-ray photons into visible luminescence is comparable to that of high-efficiency CsI:Tl bulk scintillators (thickness of 5.0 mm), whereas it compares much more favourably (being more intense by a factor of 5 or more) with other bulk scintillators, including PbWO 4 , YAlO 3 :Ce and Bi 4 Ge 3 O 12 (Fig. 1d ). This superior performance is attributed to the large X-ray stopping power and high emission quantum yields of the lead halide QDs. Notably, conventional QDs (for example, CdTe QDs and carbon dots) exhibit low-efficiency X-ray-induced luminescence, possibly due to weak X-ray absorption 30 , and thus are not suitable for practical use as scintillators (Fig. 1d , Extended Data Fig. 4 ). As a point of comparison, we also found that typical bulk single crystals of CsPbBr 3 and CH 3 NH 3 PbBr 3 do not exhibit noticeable visible emission under the same experimental conditions (Fig. 1d , Extended Data Fig. 5 ). The noteworthy scintillation performance of CsPbBr 3 nanocrystals with respect to their bulk counterparts presents a compelling case for investigating the origins of the scintillation process in our system. This behaviour can be explained in part by the lack of exciton confinement in the bulk form, in which discrete or quantized energy levels that give access to visible emission cannot be generated 31 . We further investigated experimentally and theoretically the physical processes that govern the interaction between X-rays and perovskite nanocrystals. As shown in Fig. 2a , we compared the absorption coefficient of the CsPbBr 3 nanocrystals (highest atomic number Z max = 82; Kα = 88.0 keV) as a function of X-ray photon energy with two types of conventional QD (CdTe, Z max = 52, Kα = 31.8 keV; carbon, Z max = 6, Kα = 0.285 keV).
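The heavy-element advantage is quantified in the next sentence through the \(Z_{\text{eff}}^{4}/(AE^{3})\) scaling. As a rough numerical illustration, the sketch below estimates effective atomic numbers using the common electron-fraction power law (exponent ≈ 2.94, a textbook approximation that is not taken from this paper) and compares the resulting relative absorption at a fixed photon energy; the output is order-of-magnitude only, not tabulated attenuation data.

```python
# Rough comparison of photoelectric X-ray absorption for CsPbBr3, CdTe
# and carbon via the Z_eff^4 / (A * E^3) scaling. Z_eff is estimated
# with the electron-fraction power law Z_eff = (sum f_i * Z_i^2.94)^(1/2.94).

MATERIALS = {
    # element: (Z, atomic mass, atoms per formula unit)
    "CsPbBr3": {"Cs": (55, 132.9, 1), "Pb": (82, 207.2, 1), "Br": (35, 79.9, 3)},
    "CdTe":    {"Cd": (48, 112.4, 1), "Te": (52, 127.6, 1)},
    "carbon":  {"C":  (6, 12.0, 1)},
}

def z_eff_and_mean_mass(comp):
    electrons = sum(z * n for z, _, n in comp.values())
    z_eff = sum((z * n / electrons) * z**2.94
                for z, _, n in comp.values()) ** (1 / 2.94)
    mean_a = (sum(a * n for _, a, n in comp.values())
              / sum(n for _, _, n in comp.values()))
    return z_eff, mean_a

for name, comp in MATERIALS.items():
    z_eff, mean_a = z_eff_and_mean_mass(comp)
    weight = z_eff**4 / mean_a  # relative absorption at fixed photon energy
    print(f"{name}: Z_eff ~ {z_eff:.0f}, relative absorption ~ {weight:.3g}")
# CsPbBr3 comes out roughly 2-3x CdTe and ~1,000x carbon at fixed energy.
```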
The nature of heavy atomic constituents is critically important for efficient X-ray scintillation, because X-ray absorption scales with the effective atomic number, Z eff , as \(Z_{\text{eff}}^{4}/(AE^{3})\) , where A is the atomic mass and E is the X-ray photon energy 6 . We thus speculate that the Pb-based perovskite nanocrystals are much more suitable for efficient X-ray absorption than QDs without the Pb component. We carried out an X-ray photoelectron spectroscopic investigation to record the kinetic process of electrons escaping from the CsPbBr 3 nanocrystal upon irradiation with soft X-rays (Fig. 2b ). To reveal the photoionization nature of the X-ray scintillation process under study, we measured the radioluminescence of the perovskite nanocrystals in response to synchrotron radiation (Fig. 2c , Extended Data Fig. 6 , Supplementary Video 2 ). We observed an abrupt enhancement in the scintillation intensity upon excitation at 14 keV, 16 keV and 36 keV, indicating X-ray absorption resonances at the electronic edges of the Pb L, Cs K and Br K shells in the CsPbBr 3 structure. Density functional theory calculations confirmed that the electronic band structure of these perovskite nanocrystal scintillators is tunable, which is associated with the tailorability of their valence band through control of the halide composition (Extended Data Fig. 7 ). The bandgap energy of the perovskite nanocrystal under study lies in the range 1.7–3 eV, suggesting the feasibility of using such a nanomaterial to convert an absorbed dose of ionizing radiation into visible light (Fig. 2d ). In addition, the orbital contour plots of the CsPbBr 3 nanocrystal indicate that the presence of hole-like, surface-vacancy-induced Coulomb-trapping states near the Fermi level, beyond the valence band maximum, is responsible for the energetic confinement of excitons in the perovskite nanocrystal (Extended Data Fig. 8b ). Fig. 2: Mechanistic investigation of X-ray energy conversion by perovskite nanocrystals. a , Measured absorption spectra of CsPbBr 3 , CdTe and carbon as a function of X-ray energy. The attenuation coefficients were obtained from ref. 33 . b , X-ray photoelectron spectroscopic data of the CsPbBr 3 nanocrystals plotted against the binding energy of the electron. The photoemission peaks Cs 3 d , Pb 4 f and Br 3 d are indicated. a.u., arbitrary units. c , Measurement of X-ray-induced luminescence from the perovskite nanocrystals using synchrotron radiation. The electronic edge energies of Pb L, Cs K and Br K (shown as red squares) fall in the X-ray energy range 10–38 keV. The line is a guide for the eye. d , Calculated electronic band structures of the CsPbBr 3 nanocrystal. The inset shows the Brillouin zone of the cubic-phased crystal lattice (see Methods for details). e , Proposed mechanism of X-ray scintillation in a lead halide perovskite nanocrystal. Upon X-ray irradiation, a high-energy electron (red circles, e − ) is ejected from a lattice atom through photoelectric ionization (ionizing radiation creates an energetic electron and a hole in an inner electronic shell). Subsequently, the ejected high-energy electron produces secondary high-energy electrons. The generated hot charge carriers then undergo thermalization and produce low-energy excitons. Next, fast radiative recombination takes place, producing radioluminescence of energy hν in either a singlet (S) or triplet (T) state at the electronic band edge.
f , Energy density on the surface of a CsPbBr 3 cluster as a function of particle distance, d , in the lattice. The red line is a fit with a Gaussian distribution function with a fitting coefficient of η = 0.9752. The particle distance corresponding to the maximum energy density is 10.32 Å. g , Schematic showing the basic design of a perovskite-nanocrystal-based photoconductor used for X-ray sensing. A 10-μm-thick layer of CsPbBr 3 QDs is spin-coated onto the substrate for X-ray photon–carrier conversion. Gold (Au) electrodes are placed onto the QDs for hole–electron extraction. h , Current–voltage characteristics of the as-fabricated photoconductor, recorded with and without X-ray illumination. Full size image Figure 2e presents a plausible mechanism for the high-intensity radioluminescence from the perovskite nanocrystals. At the initial conversion stage, an incident X-ray photon with energy lower than a few hundred kiloelectronvolts interacts with the lattice atoms of a perovskite nanocrystal, predominantly through the photoelectric effect. During this process a large number of high-energy electrons and holes can be created, and electronic transport occurs between the perovskite nanocrystals (Fig. 2f ). The hot electrons and holes are then quickly thermalized in the conduction and valence band edges. The X-ray-induced charge carriers in the perovskite nanocrystals were experimentally confirmed by measuring the current through a photoconductor upon X-ray illumination (Fig. 2g, h ). The trapping and radiative recombination of electron–hole pairs can be controlled to produce a desired luminescence colour by adjusting the bandgap energy. The mechanism of intense X-ray scintillation could be attributed in part to the strong X-ray stopping power and quantum confinement effects of perovskite nanocrystals. Additionally, the scintillation process is dominated by the presence of highly emissive triplet excited states (Fig. 2e ), large absorption cross-section within the bandgap (Extended Data Fig. 9a, b ) and fast emission output (Extended Data Fig. 9c–e ), which are characteristics of perovskite nanocrystals 27 , 28 , 32 . The solution-processability of the perovskite nanocrystals makes it possible to fabricate a thin-film scintillator device for ultrasensitive X-ray detection. In this device (Fig. 3a ), spin-coated CsPbBr 3 nanocrystals are used for X-ray sensing by converting high-energy X-ray photons into visible emission, which is readily detectable by a photomultiplier tube. A favourable characteristic of the prototype X-ray detector is its linear response to the X-ray dose rate, covering a range as broad as four orders of magnitude (Extended Data Fig. 10 ). The lowest detectable dose rate for X-ray detection is demonstrated to be 13 nGy s −1 . This value is about 420 times lower than the dose typically used for X-ray diagnostics (5.5 μGy s −1 ) 14 . This scintillation photodetector also exhibits a very fast response (scintillation decay time, τ = 44.6 ns) upon excitation with pulsed photons (661 keV) from a portable 137 Cs source (Fig. 3b ). The fast response to X-ray photons is critical to scintillation performance in medical radiography. The photostability of the perovskite nanocrystals was further examined under continuous or repeated cycles of X-ray illumination, as shown in Fig. 3c . Fig. 3: Ultrasensitive X-ray sensing and radiography using CsPbBr 3 nanocrystals. a , Radioluminescence measurements for a CsPbBr 3 -based scintillator as a function of dose rate. 
The left inset shows radioluminescence profiles measured at low dose rates. The detection limit of 13 nGy s −1 is derived from the slope of the fitting line, with a signal-to-noise ratio of 3. The right inset shows a schematic of the X-ray photodetector, which consists of a CsPbBr 3 nanocrystal thin film (about 120 μm thickness), a polydimethylsiloxane (PDMS) layer and a photomultiplier tube (PMT). All measurements were performed three times; error bars represent the s.d. b , Measured radioluminescence decay of the CsPbBr 3 -based scintillator under excitation with a 137 Cs source (photon energy, 661 keV). The scintillation decay time is τ = 44.6 ns. c , Photostability of the CsPbBr 3 -based scintillator against continuous X-ray irradiation (wavelength λ = 530 nm, 50 kV; top) and repeated cycles of X-ray excitation at 30 kV with a time interval of 30 s ( λ = 530 nm; bottom). d , Schematic of the experimental setup used for real-time X-ray diagnostic imaging of biological samples. A beetle is placed between the X-ray source and a scintillation platform covered with perovskite QDs. e , f , Bright-field ( e ) and X-ray ( f ) images of the sample, recorded with a digital camera. The X-ray images were recorded at a voltage of 50 kV. Full size image To assess the suitability of the perovskite nanocrystals as scintillators for X-ray phase-contrast imaging, we implanted a metallic needle into a green scarab beetle and imaged the biological specimen with X-rays against a background substrate comprising a thin film of solution-processed CsPbBr 3 nanocrystals (Fig. 3d ). We note that the CsPbBr 3 nanocrystals were chosen for this demonstration because their green emission at 530 nm matches well with the maximum wavelength response of a complementary metal-oxide-semiconductor sensor. As shown in Fig. 3e, f , owing to the large difference between the X-ray stopping powers of the needle and the beetle, the needle inside the beetle is clearly revealed by phase-contrast imaging recorded using a common digital camera. The concept of direct X-ray contrast imaging through the use of high-efficiency perovskite nanocrystals is readily applicable to high-throughput electronics inspection and tissue imaging, where common digital cameras can be conveniently used (Extended Data Fig. 11 ; Extended Data Table 2 ). We took a step further and tested the compatibility of the perovskite nanocrystals with commercial flat-panel X-ray detectors equipped with α-Si photodiode arrays (Fig. 4a, b ). As shown in Fig. 4c , the perovskite-nanocrystal-based X-ray detector shows a modulation transfer function of 0.72 at a spatial resolution of 2.0 line pairs per millimetre, which is much higher than the corresponding value for commercially used CsI:Tl-based flat-panel X-ray detectors (0.36 at 2.0 line pairs per millimetre). This high spatial resolution could be ascribed to the lower degree of light scattering in the nanoparticle-based thin film compared with that occurring in commercial bulk-scintillator-based films made of thick polycrystalline ceramics or long micropillars. We further used the prototype device to image the internal structures of electronic circuits and an Apple iPhone with a low X-ray dose of 15 μGy (Fig. 4d–f ). Unlike CsI:Tl scintillators, which have the issue of afterglow luminescence (scintillation decay time of 1,000 ns), our perovskite nanocrystals have a very fast response (44.6 ns) to X-rays, making them ideal for dynamic real-time X-ray imaging. Fig.
4: Prototype perovskite-nanocrystal-based flat-panel detector for digital radiography. a , Multilayered design of the flat-panel X-ray imaging system consisting of a thin-film-transistor (TFT) sensor panel, a pixelated α-silicon photodiode array, a CsPbBr 3 perovskite nanocrystal thin film (about 75 μm thick) and a protective Al foil cover (40 μm). b , Photograph of the packaged flat-panel detector. c , Spatial resolution of the X-ray imaging system, characterized by the modulation transfer function under 15 μGy of X-ray exposure. The blue circles and purple line show measured values and the black line is a fit to the data. d , e , Digital photograph of a network interface card ( d ) and corresponding X-ray image obtained using the flat-panel detector (70 kV and 2.5 mGy s −1 exposure for 6 ms) ( e ). f , Comparison of X-ray images of an Apple iPhone acquired with the perovskite scintillator deposited on an α-Si photodiode panel (left) and only with an α-Si photodiode (right). Full size image In conclusion, we have demonstrated inorganic perovskite nanocrystals as a new class of scintillators that are capable of converting small doses of X-ray photons into multicolour visible light. Considering the material’s solution-processability and practical scalability, we envision that these scintillators are suitable for the mass production of ultrasensitive X-ray detectors and large-area, flexible X-ray imagers. Compared to conventional CsI:Tl scintillators—whose use is constrained by the risk of thallium poisoning, the presence of afterglow and high-temperature synthesis—perovskite nanocrystals offer several outstanding attributes, including relatively low toxicity, low-temperature solution synthesis, fast scintillation response and high emission quantum yield. Although there is still much to be learned regarding the origin of nanocrystal scintillation, these perovskite nanocrystals may hold substantial promise for advancing the X-ray sensing and imaging industry. The thermal and environmental instability issues that are often associated with perovskite materials in photovoltaic and light-emitting-diode applications could be largely avoided in X-ray scintillation settings. Methods Chemicals Caesium carbonate (Cs 2 CO 3 , 99.9%), lead( ii ) chloride (PbCl 2 , 99.99%), lead( ii ) bromide (PbBr 2 , 99.99%), lead( ii ) iodide (PbI 2 , 99.99%), oleylamine (technical grade 70%), oleic acid (technical grade 90%), 1-octadecene (technical grade 90%) and cyclohexane (chromatography grade 99.9%) were purchased from Sigma-Aldrich. Silicon wafers were obtained from Xilika Crystal Polishing Material Co., Ltd (Tianjin, China). SU-8 photoresist (2050) and developer solution were purchased from Microchem Corp. (Newton, MA). A Sylgard 184 silicone elastomer kit was purchased from Dow Corning for the preparation of polydimethylsiloxane (PDMS) substrates. Crystals of CsI:Tl, Bi 4 Ge 3 O 12 , YAlO 3 :Ce and PbWO 4 scintillators were purchased from Zhonghelixin Co., Ltd (Chengdu, China). CdTe QDs were obtained from Xingzi New Material Technology Development Co., Ltd (Shanghai, China). Unless otherwise noted, all the reagents were used without additional treatment. Synthesis of Cs-oleate as a caesium precursor In a typical synthesis procedure, Cs 2 CO 3 (0.4 g, 1.23 mmol), oleic acid (1.25 ml) and octadecene (15 ml) were added to a two-neck round-bottom flask (50 ml). The resulting mixture was heated to 100 °C under vigorous stirring and vacuum conditions for 0.5 h.
After that, a nitrogen purge and vacuum were alternately applied to the flask three times to remove moisture and O 2 . Subsequently, the mixture was heated to 150 °C and the solution became clear, indicating the completion of the reaction between Cs 2 CO 3 and oleic acid. The Cs-precursor solution was kept at 150 °C in a nitrogen atmosphere before the synthesis of the perovskite nanocrystals. Synthesis of CsPbX 3 (X = Cl, Br or I) nanocrystals The CsPbX 3 perovskite QDs were synthesized by a modified hot-injection procedure 29 . In a typical experiment, PbX 2 (0.36 mmol each for X = Cl, Br or I), oleic acid (1.0 ml), oleylamine (1.0 ml) and octadecene (10 ml) were added to a two-neck round-bottom flask (50 ml). The resulting mixture was heated to 100 °C under vigorous stirring and vacuum conditions for 0.5 h, at which time the moisture residue was removed by purging with nitrogen and vacuum suction. Then the mixture was heated to 160 °C until the PbX 2 precursors dissolved completely. A hot Cs-oleate precursor solution (1 ml) was injected quickly into the above reaction mixture. After 5 s of reaction, the flask was transferred into an ice bath. The CsPbX 3 QDs were obtained by centrifugation at 13,000 r.p.m. for 10 min and stored in 4 ml of cyclohexane before further use. Mixed-halide perovskite QDs were synthesized to tune the luminescence colour. Samples 1–12 correspond to the as-synthesized CsPbCl 3 (1), CsPbCl 2 Br (2), CsPbCl 1.5 Br 1.5 (3), CsPbClBr 2 (4), CsPbCl 2.5 Br 0.5 (5), CsPbBr 3 (6), CsPbBr 2 I (7), CsPbBr 1.8 I 1.2 (8), CsPbBr 1.5 I 1.5 (9), CsPbBr 1.2 I 1.8 (10), CsPbBrI 2 (11) and CsPbI 3 (12) QDs. Growth of lead halide perovskite single crystals The growth of CsPbBr 3 single crystals was carried out according to a method described in the literature 34 . In a typical procedure, CsBr (0.64 g, 3 mmol) and PbBr 2 (2.2 g, 6 mmol) were dissolved in 3 ml of dimethyl sulfoxide and stirred for 1 h. Subsequently, 1.5 ml of the mixture was transferred into a vial heated at 60 °C, and the temperature was raised to 100 °C with a heating rate of 10 °C h −1 . At 100 °C, the solution was filtered and then gradually heated to 120 °C. We observed the growth of small-sized crystals with increasing temperature. The resulting crystals were washed with hot dimethyl sulfoxide and dried in vacuo at 100 °C for 1 h. For the synthesis of CH 3 NH 3 PbBr 3 (MAPbBr 3 ) single crystals, a mixture of PbBr 2 and CH 3 NH 3 Br (1.5 M each) was dissolved in a solution of N,N-dimethylformamide (1 ml) at room temperature. The solution was purified by passing through a polytetrafluoroethylene filter with a pore size of 0.22 μm. The growth of the MAPbBr 3 single crystals was carried out in an oil bath heated at 60 °C and under ambient pressure. Synthesis of fluorescent carbon dots Fluorescent carbon dots were synthesized by a hydrothermal method. In a typical experiment, ammonium citrate (0.972 g, 4.0 mmol) was first dissolved in a 20-ml water solution. The solution was then transferred into a 30-ml Teflon-lined vessel at room temperature while stirring. Subsequently, the solution was heated to 190 °C and kept at that temperature for 10 h. After cooling to room temperature, the product was purified using dialysis; the dialysis membrane had a molecular-weight cut-off of about 2,000. The carbon dot solution was concentrated using a rotary evaporator.
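The mixed-halide series above tunes the emission colour continuously from blue to red. A minimal sketch of the idea, using a linear (Vegard-like) interpolation between approximate end-member emission wavelengths drawn from the general CsPbX 3 nanocrystal literature; both the end-member values and the assumption of linearity are simplifications, since real alloys can bow away from a straight line.

```python
# Crude emission-wavelength estimate for mixed-halide CsPbX3 QDs by
# linear interpolation between approximate end-member emissions
# (literature ballpark: CsPbCl3 ~410 nm, CsPbBr3 ~515 nm, CsPbI3 ~690 nm).
# Illustrative only; real mixed-halide emission can deviate from linearity.

END_MEMBERS_NM = {"Cl": 410.0, "Br": 515.0, "I": 690.0}

def estimate_emission_nm(halides):
    """halides: dict mapping halide symbol -> stoichiometric coefficient."""
    total = sum(halides.values())
    return sum(END_MEMBERS_NM[x] * n / total for x, n in halides.items())

for label, comp in [("CsPbCl1.5Br1.5 (sample 3)", {"Cl": 1.5, "Br": 1.5}),
                    ("CsPbBr3 (sample 6)",        {"Br": 3.0}),
                    ("CsPbBr1.5I1.5 (sample 9)",  {"Br": 1.5, "I": 1.5})]:
    print(f"{label}: ~{estimate_emission_nm(comp):.0f} nm")
# ~460 nm (blue), ~515 nm (green) and ~600 nm (orange-red), in line with
# the blue, green and red films shown in Fig. 1c.
```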
Preparation of silica-coated perovskite nanocrystals The stability of perovskite nanocrystals was improved by coating with a silicon dioxide (silica) layer according to the literature 35 . In a typical procedure, the CsPbBr 3 QDs, dispersed in cyclohexane (2 ml), were introduced into a 50-ml flask containing 10 ml of toluene solution (≥99.5%; AnalaR NORMAPUR). Then 100 μl of tetramethoxysilane was injected quickly into the mixture at room temperature. After stirring for 2 h, the products were isolated through centrifugation at 13,000 r.p.m. for 8 min. The silica-coated CsPbX 3 (X = Br or I) QDs were either dispersed in cyclohexane or dried in air. Physical characterization Powder X-ray diffraction characterization was carried out with an ADDS wide-angle X-ray powder diffractometer with Cu Kα radiation (wavelength, λ = 1.54184 Å). TEM imaging was performed using a FEI Tecnai G20 transmission electron microscope with an accelerating voltage of 200 kV. X-ray photoelectron spectroscopy analysis was carried out using a Thermo Escalab 250Xi instrument equipped with Al Kα monochromatized X-rays at 1,486.6 eV. Absorption spectra were measured with an ultraviolet–visible spectrophotometer (UV-2450, Shimadzu, Japan). Photoluminescence and radioluminescence spectra were obtained with an Edinburgh FS5 fluorescence spectrophotometer (Edinburgh Instruments Ltd, UK) equipped with a miniature X-ray source (Amptek, Inc.). Photographs of the X-ray-induced luminescence were acquired with a digital camera (Nikon D7100 with AF Micro-Nikkor 60mm f/2.8D). For the time-resolved photoluminescence measurements, a pulsed excitation source was used. The scintillation decay measurement was carried out at the Institute of High Energy Physics of the Chinese Academy of Sciences with a 137 Cs source used for the pulsed excitation. The effective scintillation decay time ( τ eff ) can be calculated using the following formula: $$\tau_{\mathrm{eff}}=\frac{1}{I_{0}}\int_{0}^{\infty}I(t)\,\mathrm{d}t$$ where I ( t ) and I 0 denote the radioluminescence (or photoluminescence) intensity as a function of time, t , and the maximum intensity, respectively. Measurement of photoluminescence quantum yield The quantum yield was determined with an optical spectrometer equipped with an integrating sphere. Perovskite QDs were dispersed in cyclohexane. The excitation and luminescence emission were detected by a photomultiplier tube (PMT) through total internal reflection in the integrating sphere. The photoluminescence quantum yield (PLQY) was calculated according to PLQY = P sample /( S ref − S sample ), where S ref and S sample are the excitation light intensities not absorbed by the solvent and the sample, respectively, and P sample is the integrated emission intensity of the sample (Extended Data Fig. 5e ). Measurement of exciton binding energy in perovskite nanocrystal scintillator The exciton binding energy ( E a ) was estimated by measuring the temperature-dependent radioluminescence intensity. By fitting data derived from the integrated luminescence intensity of the CsPbBr 3 QD scintillator with the Arrhenius formula: $$I(T)=\frac{I(T_{0})}{1+CT\exp[-E_{\mathrm{a}}/(k_{\mathrm{B}}T)]}$$ where I ( T 0 ) is the radioluminescence intensity at the low-temperature ( T 0 ) limit, k B is the Boltzmann constant, C is a constant and T is the temperature, we obtain an exciton binding energy of 49 meV.
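Both formulas above are straightforward to apply numerically. The sketch below computes τ eff by trapezoidal integration of a synthetic single-exponential decay and recovers the exciton binding energy from an Arrhenius fit; the synthetic data are generated with the paper's reported values (τ = 44.6 ns, E a = 49 meV) purely to show that the procedure returns them, and real measured intensities would be substituted in practice.

```python
import numpy as np
from scipy.integrate import trapezoid
from scipy.optimize import curve_fit

KB = 8.617e-5  # Boltzmann constant, eV/K

# --- Effective decay time: tau_eff = (1/I0) * integral of I(t) dt ---
t = np.linspace(0, 2000, 20001)      # ns
I = np.exp(-t / 44.6)                # synthetic decay, normalized (I0 = 1)
tau_eff = trapezoid(I, t) / I.max()
print(f"tau_eff ~ {tau_eff:.1f} ns") # ~44.6 ns for a pure exponential

# --- Exciton binding energy from the Arrhenius fit ---
def arrhenius(T, I0, C, Ea):
    return I0 / (1 + C * T * np.exp(-Ea / (KB * T)))

T_data = np.linspace(80, 300, 23)             # K
I_data = arrhenius(T_data, 1.0, 0.05, 0.049)  # synthetic intensities

popt, _ = curve_fit(arrhenius, T_data, I_data, p0=(1.0, 0.01, 0.03))
print(f"fitted Ea ~ {popt[2] * 1e3:.0f} meV")  # ~49 meV
```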
Fabrication of perovskite nanocrystal scintillator films on PDMS substrates The PDMS substrates were fabricated by a standard soft lithography microfabrication technique. Briefly, a photomask was first designed using Adobe Illustrator CS6. A 60-μm-thick layer of negative photoresist (SU-8 2015; 2,500 r.p.m., 60 s) was spin-coated onto a silicon wafer (3 inch; 1 inch = 2.54 cm). The wafer was prebaked at 60 °C for 10 min and then at 85 °C for 5 min. The resulting photoresist on the wafer was irradiated by an ultraviolet lamp for 20 s, followed by a post-baking treatment in an oven at 75 °C for 5 min. Next, the desired microstructure on the silicon wafer was produced using a developer solution. The PDMS substrates were fabricated with a premixed PDMS prepolymer and curing agent (10:1 by mass) under vacuum conditions, followed by heat treatment at 80 °C for 2 h. The PDMS replicas were carefully peeled off from the master. Finally, perovskite QDs dispersed in cyclohexane were coated onto the PDMS substrate. Radioluminescence measurement for perovskite nanocrystal scintillators The measurement of X-ray-induced luminescence was performed using a solid film comprising perovskite QDs. We note that perovskite QDs dispersed in solution are not suitable for scintillation characterization under X-ray excitation, because a low population of QDs in solution is inefficient for X-ray absorption. Unlike under visible-light excitation, a quartz cuvette is not used for measuring scintillation luminescence under X-ray excitation, because the excitation can be strongly absorbed by the cuvette. The scintillation decay times 36 of CsI:Tl, Bi 4 Ge 3 O 12 , YAlO 3 :Ce and PbWO 4 crystal scintillators are listed in Extended Data Table 1 . X-ray photoconductor devices To fabricate the X-ray photoconducting device, silicon wafers with a 300-nm-thick SiO 2 layer were first cleaned by sonication in acetone, ethanol and deionized water separately. After drying with flowing nitrogen, the substrates were treated with oxygen plasma for 6 min. The solution of CsPbBr 3 QDs was spin-coated onto the Si/SiO 2 substrates at 500 r.p.m. for 30 s and subsequently annealed at 100 °C for 5 min. This procedure was repeated three times to produce a film with a thickness of about 10 μm. After that, 100-nm-thick gold electrodes were deposited onto the CsPbBr 3 QD film by thermal evaporation, using a shadow mask to control the size of the deposition. For the X-ray photon-to-current measurement, we used a commercially available, miniaturized X-ray tube (Amptek). The target in the X-ray tube was made of gold and the maximum output was 4 W. In our measurement, the X-ray tube voltage was kept at 50 kV while the peak X-ray energy was set at 10 keV with an Al/W filter and a 2-mm-diameter brass collimator. The distance between the X-ray source and the X-ray photoconducting device was about 30 cm. The current–voltage measurement of the devices was performed using a Signotone Micromanipulator S-1160 probe station equipped with a Keithley 4200 Semiconductor Parametric Analyzer. All the experiments were carried out under ambient conditions. X-ray scintillation detector and imaging The X-ray scintillation detector was constructed by coating a PDMS substrate with perovskite QDs (layer thickness of 120 μm), followed by the attachment of a PMT. In a typical procedure, a solution of CsPbBr 3 QDs was spin-coated onto the PDMS substrate. The PDMS substrate was then coupled to the PMT for maximized collection of visible photons.
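The 13 nGy s −1 detection limit quoted earlier follows from the slope of this detector's linear dose response, evaluated at a signal-to-noise ratio of 3. A minimal sketch of that derivation on synthetic data; the slope and noise level below are invented placeholders chosen only so the output lands near the reported figure, not the paper's calibration values.

```python
import numpy as np

# Limit of detection for a linear detector response S = slope * dose_rate:
# LOD = 3 * sigma_noise / slope (signal-to-noise ratio of 3).
# All numbers are illustrative placeholders.

rng = np.random.default_rng(0)
dose_rates = np.logspace(-2, 2, 20)        # uGy/s
true_slope, sigma_noise = 1.0e4, 43.0      # arbitrary detector units

signal = true_slope * dose_rates + rng.normal(0, sigma_noise, dose_rates.size)

slope = np.polyfit(dose_rates, signal, 1)[0]  # linear calibration fit
lod_ugy = 3 * sigma_noise / slope             # uGy/s at SNR = 3
print(f"LOD ~ {lod_ugy * 1e3:.0f} nGy/s")     # ~13 nGy/s for these numbers
```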
For X-ray detection, a range of X-ray dose rates (0.013–278 μGy s −1 ) was applied by adjusting the current and voltage of the X-ray source. For X-ray imaging, a plastic disk coated with CsPbBr 3 nanocrystals was used. A green scarab beetle implanted with a metallic needle was employed as a specimen for X-ray imaging. In vivo multicolour optical bioimaging All the animal experiments were performed in compliance with institutional guidelines. Silica-coated perovskite QDs (CsPbBr 3 , CsPbBr 1.5 I 1.5 , CsPbBr 1.2 I 1.8 ; 100 μg, 50 μl) dispersed in a phosphate-buffered saline solution were subcutaneously injected into the Balb/c nude mice (age, 4–6 weeks; weight, 18 g). An animal imaging system (Advanced Molecular Imager, Cold Spring Biotech Corp., Shanghai) equipped with an X-ray source was used to carry out in vivo radioluminescence imaging of the mice. The exposure time for in vivo imaging was set at 1 s. For in vivo multicolour optical imaging, optical filters (530 nm, 630 nm and 670 nm) were used to selectively record the X-ray-induced luminescence at different emission wavelengths (Extended Data Fig. 11 ). Construction of perovskite-based flat-panel X-ray imaging system The α-Si photodiode array backplane was customized from commercial α-Si/CsI:Tl detectors supplied by iRAY Technology Shanghai, Inc. The active area of a photodiode array is 43.0 cm × 43.0 cm, consisting of 3,072 × 3,072 square pixels with a pixel pitch of 139 μm. CsPbBr 3 nanocrystals were first dispersed in cyclohexane. We coated the photodiode arrays (8.0 cm × 8.0 cm) with a thin film (thickness, 75 μm) of nanocrystals using a solution-processing method. After evaporation of cyclohexane, an aluminium film (40 μm thick) was added under vacuum, in a packaging process similar to that used in commercial CsI:Tl-based X-ray imaging systems. The aluminium film was used to protect the scintillators from moisture and light soaking. We note that a reflecting layer was coated on the surface of the aluminium film to enhance the light collection into the photodiode elements. The power consumption was 25 W for full-image acquisition and the X-ray source was operated at a voltage of 70 kV. X-ray images of electronic circuit boards were acquired with an X-ray exposure of 2.5 mGy s −1 for 6 ms, resulting in a dose of 15 μGy. The spatial resolution was determined by measurement of the modulation transfer function. Radioluminescence analysis using synchrotron radiation The characterization of the yield of X-ray-induced luminescence near electronic-shell edges was conducted using a synchrotron beamline at the Shanghai Synchrotron Radiation Facility. A thin film of CsPbBr 3 nanocrystals was cast onto a PDMS substrate. The X-ray excitation energies were 10–38 keV, and a portable spectrophotometer (Ocean Optics) was used to measure the radioluminescence. Density functional theory calculation For the calculation of the projected partial density of states (PDOS), density functional theory (DFT) calculations were carried out. We used the Cambridge Serial Total Energy Package (CASTEP) source code to perform the calculations with the rotation-invariant DFT+U method. In a typical procedure, a simple cubic phase with \(Pm\bar{3}m\) symmetrical lattice arrangement was modelled for bulk-phase CsPbBr 3 . Norm-conserving pseudopotentials of the Cs, Pb and Br atoms were generated by the OPIUM code in the Kleinman–Bylander projector form.
A nonlinear partial core correction and a scalar relativistic averaging scheme were used to treat the spin–orbit coupling effect. In particular, we treated the 4 s , 4 p and 4 d states of the Br atoms as valence states, the 5 s , 5 p and 5 d states for Cs atoms, and the 5 d , 6 s and 6 p states for Pb atoms. The Rappe–Rabe–Kaxiras–Joannopoulos method was chosen to optimize the pseudopotentials during electronic minimization, particularly using a blocked-Davidson-scheme matrix diagonalization. For the calculations of the electronic states in the CsPbBr 3 material, we used self-consistent determination of the on-site U correction on the localized p orbitals of the Br sites to correct the spurious self-energy in the on-site Coulomb energy of the electrons. The on-site electronic self-energy and related wavefunction relaxation in the semicore p , d or f orbitals in mixed-valence elements were used to obtain accurate orbital eigenvalues for the electronic structures and transition levels. An ab initio two-way crossover searching calculation was performed by two functionally compiled CASTEP-17 source codes. Using the self-consistent determination, on-site Hubbard U parameters for different orbitals of the Br and Pb sites were obtained. Further, a time-dependent DFT calculation was performed with a two-electron-based Tamm–Dancoff approximation imported from the self-consistently corrected ground-state wavefunctions. Luminescence mechanisms in perovskite nanocrystal scintillator Two energy-transfer mechanisms for recombination luminescence exist in the perovskite nanocrystal scintillator. One is anisotropic electron and hole transport within the reciprocal Brillouin zone, which leads to differences in the carrier effective masses and the excitonic binding energy along different paths. This difference indicates that electronic transport within the reciprocal band structure is directionally selective for luminescence. Another plausible route is the annihilation of shallow acceptor levels (Pb vacancies), which induces an absence of recombination centres. Such intrinsic lattice defects usually produce low-excitation energy levels compared with the ideal lattice and consequently hinder energy transfer during the light-absorption process. Anisotropic transport-induced luminescence contrast Using DFT calculations, we found the bandgap of a bulk CsPbBr 3 crystal to be about 2.02 eV, whereas in the CsPbBr 3 QDs the bandgap increases slightly to 2.22 eV (Fig. 2d ). This is because the large surface-to-volume ratio in the CsPbBr 3 nanocrystal induces pronounced quantum confinement, thus leading to an enlarged vacuum Coulomb barrier for electronic transitions. We chose high-symmetry points in reciprocal space and lined up two different paths (X→R→M→Γ→R and Γ→R→M) within the Brillouin zone. As shown in the electronic band structure plot in Fig. 2d , the valence band edge and the conduction band edge are located at the same point R(1/2, 1/2, 1/2). Using effective-mass theory, we found the effective masses of electrons and holes to be anisotropic along these two directions. In the directions X→R→M→Γ→R and Γ→R→M, the effective masses for electrons were calculated to be 0.03 m 0 and 0.11 m 0 and the effective masses for holes were calculated to be 0.12 m 0 and 0.24 m 0 , respectively, where m 0 is the rest mass of the electron. Thus, the Wannier–Mott exciton binding energy and radius are different in these two directions.
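Within a hydrogenic Wannier–Mott picture, those effective masses translate directly into binding energies and exciton radii. The sketch below carries out that estimate; the effective masses are the values quoted above, while the effective dielectric constant is an assumed ballpark figure for CsPbBr 3 rather than a number from this paper.

```python
# Hydrogenic Wannier-Mott estimate of exciton binding energy and radius
# from the path-dependent effective masses quoted in the text.
# E_b = Ry * (mu/m0) / eps_r^2 ; a_exc = a_Bohr * eps_r / (mu/m0).

RYDBERG_EV = 13.606  # hydrogen Rydberg energy, eV
BOHR_NM = 0.0529     # Bohr radius, nm
EPS_R = 4.3          # assumed effective dielectric constant (ballpark)

def wannier_mott(m_e, m_h, eps_r=EPS_R):
    mu = m_e * m_h / (m_e + m_h)      # reduced mass, in units of m0
    e_b = RYDBERG_EV * mu / eps_r**2  # binding energy, eV
    radius = BOHR_NM * eps_r / mu     # exciton radius, nm
    return e_b, radius

for path, m_e, m_h in [("X-R-M-Gamma-R", 0.03, 0.12),
                       ("Gamma-R-M",     0.11, 0.24)]:
    e_b, r = wannier_mott(m_e, m_h)
    print(f"{path}: E_b ~ {e_b * 1e3:.0f} meV, radius ~ {r:.1f} nm")
# The Gamma-R-M path gives ~55 meV, comparable to the 49 meV obtained
# from the Arrhenius fit described in the Methods.
```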
By converting the reciprocal Brillouin zone area into a real-space diagram, we found that the Cs site at Γ(0, 0, 0) in the body-centred area is different in these two paths. Point R(1/2, 1/2, 1/2) denotes the position of the Pb site, whereas M(1/2, 1/2, 0) represents the location of the Br site. Owing to the different effective electron masses of the two paths, the path Γ→R→M is energetically favourable to the transport of electrons and holes. By contrast, the X→R→M direction is ruled out because the binding energies are too large to release electrons and holes for recombination. This implies a charge transfer process from the Cs site to the Pb site at the cubic apex point, finally reaching the Br site at the middle point of the cubic edge, namely, through the path Γ→R→M (Extended Data Fig. 7a ). Furthermore, we used an orbital calculation to retrieve the electronic and hole orbitals from the electronic band structure. Our results show that bound electrons stay at the Br sites at a non-bonding state in the p – π orbital level (Extended Data Fig. 7b ). Meanwhile, bound holes were found to stay at the Pb site with an s -orbital spherical distribution (Extended Data Fig. 7c ). The orbital contour plots reveal the localization of electrons and holes at perfect lattices. The stabilized charge state of the body-centred Cs site is Cs + because electrons are transferred from the Pb site to the Br site through the ionization of one s -state electron. Intrinsic lattice defects in perovskite nanocrystal scintillators Intrinsic lattice defects in perovskite nanocrystal scintillators are responsible for both luminescence and the quenching effect. Here we consider the low-energy native defects of a Br vacancy (V Br ) and a Pb vacancy (V Pb ). For V Br , the absence of one Br atom leaves one electron occupying the empty p orbitals of the nearest neighbouring Pb site. Accordingly, localized electronic orbitals were modelled for V Br in the neutral ( \({{\rm{V}}}_{{\rm{B}}{\rm{r}}}^{0}\) ) and singly positive ( \({{\rm{V}}}_{{\rm{B}}{\rm{r}}}^{+}\) ) states (Extended Data Fig. 7d, e ). Because the charge bound at the nearby Pb site is positively ionized, the p electronic orbitals of the \({{\rm{V}}}_{{\rm{Br}}}^{+}\) site show a transition from the correlated state to a repulsive behaviour between the two neighbouring Pb sites. The PDOS analysis also shows that the electronic level of V Br , which is localized at the bottom of the conduction band edge, serves as a shallow donor. The V Pb lattice defects produce an acceptor trap centre and a spin-polarized state (Extended Data Fig. 7f, g ). The singly negative state of a Pb site ( \({{\rm{V}}}_{{\rm{P}}{\rm{b}}}^{-}\) ) with one electron already captured could partially passivate the acceptor trap site with weakened charge localization. The process of local geometrical relaxation on different charge states of V Pb indicates that the Cs + sites centrosymmetrically move towards the V Pb centre (Extended Data Fig. 7h ). Upon the occurrence of a strong clustering effect to form \({{\rm{V}}}_{{\rm{Pb}}}^{2-}\) near the V Pb site, the electronically active acceptor trap centre can be completely terminated. In this case, formation of local Cs–Br motifs is possible. We also considered V Cs and \({{\rm{V}}}_{{\rm{Cs}}}^{-}\) sites in the lattice and found no effects on the electronic properties of the host lattice. Accordingly, V Br and V Pb produce the electronic and hole levels for luminescence recombination in the form of photon emission (Extended Data Fig. 7i ). 
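The donor/acceptor classification of V Br and V Pb above follows standard defect thermodynamics: the formation energy of a defect in charge state q depends linearly on the Fermi level, and the thermodynamic transition level ε(q1/q2) is the Fermi energy at which two charge states become degenerate. The sketch below illustrates that bookkeeping; all numerical inputs are placeholders chosen only so that the example prints a donor level near the conduction band edge, and none of them are values from this study.

```python
# Standard defect-thermodynamics bookkeeping used to classify vacancies as
# shallow donors or acceptor traps. All numeric inputs are hypothetical.
def formation_energy(e_defect, e_host, mu_removed, q, e_vbm, e_fermi, e_corr=0.0):
    """E_f(q) = E_tot(defect, q) - E_tot(host) + mu(removed atom)
              + q * (E_VBM + E_Fermi) + finite-size correction."""
    return e_defect - e_host + mu_removed + q * (e_vbm + e_fermi) + e_corr

def transition_level(ef_q1, q1, ef_q2, q2):
    """Fermi level (relative to the VBM) where charge states q1 and q2 cross:
    eps(q1/q2) = (E_f(q1) - E_f(q2)) / (q2 - q1), E_f evaluated at E_F = 0."""
    return (ef_q1 - ef_q2) / (q2 - q1)

# Hypothetical total energies (eV) for a Br vacancy in its 0 and +1 states.
e_host, e_vbm, mu_br = -1000.0, 0.0, -2.0
ef0 = formation_energy(-996.5, e_host, mu_br, q=0, e_vbm=e_vbm, e_fermi=0.0)
ef1 = formation_energy(-998.4, e_host, mu_br, q=+1, e_vbm=e_vbm, e_fermi=0.0)
print(f"eps(+/0) = {transition_level(ef1, +1, ef0, 0):.2f} eV above the VBM")
```

With these placeholder energies the (+/0) level sits 1.9 eV above the valence band maximum, i.e. just below a ~2 eV conduction band edge, which is the sense in which a level like V Br acts as a shallow donor.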
Further, we calculated the excitation energies and thermodynamic transition levels in the CsPbBr 3 QDs. From the internal bulk lattice to the surface region, the dimensions of the material decrease but its surface-to-volume ratio increases. The electronic donor trap levels experience a transition from a localized state below the conduction band edge to a delocalized state in the conduction band (Extended Data Fig. 7j ). During the process of radiation ionization and release, the number of bound electrons is increased accordingly. By contrast, with a decreased dimension of the host lattice, the trapping ability of the acceptor is decreased as the hole level shifts from a delocalized state in the valence band to a localized state above the valence band. Therefore, the quenching effect in the bulk CsPbBr 3 materials for luminescence recombination is caused by annihilation of a hole level that is deeply buried in the valence band. The structural transition from the QD to the bulk form occurs from the surface to the bulk, and thus the hole level is annihilated. Intrinsic quantum confinement in CsPbBr 3 nanocrystals The intrinsic effect of quantum confinement in CsPbBr 3 nanocrystals was examined by an additional theoretical study of their surface electronic properties. In a typical procedure, we first built a simplified model of the CsPbBr 3 structure composed of 293 atoms with a particle size of 12.06 Å, namely, a lattice group (6 × 6 × 6) truncated from the bulk CsPbBr 3 crystal, using the radial coordinated structural formation program (RCSFP) (Extended Data Fig. 8a ). DFT calculations show that the orbital contour plots of the CsPbBr 3 QD comprise the highest occupied molecular orbital (HOMO), the lowest unoccupied molecular orbital (LUMO) and surface-vacancy-induced Coulomb trapping (SVIC) states (Extended Data Fig. 8b ). The electronic structures show that the SVIC state is formed owing to unsaturated p orbitals of surface Br sites and is electronically localized at the apex corner regions of the QD. Additionally, the PDOS of the CsPbBr 3 QD indicates that such SVIC sites are mainly distributed near the Fermi level, beyond the valence band maximum, thus exhibiting a hole-like feature and being strongly confined by the LUMO orbitals (Extended Data Fig. 8c ). This leads to the suppression of the long-distance transport of electron–hole pairs across the particle surface or between particles. Furthermore, to investigate surface confinement, we built a model to simulate the energetic evolution on the surface of the QDs and calculated the relative energy level of the SVIC state as a function of particle distance (Extended Data Fig. 8d ). At a distance of 12.06 Å from the particle surface, the SVIC energy implies strong hole-like confinement, lying merely 0.056 eV above the valence band maximum. Our results suggest that the mean confinement path of electronic transport within the lattice is approximately 10.32 Å (Fig. 2f ). Indeed, the intrinsic energetics at the surface of the QDs favour confinement of the thermalized low-energy excitons inside the nanocrystal, resulting in a high yield of X-ray scintillation light. Data availability The data that support the findings of this study are available from the corresponding authors upon reasonable request.
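As a rough cross-check of the confinement picture in this section (an illustration, not an analysis from the paper), a particle-in-a-sphere (Brus-type) estimate can be inverted to ask what nanocrystal radius would reproduce the calculated 2.22 − 2.02 = 0.20 eV gap opening, reusing the Γ→R→M effective masses quoted earlier and neglecting the Coulomb term.

```python
import numpy as np

# Brus-model inversion: the lowest confinement energy of an exciton with
# reduced mass mu in a sphere of radius R is ~ hbar^2 pi^2 / (2 mu R^2)
# (Coulomb term neglected). Solve for R that gives the 0.20 eV gap opening.
HBAR = 1.054571817e-34   # J s
M0 = 9.1093837015e-31    # electron rest mass, kg
EV = 1.602176634e-19     # J per eV

me, mh = 0.11, 0.24                   # Gamma->R->M effective masses (m0 units)
mu = me * mh / (me + mh) * M0         # reduced exciton mass (kg)
delta_e = 0.20 * EV                   # QD gap minus bulk gap

radius_m = np.pi * HBAR / np.sqrt(2.0 * mu * delta_e)
print(f"implied radius ~ {radius_m * 1e9:.1f} nm "
      f"(diameter ~ {2e9 * radius_m:.0f} nm)")
```

The implied radius of roughly 5 nm (diameter of order 10 nm) is at least the right order of magnitude for colloidal CsPbBr 3 nanocrystals, supporting the qualitative attribution of the gap widening to quantum confinement.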
Medical imaging, such as X-ray or computerised tomography (CT), may soon be cheaper and safer, thanks to a recent discovery made by chemists from the National University of Singapore (NUS). Professor Liu Xiaogang and his team from the Department of Chemistry under the NUS Faculty of Science have developed novel lead halide perovskite nanocrystals that are highly sensitive to X-ray irradiation. By incorporating these nanocrystals into flat-panel X-ray imagers, the team developed a new type of detector that could sense X-rays at a radiation dose about 400 times lower than the standard dose used in current medical diagnostics. These nanocrystals are also cheaper than the inorganic crystals used in conventional X-ray imaging machines. "Our technology uses a much lower radiation dose to deliver higher resolution images, and it can also be used for rapid, real-time X-ray imaging. It shows great promise in revolutionising imaging technology for the medical and electronics industries. For patients, this means lower cost of X-ray imaging and less radiation risk," said Prof Liu. The team's research breakthrough was the result of a collaborative effort with researchers from Australia, China, Hong Kong, Italy, Saudi Arabia, Singapore and the United States. It was first published in the online edition of Nature on 27 August 2018, and a patent for this novel technology has been filed. Nanocrystals light the way for better imaging X-ray imaging technology has been widely used for many applications since the 1890s. Among its many uses are medical diagnostics, homeland security, national defence, advanced manufacturing, nuclear technology, and environmental monitoring. A crucial part of X-ray imaging technology is scintillation, which is the conversion of high-energy X-ray photons to visible luminescence. Most scintillator materials used in conventional imaging devices comprise expensive and large inorganic crystals that have low light emission conversion efficiency. Hence, they require a high dose of X-rays for effective imaging. Conventional scintillators are also usually produced using a solid-growth method at a high temperature, making it difficult to fabricate thin, large and uniform scintillator films. To overcome the limitations of current scintillator materials, Prof Liu and his team developed novel lead halide perovskite nanocrystals as an alternative scintillator material. From their experiments, the team found that their nanocrystals can detect small doses of X-ray photons and convert them into visible light. They can also be tuned to light up, or scintillate, in different colours in response to the X-rays they absorb. With these properties, these nanocrystals could achieve higher resolution X-ray imaging with lower radiation exposure. To test the application of the lead halide perovskite nanocrystals in X-ray imaging technology, the team replaced the scintillators of commercial flat-panel X-ray imagers with their nanocrystals. "Our experiments showed that using this approach, X-ray images can be directly recorded using low-cost, widely available digital cameras, or even using cameras of mobile phones. This was not achievable using conventional bulky scintillators. In addition, we have also demonstrated that the nanocrystal scintillators can be used to examine the internal structures of electronic circuit boards. This offers a cheaper and highly sensitive alternative to current technology," explained Dr.
Chen Qiushui, a Research Fellow with the NUS Department of Chemistry and the first author of the study. Using nanocrystals as scintillator materials could also lower the cost of X-ray imaging as these nanocrystals can be produced using simpler, less expensive processes and at a relatively low temperature. Prof Liu elaborated, "Our creation of perovskite nanocrystal scintillators has significant implications for many fields of research and opens the door to new applications. We hope that this new class of high performance X-ray scintillator can better meet tomorrow's increasingly diversified needs." Next steps and commercialisation opportunities To validate the performance of their invention, the NUS scientists will be testing the stability of the nanocrystals over longer periods, and at different temperatures and humidity levels. The team is also looking to collaborate with industry partners to commercialise their novel imaging technique.
10.1038/s41586-018-0451-1
Medicine
Good sleep can increase women's work ambitions
Leah D. Sheppard et al, Too Tired to Lean In? Sleep Quality Impacts Women's Daily Intentions to Pursue Workplace Status, Sex Roles (2022). DOI: 10.1007/s11199-022-01321-1 Journal information: Sex Roles
https://dx.doi.org/10.1007/s11199-022-01321-1
https://medicalxpress.com/news/2022-10-good-women-ambitions.html
Abstract An assumption of sleep and self-regulation theories is that sleep quality impacts mood which, in turn, prompts individuals to revise their work-related goals. We propose that gender differences in emotion, emotional regulation, and career aspirations layer complexity onto these basic assumptions. In the current work, we investigate the effect of daily sleep quality – via positive affect – on intentions to pursue more status and responsibility at work (i.e., aspirations), as a function of participant gender. We test our model using experience sampling methodology, surveying 135 full-time employees residing in the United States twice daily across two consecutive work weeks (10 workdays), for a total of 2,272 observations. We find that among women, but not men, sleep quality is positively related to positive affect which, in turn, relates to greater daily intentions to pursue more status and responsibility at work. We discuss the implications of our work for research and practice. “Years ago, Sheryl [Sandberg] famously told us to ‘lean in’ but as comedian Ali Wong quips, many of us are tired, and just want to ‘lie down’” (Ly, 2017 ). This tongue-in-cheek quote taken from a professional woman’s LinkedIn blog suggests that intentions to pursue more status and responsibility at work fluctuate as a function of physical or affective states, one of which is the state of exhaustion. Though this is an intuitive notion – that the energy one has on any given day to devote to achieving their goals will shape their daily persistence and engagement at work (Ilies & Judge, 2005 ) – scholars have generally not conceived of aspirations as impacted by daily states, such as the sense of exhaustion and crankiness following a night of poor sleep. Instead, most of the literature treats aspirations as something akin to a personality trait (e.g., Huang et al., 2014 ), or at least a ‘mid-level’ characteristic – not quite as stable as personality but certainly not as situational as behavioral intentions (e.g., Judge & Kammeyer-Mueller, 2012 ). Despite this general tendency in the literature, some research has shown that even temporary situational factors can impact career aspirations. Indeed, Steffens et al. ( 2018 ) found that participants’ aspirations fluctuated as a function of being randomly assigned to receive either positive or negative feedback about their leadership potential, thereby lending support to the notion that aspirations might also fluctuate daily in response to various events or emotional states. The opening quote, in referencing Sheryl Sandberg’s work on women and leadership, implies that women’s career aspirations might be differentially impacted by situational factors relative to men’s, a suggestion that receives support from a recent collection of research. For example, Fritz & van Knippenberg ( 2017 ) found that both women’s and men’s leadership aspirations were positively impacted by the presence of a cooperative organizational climate, but for women it was the element of perceived collaboration among employees that mattered most, whereas for men it was perceived support from the organization. Meanwhile, Joo et al. ( 2018 ) found that participating in formal leadership mentoring improved the leadership self-efficacy of men mentees more than women mentees.
In the current work, we build upon this research using time-lagged experience sampling methodology (ESM) that focuses on the situational characteristic of sleep quality – a naturally varying, daily condition. Specifically, we investigate whether sleep quality influences self-reported, daily intentions to pursue more status and responsibility at work, through its influence on next-day positive affect, and the extent to which this pathway is moderated by participant gender. Figure 1 depicts our model. Fig. 1 Theoretical Model The Impact of Sleep on Work Outcomes The impact of sleep – defined as an immobile state comprised of reduced physical responsiveness (Siegel, 2005 ) – on work-related outcomes was largely overlooked for a long time (Mullins et al., 2014 ), despite the fact that well over half of Americans report sleep problems at least a few times a week (Swanson et al., 2011 ). Fortunately, interest in sleep among management scholars has increased significantly in the past decade, and sleep is now known to impact a variety of work outcomes, such as leader behavior (Barnes et al., 2016 ), ethical conduct (Barnes et al., 2011 ), and work engagement (Kuhnel et al., 2017 ; Lanaj et al., 2014 ). Most relevant to the current work, Schilpzand et al. ( 2018 ) found that sleep quality positively predicted employees’ tendency to engage in proactive goal setting the following day at work. The mechanism underlying this effect is explained by the sleep and self-regulation model (Barnes, 2012 ), which proposes that sleep influences self-regulation (i.e., the ability to effectively manage one’s goal-directed actions; Karoly, 1993 ), a component of which is emotion regulation, or an individual’s capacity to “monitor, evaluate, and modify the nature and course of an emotional response, in order to pursue his or her goals and appropriately respond to environmental demands” (Nolen-Hoeksema, 2012 , p. 163). The effectiveness of self-regulation, in turn, impacts daily work engagement and progress toward short- and long-term work goals (Rothbard & Wilk, 2011 ). Depending on observed short-term goal progress, individuals revise their goals in an upward or downward direction (Donovan & Williams, 2003 ). Drawing from this rationale, we expected that higher sleep quality would predict enhanced mood (Franzen et al., 2008 ), operationalized as daily positive affect in the current research. Positive affect, in turn, should predict greater daily intent to pursue more status and responsibility at work. Accordingly, we offer the following within-person hypotheses: Hypothesis 1 Higher sleep quality will have a positive effect on next day’s positive affect. Hypothesis 2 Higher positive affect will lead to higher daily career aspirations. Hypothesis 3 Positive daily affect will mediate the association between nightly sleep quality and career aspirations. The Moderating Role of Participant Gender Relative to men, women tend to confront more numerous and substantial obstacles in their paths to high-ranking positions in their organizations, such as gender bias and discrimination (Braddy et al., 2020 ; Eagly & Carli, 2007 ), as well as less encouragement, training, and development efforts from supervisors (Hoobler et al., 2014 ). A recent meta-analysis also suggests that women have lower leadership aspirations overall relative to men (Netchaeva et al., 2022 ), which might be partially attributable to obstacles to advancement that they face or anticipate.
In response to these findings, practical advice to organizational leaders who wish to increase the representation of women in powerful roles often focuses on lowering the structural barriers to women’s entry (Eagly & Carli, 2007 ; King, 2020 ). Specifically, scholars have asserted that organizations have a significant part to play in shaping women’s motivation to achieve high-status organizational roles, through offering women positive workplace experiences and encouragement (Ibarra et al., 2013 ). The rationale underlying this advice is that women’s intentions to pursue more status and responsibility at work are uniquely impacted by situational factors that present as hindrances or facilitators – among which daily physical and affective states might be included, though they have not received substantial research attention to date. Hindrances to and facilitators of aspirations aside, neuroscience research demonstrates that women exhibit greater emotional reactivity and less automatic emotion regulation relative to men on both self-report and electrophysiological indicators (Bradley et al., 2001 ), particularly in response to negative stimuli (Yu et al., 2020 ; Yuan et al., 2010 ). As such, the research suggests that women experience emotional states that are more prolonged and less tempered than the emotional states experienced by men (Fischer et al., 2004 ; Plant et al., 2000 ). Alternatively, the difference may stem from sociocultural expectations about emotion regulation (Eagly, 1987 ); that is, women are believed to be more emotionally reactive and expressive than men (McRae et al., 2008 ), which might mean that men experience disproportionate pressure to keep their emotions ‘in check.’ Regardless of the exact mechanism by which it occurs, research suggests there is a gender difference in the strength of the associations between daily experiences, resultant emotional states, and attitudinal or behavioral outcomes (Kring & Gordon, 1998 ). As such, we might expect that women’s emotional states exert a greater influence over their work-related attitudes and behavioral intentions relative to men’s. The research findings, however, are mixed (Atwater et al., 2016 ; Bear et al., 2014 ; Ghasemy et al., 2020 ; Molders et al., 2019 ; Ye et al., 2018 ), and might therefore be dependent on the outcome of interest. When it comes to the specific outcome of career aspirations, we would expect that men largely persist regardless of daily emotional experiences, perhaps due to stereotypes that prescribe ambition, consistency, and toughness for men (Prentice & Carranza, 2002 ). Indeed, given that having high career aspirations is prescribed for men (Eagly & Karau, 2002 ; Prentice & Carranza, 2002 ), we would predict that men’s aspirations are less impacted than women’s by emotions resulting from daily events. Accordingly, we offer the following hypotheses: Hypothesis 4 The positive effect of positive affect on career aspirations will be moderated by participant gender, such that it will be stronger for women than men. Hypothesis 5 The indirect effect of sleep quality on career aspirations via positive affect will be moderated by participant gender, such that the positive indirect effect will be stronger for women than men. Participant Age, Job Level, and Sleep Quantity We included several control variables in our model, given their theoretical and/or empirical associations with our variables of interest. At the between-person level, we controlled for participant age and current job level.
Theoretically, younger participants might express greater career aspirations than older participants simply because they have a longer period in which to achieve those aspirations. Moreover, older participants, particularly those closer to retirement, might be more inclined to express a desire to scale back as opposed to ‘lean in’ at work. It might also be the case that as women, in particular, age and confront barriers to promotion, they revise their aspirations (Netchaeva et al., 2022 ). We considered current job level as a control, as this might exhibit an association with daily career aspirations. That is, individuals who already hold positions in the highest ranks of their organizations might not reasonably have additional status and responsibility left to attain. At the within-person level, we considered the association between sleep quality and quantity , or the number of hours a participant reported in the diary portion of the study that they slept the previous night. Extant research indicates that sleep quality and sleep quantity are correlated (Barnes et al., 2011 ; Lanaj et al., 2014 ), though sleep quality exhibits stronger associations with work outcomes (Litwiller et al., 2017 ), which is why we selected this as our focal variable rather than sleep quantity. Given the association between sleep quality and quantity, we controlled for sleep quantity in our model. Method Participants This research project was reviewed and approved by the institutional review board of the authors’ university prior to the collection of data. We initially recruited 152 full-time employees who resided within the Eastern Standard Time (EST) time zone of the United States to participate in a 10-day diary study in which participants were surveyed twice daily (Monday to Friday, two consecutive weeks). Participants were recruited through the TurkPrime panel. This sample size and methodology were selected based on recent research investigating the impact of sleep on work-relevant outcomes using ESM designs (e.g., Schilpzand et al., 2018 ). One hundred and thirty-five participants responded both to a one-time survey in which we gathered demographic information (e.g., participant gender, age, current job level) and to our daily surveys (88.81% response rate), yielding a total of 2,272 observations (85.33% completion rate) across 1,135 days of data. Note that we retained data for those who completed our daily surveys for at least three days, given that three data points per participant are needed to capture and test within-person associations. The average age of participants was 39.6 years ( SD = 6.50). The racial identity of participants was as follows: White ( n = 102, 75.6%), Black ( n = 15, 11.1%), Asian ( n = 11, 8.1%), Hispanic ( n = 1, 0.7%), American Indian ( n = 1, 0.7%), and mixed race/ethnicity ( n = 5, 3.7%). Fifty-nine of the participants were men (43.7%), and 49 were supervisors (36.3%). Of the supervisory participants, 25 (51%) were women. Participants worked in a variety of industries, including education ( n = 17, 12.6%), health care ( n = 15, 11.1%), public administration ( n = 14, 10.4%) and finance ( n = 12, 8.9%). Measures Sleep Quality (Level 1) In the daily survey taken around noon, participants were asked to make an overall evaluation of the quality of their previous night’s sleep. Participants responded to a four-item scale (α = 0.81) developed by Jenkins et al. ( 1988 ), using a response format ranging from 1 ( not at all ) to 5 ( to a very large extent ).
Items were “Had trouble falling asleep”, “Had trouble staying asleep”, “Woke up after your usual amount of sleep feeling tired and worn out”, and “Woke up several times during the night.” Following previous research (e.g., Barnes et al., 2011 ), we reverse-coded the scale items so that higher scores correspond to better sleep quality. Positive Affect (Level 1) We assessed positive affect using a 10-item scale (α = 0.96) developed by Watson et al. ( 1988 ) also in the noon survey. Participants indicated the extent to which each state (e.g., interested, excited, attentive, etc.) described their current feelings on a five-point scale, ranging from 1 ( not at all ) to 5 ( extremely ). Higher scores indicate more positive affect. Career Aspirations (Level 1) We assessed daily career aspirations in the daily evening survey at about 7 p.m. with a 3-item scale (α = 0.93) that we developed for this study. Each item had the stem, “Based on the day I had today…”, which was followed by: “…I would welcome more responsibility at work,” “…I want to pursue a position with more status at work,” and “…I want a position of greater influence at work.” Participants rated the extent to which they agreed with the survey items on a five-point scale, ranging from 1 ( strongly disagree ) to 5 ( strongly agree ). Higher scores indicate greater aspirations. Measurement of Control Variables Current job level was assessed with an item adapted from the scales used by Gino et al. ( 2015 ). Participants were asked in the initial one-time demographics survey to indicate their current job level on a scale ranging from 1 ( entry-level ) to 9 ( CEO/President ). We assessed sleep quantity on the noon survey with the following item adapted from Lanaj et al. ( 2014 ): “How many hours did you sleep last night?” We also included day of the study (uncentered) as a control to address time-related trends in experience sampling (Beal, 2015 ). Conclusions remained unchanged when these controls were excluded from analyses. We retained them in the model to provide more robust support for our predictions (Becker, 2005 ). Procedure We first administered a one-time survey to assess participants’ demographic information (e.g., gender, age) and their current job level, for which they were compensated $1.50. Approximately one week later, we began administering the daily surveys, which consisted of both a noon and an evening survey. The noon survey assessed sleep quality and quantity from the previous night and current positive affect, and the evening survey assessed intentions to pursue more status and responsibility at work. The noon surveys were sent at 11:00 AM and participants were instructed to complete them by 2:00 PM. The evening surveys were sent at 7:00 PM and closed at 11:00 PM. Participants were paid $1.50 for each noon and evening survey that they completed. Altogether, a participant could earn a total of $31.50 over the course of the study. Results Measures, data, and code for this research are available upon reasonable request by e-mailing the corresponding author. Table 1 shows the percentages of within-person variance. The within-person (Level 1) variables showed significant within-person variance (ranging from 26 to 58%), justifying the use of multilevel analysis (Bolger & Laurenceau, 2013 ). It also supported the notion that career aspirations can fluctuate daily. Table 1 Variance Composition of Level 1 Variables Descriptive statistics and correlations are presented in Table 2 .
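Before turning to the substantive results, the data preparation described in the Measures section can be made concrete. The following sketch (with hypothetical column names; not the authors' code) reverse-codes the Jenkins et al. (1988) sleep items so that higher scores mean better sleep, forms the scale mean, and person-mean centers the resulting Level 1 variable, which is the centering step used in the multilevel analyses below.

```python
import pandas as pd

# Minimal scoring sketch for the twice-daily diary data. Column names are
# hypothetical; the logic follows the text: reverse-code 1-5 Jenkins items,
# average them into a scale, then person-mean (group-mean) center.
df = pd.DataFrame({
    "pid":      [1, 1, 2, 2],
    "day":      [1, 2, 1, 2],
    "jenkins1": [2, 4, 1, 3], "jenkins2": [3, 5, 2, 2],
    "jenkins3": [2, 4, 1, 4], "jenkins4": [3, 5, 1, 3],
})

jenkins = ["jenkins1", "jenkins2", "jenkins3", "jenkins4"]
df[jenkins] = 6 - df[jenkins]                  # reverse-code: 1<->5, 2<->4
df["sleep_quality"] = df[jenkins].mean(axis=1)

# Group-mean centering isolates within-person (day-to-day) fluctuation.
df["sleep_quality_c"] = (df["sleep_quality"]
                         - df.groupby("pid")["sleep_quality"].transform("mean"))
print(df[["pid", "day", "sleep_quality", "sleep_quality_c"]])
```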
Note that participant gender did not exhibit significant associations with sleep quality (or quantity) or positive affect. Participant gender was significantly correlated with daily career aspirations, such that men exhibited higher daily aspirations, as well as with current job level, such that men held higher-ranking positions in their organizations relative to women. Table 2 Means, Standard Deviations, and Correlations Preliminary Analyses We conducted multilevel confirmatory factor analyses (MCFA) using Mplus 8.4 (Muthén, 2017 ) to assess the discriminant validity of the focal constructs in our model. Results from MCFA indicate that our model fit the data well: χ 2 (116) = 642.12; CFI = 0.91; SRMR within = 0.06; RMSEA = 0.06. We used the Satorra-Bentler scaled chi-square difference test for model comparisons (Satorra & Bentler, 2010 ). The results indicated that our proposed model was better than a model in which sleep quality and positive affect were loaded together: χ 2 (118) = 1205.6; CFI = 0.81; SRMR within = 0.09; RMSEA = 0.09; Δ χ 2 (2) = 478.31, p < .01, or a model in which positive affect and daily career aspirations were combined: χ 2 (118) = 1857.53; CFI = 0.70; SRMR within = 0.12; RMSEA = 0.11; Δ χ 2 (2) = 634.07, p < .01. Hypothesis Testing A multilevel path analysis using Mplus’s Bayesian estimator was conducted to test our hypotheses. We group-mean-centered our within-person variables and estimated their associations using random slopes (Bolger & Laurenceau, 2013 ). Group-mean centering the within-person variables allowed us to test how intra-individual fluctuations in sleep quality affect positive affect and, in turn, daily career aspirations. In addition, group-mean centering helps remove between-person variance, thereby eliminating common method variance attributed to response tendency and social desirability biases (Gabriel et al., 2019 ). We left participant gender uncentered due to its dichotomous nature and modeled it as a predictor of the within-person positive affect–aspirations slope, thereby adding this cross-level interaction to the within-person models. To reduce statistical complexity, we followed previous research (e.g., Wang et al., 2011 ) and modeled the within-person controls as fixed slopes in our analyses. Table 3 Results from Multilevel Path Analysis Hypothesis 1 predicted that higher sleep quality would lead to greater positive affect, and Hypothesis 2 predicted that positive affect would positively predict daily aspirations. As shown in Table 3 , the association between sleep quality and positive affect was positive and significant ( γ = 0.29, SE = 0.04, p < .001), and positive affect was positively related to daily aspirations ( γ = 0.16, SE = 0.05, p < .001). As such, we found support for Hypotheses 1 and 2. Hypothesis 3 predicted that positive affect would mediate the association between sleep quality and aspirations. Since the traditional bootstrap approach is not available in multilevel analysis, we conducted Bayesian analyses with 20,000 iterations through Markov chain Monte Carlo simulation (Muthén, 2010 ) to test indirect and moderated indirect effects. This method has been used in recent experience-sampling research to test mediation models (e.g., Hu et al., 2020 ). An indirect effect is significant if the 95% credibility interval does not include zero.
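The logic of that interval test can be illustrated with a simple Monte Carlo stand-in for the Bayesian procedure (in the spirit of Selig and Preacher's Monte Carlo method for indirect effects, not the authors' Mplus code): draw the two path coefficients from normal sampling distributions centred on the Table 3 estimates, form their product, and read off percentile bounds.

```python
import numpy as np

# Monte Carlo sketch of an indirect-effect interval. Point estimates and SEs
# are those quoted in the text for the two paths: sleep quality -> positive
# affect (0.29, SE 0.04) and positive affect -> aspirations (0.16, SE 0.05).
rng = np.random.default_rng(1)
n_draws = 20_000
a = rng.normal(0.29, 0.04, n_draws)   # sleep quality -> positive affect
b = rng.normal(0.16, 0.05, n_draws)   # positive affect -> daily aspirations
indirect = a * b                      # distribution of the a*b product

lo, hi = np.percentile(indirect, [2.5, 97.5])
print(f"indirect effect ~ {indirect.mean():.3f}, 95% CI [{lo:.3f}, {hi:.3f}]")
```

The resulting interval excludes zero and is close to the reported indirect effect of 0.042, 95% CI [0.015, 0.077].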
Our results revealed that sleep quality was positively associated with aspirations via positive affect (coefficient = 0.042; 95% CI [0.015, 0.077]), thereby providing support for Hypothesis 3 . Hypothesis 4 predicted that the association between positive affect and aspirations would be moderated by participant gender, such that it would be stronger for women than men. As shown in Table 3 , the interaction term between positive affect and participant gender on aspirations was significant ( γ = − 0.15, SE = 0.08, p = .04). Following recommendations by Preacher et al. ( 2006 ), we conducted simple slopes tests to examine the pattern of this interaction term. Positive affect was positively related to aspirations for women (1 = men; 0 = women; γ = 0.16, SE = 0.05, p < .001) but not for men ( γ = 0.01, SE = 0.06, p = .92; see Fig. 2 ). As such, Hypothesis 4 was supported. Hypothesis 5 predicted that the indirect effect of sleep quality on aspirations via positive affect would be moderated by participant gender. Our results showed that the indirect effect was significant for women (coefficient = 0.042; 95% CI [0.015, 0.077]) but not men (coefficient = 0.000; 95% CI [-0.033, 0.032]). Also, the difference between the conditional indirect effects was significant (coefficient = 0.043; 95% CI [0.002, 0.090]; see Table 4 ). Hypothesis 5 was supported. Fig. 2 Interaction of Positive Affect and Participant Gender on Daily Career Aspirations Table 4 Summary of Indirect Effects and Conditional Indirect Effects Discussion In the current research, we investigated the impact of sleep quality on next-day intentions to pursue more status and responsibility at work (i.e., career aspirations), via positive affect, and investigated the extent to which this was moderated by participant gender. Drawing on twice-daily observations of 135 full-time employees in the U.S. generated with an experience sampling methodology that took place over 10 working days, we found that better quality sleep enhanced positive affect, and positive affect, in turn, led to higher career aspirations. However, this was only true for women. The current study makes several contributions to the literature. First, the results of this research suggest that intentions to pursue more status and responsibility may be shaped by sleep quality and self-regulation. Though some research has certainly shown that career aspirations can be meaningfully impacted by a variety of situational factors (e.g., Hoobler et al., 2014 ; Joo et al., 2018 ) and even experimental manipulation (e.g., Steffens et al., 2018 ), our results go further to suggest that it is appropriate to conceive of career aspirations as impacted by daily states. We hope that this work will provide the foundation for future research to explore other physical and affective states that impact individuals’ intentions to pursue more workplace status and responsibility. We further contribute to the literature on gender differences in leadership attainment and aspirations. Research, including the current work, has shown that men occupy higher-ranking organizational positions and exhibit greater leadership aspirations, relative to women (Bloch et al., 2021 ; Catalyst, 2021 ; Netchaeva et al., 2022 ). Our findings provide some insight into one possible reason why women are less likely to pursue and, subsequently, attain leadership positions, relative to men. That is, our research suggests that women’s intentions to strive toward enhanced status and responsibility are uniquely sensitive to daily events and the emotional states they elicit.
Specifically, women’s intentions to pursue more status and responsibility appear to be susceptible to diminishment (vs. enhancement) on days during which they experience a less (vs. more) positive mood, as a function of sleep quality. In uncovering this phenomenon, our work contributes to an evolving body of literature indicating that career aspirations are impacted by situational factors in meaningfully different ways for men and women (Fritz & van Knippenberg, 2017 ; Hoobler et al., 2014 ; Joo et al., 2018 ). Moreover, our work contributes to emergent research elucidating the role of emotions in understanding the gender gap in leader emergence and leaders’ workplace outcomes (e.g., Adams & Webster, 2022 ; Richard et al., 2022 ). For example, a recent study showed that women report feeling more frustrated and tense at work relative to men, and that this difference is exacerbated among individuals holding higher occupational rank (Taylor et al., 2022 ). Certainly, more research is needed to understand gender differences in emotional experiences at work, as these have clear consequences for career aspirations and achievement. Finally, our research contributes to a growing body of research on the effects of sleep quality on work-relevant outcomes. Sleep has been shown to have significant consequences for a variety of work outcomes, and, fortunately, is amenable to improvement with some effort from organizational leaders (e.g., by allowing schedule flexibility, encouraging unplugging in the evenings; Litwiller et al., 2017 ; Perlow, 2012 ). While previous research has considered the impact of sleep on ethical conduct and leadership outcomes (e.g., Barnes et al., 2011 ; Barnes et al., 2015 ), we know relatively less about how sleep quality relates to personal striving, which can have meaningful implications for organizations. The current research is the first to suggest that sleep has consequences for women’s intentions to pursue more status and responsibility at work. Limitations and Future Research Directions A strength of the current research was our use of experience sampling methodology, which can remove between-person confounds (e.g., individual differences) and strengthen the causal inference of our model. Also, this design allowed us to assess the proximal antecedents of aspirations and capture the intrapersonal nature of this phenomenon. Nevertheless, our work was limited in that all our measures were of a self-report nature, though common method bias for Level 1 associations was mitigated by person-mean centering. Future research might replicate this research using physiological assessments or spousal reports of sleep quality. Next, though our findings are suggestive of the potential importance of emotion regulation for maintaining the daily aspirations of women, confirming emotion regulation as a mechanism facilitating the influence of positive affect on daily aspirations was beyond the scope of the current research. As such, future research should investigate this possible mechanism, and might further examine whether interventions such as emotion regulation training can mitigate the impact of mood on daily aspirations (LeBlanc et al., 2019 ). 
In developing our hypothesis regarding the interaction between positive affect and participant gender, we proposed that men would be more likely to regulate their emotions to maintain their daily aspirations even on days with low positive affect, given that they have been socialized to temper their emotions and persist in the face of setbacks (Eagly, 1987 ). As such, we predicted that it would be women who would be more susceptible to experiencing a decline in their daily aspirations on days marked by low positive affect. However, an alternative explanation that we could not rule out in the current work is that women hold themselves to a higher standard of performance and/or have less leadership efficacy than men, and so on days during which poor sleep quality and affect interfere with their work performance, they doubt their capacity to take on more responsibility at work. This decreased confidence, in turn, could express itself as reduced reported interest in pursuing more status and responsibility at work. Another alternative explanation is that our findings were shaped by social desirability bias. That is, men might have truly experienced decreases in their daily aspirations as a result of low positive affect but were simply less likely to report this due to social desirability bias. Indeed, men are expected to be ambitious (e.g., Prentice & Carranza, 2002 ) and therefore might be particularly susceptible to socially desirable responding in this domain. Future research should explore these potential alternative explanations. We should note that it was beyond the scope of the current study to link daily aspirations to specific, goal-directed behaviors aimed at obtaining more status and responsibility at work (e.g., asking for a promotion), or actual leader emergence, though the desire for more status and responsibility at work is certainly an antecedent to advancement (Luria & Berson, 2013 ). Moreover, our selected methodology of experience sampling might not have lent itself to effectively detecting the behavioral outcomes of aspirations, which likely unfold over a longer timeline than can be assessed with daily measurement over the course of two work weeks. Nevertheless, future research should investigate whether sleep and mood impact workplace advancement via career aspirations. Finally, in the current work we did not explore variables that operate at the intersection of work and family to influence the impact of sleep quality and mood on downstream consequences. Future research should explore the impact of factors such as perceived organizational support, flexible work arrangements, and work-family balance/conflict on the associations between sleep quality, mood, and career aspirations, given that many professional women report taking on a disproportionate share of household labor (Dush et al., 2018 ). Though we did not find that sleep quality was related to participant gender, an individual’s ability to mitigate the impact of a negative mood and stress emerging from a night of poor sleep might be shaped by other stressors that do differ meaningfully for women and men. Future research should explore this possibility, as understanding the different pathways through which women and men aspire to attaining more status at work might precede interventions that close the gender gap in leader emergence.
Practice Implications Our findings suggest that organizational leaders should be especially cognizant of the impact of daily affect on aspirations for women, as this could have implications for organizational efforts to increase the representation of women in leadership roles. Women who are noticed early on by management for their interest in or aptitude for leadership might be offered additional resources and support to bolster their cognitive and emotional resources against the ego-depleting impact of daily disruptions to sleep and mood. To tap the potential of all workers, managers should offer resources that enable better sleep (e.g., flexible work schedules, on-site exercise facilities), and model or encourage practices that reduce sleep deprivation among employees, such as advising against after-hours work emails and limiting excessive work demands (Lanaj et al., 2014 ; Syrek & Antoni, 2014 ). Emotionally intelligent leaders can also attempt to cultivate positive emotional states among employees, first by regulating their own emotions and then by helping employees to acknowledge, understand, and regulate their emotions at work (Sy et al., 2005 ). Our work has further practical applications for individuals who wish to better understand how their own physical and affective states impact their daily motivation and striving toward career goals, as well as the mental health professionals who support them in this endeavor. Indeed, the insights we provide with this work might highlight the importance of self-care practices for ambitious individuals who might otherwise be inclined to neglect daily habits that bolster both sleep quality and mood, such as exercise and a healthy diet (Ingram et al., 2020 ). Research also demonstrates the effectiveness of cognitive behavioral therapy for enhancing sleep quality and mood (Totterdell & Kellett, 2008 ), which involves simple and effective methods of cognitive reframing and retraining that can be practiced daily. Conclusion Taken together, the findings emerging from the current research suggest that, among women, better quality sleep leads to a better mood which, in turn, relates to greater intentions to pursue more status and responsibility at work. Our findings provide potential insights into the gender gap in leader emergence and contribute to a burgeoning literature investigating the work outcomes of sleep quality. As interruptions to sleep are likely to be exacerbated with technology-driven extensions to the workday, we hope that this work inspires future research aimed at better understanding the impact of sleep quality and emotional states on career aspirations, particularly among women. Data, code, and materials are available from the corresponding author upon reasonable request
If women want to lean in to work, they may first want to lie down for a good night's rest. A Washington State University-led study indicated that sleep quality impacted women's mood and changed how they felt about advancing in their careers. Meanwhile, men's aspirations were not impacted by sleep quality. The researchers discovered this finding in a two-week-long survey study of 135 workers in the U.S. Each day the participants first noted how well they had slept and the quality of their current mood, and then later in the day how they felt about striving for more status and responsibility at work. "When women are getting a good night's sleep and their mood is boosted, they are more likely to be oriented in their daily intentions toward achieving status and responsibility at work," said lead author Leah Sheppard, an associate professor in WSU's Carson College of Business. "If their sleep is poor and reduces their positive mood, then we saw that they were less oriented toward those goals." For the study published in the journal Sex Roles, Sheppard and co-authors Julie Kmec of WSU and Teng Iat Loi of University of Minnesota-Duluth surveyed full-time employees twice a day for two consecutive work weeks for a total of more than 2,200 observations. The participants answered questions about their previous night's sleep and current mood around noon every day and in the evenings answered questions about their intentions to pursue more responsibility, status, and influence at work. Both men and women reported good and bad sleep quality over the course of the study, notably with no gender difference in reported sleep quality. However, women more often reported lowered intentions to pursue more status at work on days following a night of poor sleep. The researchers can only speculate about exactly why sleep's impact on mood affects women's aspirations and not men's, but they suspect it may have to do with gender differences in emotion regulation as well as societal expectations—or some combination of these forces. Neuroscience research has shown that women tend to experience greater emotional reactivity and less emotion regulation than men, and this can be reinforced by cultural stereotypes of women as more emotional. At the same time, stereotypes of men as being more ambitious than women likely add more pressure for them to scale the corporate ladder, so perhaps poor sleep quality would be less likely to deter men from their work aspirations. These findings hold some good news for women who want to advance their careers, though, Sheppard said. For instance, they might take some practical steps to support their work aspirations, ranging from practicing meditation to help with both sleep and emotion regulation to putting better boundaries on work hours—and of course, simply striving to get better sleep. "It's important to be able to connect aspirations to something happening outside the work environment that is controllable," she said. "There are lots of things that anyone can do to have a better night's sleep and regulate mood in general."
10.1007/s11199-022-01321-1
Space
Mars' oceans formed early, possibly aided by massive volcanic eruptions
Robert I. Citron et al, Timing of oceans on Mars from shoreline deformation, Nature (2018). DOI: 10.1038/nature26144 Journal information: Nature
http://dx.doi.org/10.1038/nature26144
https://phys.org/news/2018-03-mars-oceans-early-possibly-aided.html
Abstract Widespread evidence points to the existence of an ancient Martian ocean 1 , 2 , 3 , 4 , 5 , 6 , 7 , 8 . Most compelling are the putative ancient shorelines in the northern plains 2 , 7 . However, these shorelines fail to follow an equipotential surface, and this has been used to challenge the notion that they formed via an early ocean 9 and hence to question the existence of such an ocean. The shorelines’ deviation from a constant elevation can be explained by true polar wander occurring after the formation of Tharsis 10 , a volcanic province that dominates the gravity and topography of Mars. However, surface loading from the oceans can drive polar wander only if Tharsis formed far from the equator 10 , and most evidence indicates that Tharsis formed near the equator 11 , 12 , 13 , 14 , 15 , meaning that there is no current explanation for the shorelines’ deviation from an equipotential that is consistent with our geophysical understanding of Mars. Here we show that variations in shoreline topography can be explained by deformation caused by the emplacement of Tharsis. We find that the shorelines must have formed before and during the emplacement of Tharsis, instead of afterwards, as previously assumed. Our results imply that oceans on Mars formed early, concurrent with the valley networks 15 , and point to a close relationship between the evolution of oceans on Mars and the initiation and decline of Tharsis volcanism, with broad implications for the geology, hydrological cycle and climate of early Mars. Main Distinct geological boundaries (contacts) lining the northern plains of Mars for thousands of kilometres have been interpreted as palaeo-shorelines and evidence of an early ocean 2 , 3 , 4 , 6 , 7 . However, observed long-wavelength deviations (by up to several kilometres) in shoreline elevation from an equipotential have been used as an argument against the emplacement of the contacts by a body of liquid water, the interpretation of the features as shorelines, and the existence of a Martian ocean 9 . Perron et al . 10 showed that the elevation changes of two extensive contacts, Arabia (contact 1) and Deuteronilus (contact 2), can be explained by deformation due to 30°–60° and 5°–25° of post-Tharsis true polar wander (TPW), respectively, because a varying rotation pole also changes the orientation of a planet’s equatorial bulge, or polar flattening, altering equipotential surfaces (such as sea levels) globally. Such large magnitudes of TPW can be driven by ocean loading/unloading, but only if Tharsis formed far from the equator 10 . If Tharsis formed near the equator, then the remnant fossil bulge would have prevented ocean loading from causing large amounts of post-Tharsis TPW (see Extended Data Fig. 1 ). Most evidence points to the formation of Tharsis near the equator 11 , 12 , 13 , 14 , 15 . Mars’ remnant rotational figure (fossil bulge) is close to the equator, indicating a palaeopole of (259.5 ± 49.5° E, 71.1° N), the likely pre-Tharsis orientation of Mars 14 . The pre-Tharsis palaeopole also matches the likely orientation of Mars during valley network formation 15 . Formation of Tharsis probably drove only limited (approximately 20°) TPW to reach Mars’ current configuration, which precludes the possibility that surface loads drove sufficient TPW to deform the shorelines 10 , 16 .
We propose that the Arabia shoreline instead formed before or during the early stages of Tharsis emplacement, which initiated > 3.7 billion years (Gyr) ago 17 when the rotation pole of Mars was at the palaeopole (259.5° E, 71.1° N) corresponding to the fossil bulge 14 . The Arabia shoreline, potentially emplaced at least 4 Gyr ago 6 , would have been modified by both topographic changes from Tharsis (which dominates Mars’ topography and gravity on a global scale; see Extended Data Fig. 2 ), and the approximately 20° of Tharsis-induced TPW. The Deuteronilus shoreline, which differs less from a present-day equipotential than the older Arabia shoreline, is dated to about 3.6 Gyr ago 18 , after most of Tharsis was emplaced. However, Tharsis had complex and multi-stage growth that extended into the Hesperian and Amazonian 17 , 19 , meaning that the Deuteronilus shoreline would have been deformed by the late stages of Tharsis’ emplacement. We examine a chronology in which shoreline deformation is due mainly to Tharsis ( Table 1 ), and compare expected deformation due to Tharsis with the elevation profiles of the Arabia and Deuteronilus contacts. Table 1 Possible evolution of Martian shorelines Assuming the Arabia shoreline formed before Tharsis, and the Deuteronilus shoreline formed after most of Tharsis was emplaced, we compare the best fits for the deformation expected from Tharsis to the current topography of the shorelines, including an offset factor Z to represent sea level at the time of shoreline formation. We also examine the Isidis shoreline, which formed 100 million years (Myr) after Deuteronilus 18 . For the Arabia shoreline emplaced before Tharsis, deformation is expressed as the contribution of Tharsis to Mars’ topography along the shoreline, and the change in topography from limited Tharsis-induced TPW. For the Deuteronilus and Isidis shorelines emplaced during the late stages of Tharsis growth, deformation is taken as the percentage of Tharsis’ contribution to topography occurring after the shorelines formed, and no contribution from TPW (because reorientation should occur within tens of thousands of years to a few million years after the Tharsis plume reaches the surface 20 , much less than the 100 Myr or more that lies between Tharsis initiation and Deuteronilus formation). See Methods for more details. We show that the Arabia shoreline’s deviations from an equipotential can be explained almost entirely by deformation due to Tharsis emplacement ( Fig. 1 ). Our best fit (equation (3) with Z = −2.3 km) yields a root-mean-square misfit σ rms of 0.615 km, comparable to the error values from Perron et al . 10 , and follows the slope of the shoreline data better from 1,000 km to 6,600 km. The limited Tharsis-induced TPW has a negligible effect. A slightly lower σ rms is obtained if only 80% of Tharsis topography was emplaced after the Arabia shoreline formed ( Extended Data Fig. 3 ). However, the difference between the fits using 80% or 100% of Tharsis’ topography is negligible considering the scatter in the shoreline data. Our model therefore suggests that the Arabia shoreline formed before or during the early stages of Tharsis’ growth. Figure 1: Comparison of Arabia shoreline topography to shoreline deformation models. a , Change in topography ∆ T caused by TPW of 20° (equation (1)) and Tharsis uplift (equation (2)), illustrating that the latter is much more important. b , Current topography of the Arabia shoreline from Perron et al . 10 (data originally from ref.
7 ) compared to the Perron et al . 10 model of deformation due to post-Tharsis TPW (with T e = 200 km) and our model of deformation due to Tharsis emplacement and induced TPW (∆ T Tharsis + ∆ T TPW − 2.3 km). The starting point for the shoreline is (24.91° W, 13.48° N). The Deuteronilus shoreline’s deviations from an equipotential can be explained by deformation due to the emplacement of about 17% of Tharsis topography ( Fig. 2 ), indicating that the shoreline formed during the late stages of Tharsis’ growth. Our best fit (equation (4) with C = 0.17 and Z = −3.68 km) yields σ rms = 0.110 km. Our fit successfully recovers the low elevation of the Phlegra contact, and also captures the decrease in elevation across Utopia and Elysium West. Neither our model nor the Perron et al . 10 model captures the full elevation increase of the Tantalus segment, which may result from the topographic bulge from nearby Alba Patera 18 . For the Isidis shoreline, subsequent loading of the Utopia basin is also required to explain the shoreline’s topography (see Extended Data Fig. 4 and Methods). Figure 2: Comparison of Deuteronilus shoreline topography to shoreline deformation models. Current Deuteronilus topography (data and contact names from ref. 18 ) compared to the Perron et al . 10 model and our model of deformation due to partial Tharsis emplacement (0.17∆ T Tharsis − 3.68 km). The starting point for the shoreline is (96.40° W, 63.69° N). The relation between the shorelines and global deformation due to Tharsis and its associated TPW is illustrated in Fig. 3a–c (also see Extended Data Fig. 2 ). We estimate the volume of water that filled the northern plains to the Deuteronilus and Arabia shorelines by subtracting the relevant Tharsis and TPW contributions from Mars’ topography (0.25° per pixel gridded MOLA data 21 ) and filling the lowlands to shoreline elevation ( Fig. 3d–f ). We estimate a Deuteronilus ocean volume of about 1.2 × 10 7 km 3 , and an Arabia ocean volume of about 4.1 × 10 7 km 3 . These are lower limits because we do not remove excess terrain, such as Elysium, polar deposits, lava/sediment basin deposits, and short-wavelength Tharsis topography (that is, variations in Tharsis topography that occur over short length scales). For the Arabia ocean, use of a map of Mars with excess terrain removed 15 yields an ocean volume of about 5.5 × 10 7 km 3 . The ocean volumes we compute are slightly lower than previous estimates 22 because the Tharsis topography we subtract is negative in much of the area enclosed by the northern ocean basin. Figure 3: Shoreline locations relative to current topography, deformation due to Tharsis/TPW, and computed ocean extents. a , MOLA topography. b , Tharsis contribution to topography (equation (2)). c , Deformation due to approximately 20° TPW from the palaeopole corresponding to the fossil bulge 14 with T e = 58 km (equation (1)). d , Ocean basin filled to the Deuteronilus shoreline, with the topography of Mars at the time of the Deuteronilus shoreline’s formation (MOLA topography minus 17% of Tharsis topography). e , Ocean basin filled to the Arabia shoreline, with the topography of Mars at the time of the Arabia shoreline’s formation (MOLA topography minus Tharsis topography and deformation due to TPW). Short-wavelength remnants of Tharsis are visible because Tharsis topography is only modelled up to degree-5.
f, The 'Mars without Tharsis' map 15 , which is similar to e but with short-wavelength Tharsis, Elysium, and polar topography removed, filled to the Arabia shoreline. Shorelines are plotted for Deuteronilus (white), Arabia (magenta) and Isidis (cyan), and colour scales denote elevation (or changes in elevation) in kilometres. Short-wavelength deviations in shoreline elevation from our model may be due to our assumptions that both the lithospheric thickness and the rate of Tharsis emplacement are spatially uniform. Spatial variations in lithospheric thickness 23 would allow for non-uniform responses to phenomena such as TPW 10 , plate flexure and dynamic topography from the Tharsis plume 24 . Spatially variable Tharsis emplacement could also affect shoreline modification. Another consideration is ocean loading, but the computed effect on shoreline elevations is small (see Extended Data Fig. 5 and Methods). Several other short- and long-wavelength processes could have deformed the shorelines in the 3.5 Gyr or more since their emplacement, including dynamic topography from mantle plumes 24 , lithospheric deformation 22 , 25 , glacial erosion 26 and plate flexure from loading/unloading. For example, loading of the Utopia basin may have tilted Isidis (see Methods) and deformed sections of the Deuteronilus shoreline 18 . Other loads that post-date shoreline formation include Elysium, the polar deposits, and sections of Tharsis. Such loads could also induce a small amount (<2°) of post-Tharsis TPW 16 . Plate flexure associated with impact basins could also deform shorelines. While basins of over 1,000 km in diameter pre-date the Deuteronilus shoreline, some basins may be coincident with, or post-date, the Arabia shoreline. Short-wavelength deformation may also be a consequence of the difficulty in identifying the shorelines themselves 18 . Increased accuracy in shoreline identification and dating 18 can help to reconstruct the history of shoreline formation and modification. Several potential shorelines 6 , such as the Ismenius, Acidalia, Elysium, Aeolis and Meridiani contacts, have remained relatively unexamined owing to their high degree of discontinuity 7 . Shorelines may also be mismapped; for example, portions of the Meridiani shoreline may be part of the Arabia shoreline 22 . A re-evaluation of shorelines with full consideration of the various deformation processes may enable the development of a chronology of oceans on Mars. In particular, the Meridiani shoreline 6 , 27 may pre-date the Arabia shoreline and have contained a larger volume of water 22 . Accurate dating of the Arabia shoreline is necessary to determine whether the shoreline formed before or during the early stages of Tharsis' growth. Formation of the Arabia shoreline after some limited early Tharsis growth is suggested by Arabia segments that border Acheron Fossae and Tempe Terra 6 , two of the oldest Tharsis units, which are located well north of the expected pre-Tharsis crustal dichotomy boundary (the stark difference in elevation and crustal thickness between the northern lowlands and southern highlands). However, it is possible that the Acheron Fossae and Tempe Terra contacts were misidentified as belonging to the Arabia shoreline, or that the Arabia shoreline initially followed the pre-Tharsis dichotomy boundary, and formed the Tempe Terra and Acheron Fossae contacts only after early Tharsis uplift and deposition.
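The fill-to-shoreline volume estimates above reduce to integrating the water column over all grid cells lying below a chosen sea level. The following is a minimal Python sketch of that calculation, not the authors' code: the grid shape, the toy topography and the function name are illustrative assumptions, and a faithful reproduction would also restrict the fill to the connected northern basin and remove excess terrain, as described in the text.

```python
import numpy as np

R_MARS_KM = 3389.5  # mean radius of Mars in km

def ocean_volume_km3(topo_km, sea_level_km):
    """Integrate water depth over a regular lat-lon topography grid (km)."""
    n_lat, n_lon = topo_km.shape
    lat = np.linspace(90.0, -90.0, n_lat)      # pixel-centre latitudes
    dlat = np.pi / n_lat                       # pixel height in radians
    dlon = 2.0 * np.pi / n_lon                 # pixel width in radians
    # spherical pixel areas, broadcast across all longitudes
    area = (R_MARS_KM ** 2) * dlat * dlon * np.cos(np.radians(lat))[:, None]
    depth = np.clip(sea_level_km - topo_km, 0.0, None)  # water column per cell
    return float(np.sum(depth * area))

# Toy usage; the real input would be the 0.25-degree gridded MOLA data with
# the modelled Tharsis (and TPW) contributions subtracted.
toy_topo = np.random.default_rng(0).normal(0.0, 2.0, size=(720, 1440))
print(f"{ocean_volume_km3(toy_topo, -2.3):.3e} km^3")
```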
The decline in ocean volume from the pre- or early-Tharsis Arabia shoreline to the late-Tharsis Deuteronilus shoreline suggests that Tharsis volcanism may have played a critical part in the evolution of a Martian ocean. After Tharsis was mostly emplaced, by about 3.6 Gyr ago, only short-lived lakes may have been stable 28 , although a Late Hesperian/Early Amazonian ocean has also been suggested on the basis of tsunami evidence 8 , 29 . Outgassing from Tharsis could have contributed to either heating 30 or cooling 31 the planet, both of which could produce the decrease in ocean volume from the Arabia shoreline to the Deuteronilus shoreline as Tharsis activity declined. Either a large ocean was in place before Tharsis volcanism initiated, and shrank as Tharsis volcanism cooled the planet, or an ocean arose as a result of heating caused by Tharsis outgassing and decreased in volume as Tharsis volcanism declined. It is also possible that each shoreline represents the transient warming of an otherwise frozen ocean or glacial state 30 , producing a liquid ocean in periods of heightened Tharsis activity (which, owing to enhanced surface heat flux, may also have resulted in catastrophic circum-Tharsis groundwater discharge 1 ). If episodic warming was sufficient to melt most of Mars’ glaciers, the decrease in ocean volume may record a declining surface water budget. Although geochemical evidence for a northern ocean is ambiguous 32 , an ocean supported by the degassing of sulfur from Tharsis could explain the lack of widespread carbonate deposits observed in the northern plains 33 . The evolution of water on Mars is critical to understanding the past climate and habitability of the planet. Although shorelines on Mars have provided compelling evidence for a Martian ocean, to explain their deviations from an equipotential has been a challenge. We show that the topography of Martian shorelines can be quantitatively explained by deformation due to the emplacement of Tharsis and resulting TPW (in the case of the Arabia shoreline) or by the latter stages of Tharsis emplacement (in the case of the Deuteronilus shoreline). Formation of the Arabia shoreline before (or during the early stages of) Tharsis emplacement suggests that the Arabia ocean was concurrent with valley network incision 15 , which probably occurred as part of a globally active hydrosphere capable of supporting such an ocean 5 . The consistency between the topography of the Martian shorelines, their ages, and the chronology of topographic changes due to Tharsis emplacement and associated TPW, suggests that the Arabia and Deuteronilus contacts are evidence that Martian oceans existed, and may have been linked to Tharsis volcanism. Methods Arabia shoreline (pre- or early-Tharsis formation) We assume deformation of the Arabia shoreline since its formation is due to global changes in topography resulting from Tharsis’ formation (emplacement and loading) and the approximately 20° of Tharsis-induced TPW ( Table 1 ). The topographic response to TPW is given by the change in the flattening of the planet caused by the difference between the centrifugal potential at the initial and final rotation poles 10 . 
For a shoreline in place before TPW occurs, the deformation of the shoreline topography due to TPW 10 is: ΔT_TPW(θ, φ) = (a²ω²/3g)(1 + k₂ − h₂)[P₂,₀(cos γ) − P₂,₀(cos θ)] (1), where a is the mean planetary radius, ω is the rotation rate, g is the surface gravity, γ is the angular distance between a given current colatitude and longitude (θ, φ) and the palaeopole, and h₂ and k₂ are the secular (fluid-limit) degree-2 Love numbers that depend on the density and elastic structure of Mars. The unnormalized degree-2 Legendre polynomial is P₂,₀(cos η) = (3cos²η − 1)/2. The change in topography due to the emplacement of Tharsis and its associated loading is: ΔT_Tharsis = S_Tharsis − N_Tharsis (2), where S_Tharsis and N_Tharsis are Tharsis' contributions to the shape and geoid of Mars, respectively. We use gravity and shape coefficients for Tharsis up to degree-5 from Matsuyama and Manga 14 . The current topography of the Arabia shoreline should therefore follow the deformation profile ΔT_A, given by: ΔT_A = ΔT_Tharsis + ΔT_TPW + Z (3), where Z is a constant to adjust for the sea level at the time of the shoreline's emplacement. We minimize the least-squares misfit (σ_rms) between equation (3) and the shoreline elevation data for the Arabia contact examined in ref. 10 (data originally from ref. 7 ). We assume a fixed palaeopole (259.5° E, 71.1° N), corresponding to the fossil bulge 14 . We use an elastic lithosphere thickness T_e = 58 km, the expected value at the time of Tharsis' emplacement 14 , corresponding to h₂ = 2.0 and k₂ = 1.1 15 . We also test whether the Arabia shoreline can be explained by deformation due to only a certain percentage of Tharsis' emplacement and associated loading, by multiplying ΔT_Tharsis in equation (3) by a factor C, corresponding to the percentage of Tharsis topography emplaced after shoreline formation (see Extended Data Fig. 3). Deuteronilus shoreline (late-stage Tharsis formation) The Deuteronilus shoreline post-dates the initiation of Tharsis by >100 Myr, and therefore probably formed after a large portion of Tharsis was emplaced. The shoreline also probably post-dates most Tharsis-induced TPW, which should have occurred within a few million years of load emplacement 20 . Estimates of load-driven TPW on Mars suggest timescales of less than 10 Myr 34 , 35 , 36 , well within the required pre-Deuteronilus timescale. Although a fraction of the 20° of Tharsis-induced TPW may be due to relaxation of the lithosphere and occur on longer timescales 37 , this should have a negligible effect given the small influence of TPW on shoreline deformation (Fig. 1a). Accordingly, we assume that deformation of the Deuteronilus shoreline since its formation is due to the topographic response of Mars to only the late stages of Tharsis' emplacement and associated loading. The current topography of the Deuteronilus shoreline should therefore follow the deformation profile ΔT_D, given by: ΔT_D = C ΔT_Tharsis + Z (4), where ΔT_Tharsis is the deformation due to Tharsis (equation (2)), C is a constant representing how much of Tharsis formed after Deuteronilus was formed, and Z is a constant to adjust for sea level at the time of the shoreline's formation. We minimize the misfit between equation (4) and the Deuteronilus shoreline elevation data 18 , to determine the optimal amount of Tharsis topography that should post-date the shoreline's formation. Isidis shoreline (late-stage Tharsis formation) Because the Isidis shoreline is 100 Myr younger than Deuteronilus, we assume a similar deformation profile and compare equation (4) to the Isidis shoreline data 18 .
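As a concreteness check, the fits in equations (1), (3) and (4) can be written in a few lines of Python. This is a hedged sketch rather than the authors' code: the shoreline inputs (colat, gamma, dT_tharsis, elev) are assumed arrays, the sign convention of equation (1) follows the reconstruction above, and the values of ω and g are standard Mars parameters rather than numbers taken from the paper.

```python
import numpy as np

def p20(x):
    # unnormalized degree-2 Legendre polynomial P_2,0
    return 0.5 * (3.0 * x ** 2 - 1.0)

def dT_tpw_km(colat, gamma, a_m=3.3895e6, omega=7.088e-5, g=3.71,
              h2=2.0, k2=1.1):
    """Equation (1): TPW deformation in km at points with current colatitude
    `colat` and angular distance `gamma` to the palaeopole (both radians)."""
    amp_m = (a_m ** 2 * omega ** 2) / (3.0 * g) * (1.0 + k2 - h2)
    return amp_m * (p20(np.cos(gamma)) - p20(np.cos(colat))) / 1e3

def fit_arabia(elev, dT_tharsis, dT_tpw):
    """Equation (3): elev ~ dT_Tharsis + dT_TPW + Z, with only Z free."""
    Z = np.mean(elev - dT_tharsis - dT_tpw)
    resid = elev - (dT_tharsis + dT_tpw + Z)
    return Z, np.sqrt(np.mean(resid ** 2))          # Z and sigma_rms

def fit_deuteronilus(elev, dT_tharsis):
    """Equation (4): elev ~ C*dT_Tharsis + Z, solved by linear least squares."""
    A = np.column_stack([dT_tharsis, np.ones_like(dT_tharsis)])
    (C, Z), *_ = np.linalg.lstsq(A, elev, rcond=None)
    resid = elev - A @ np.array([C, Z])
    return C, Z, np.sqrt(np.mean(resid ** 2))       # C, Z and sigma_rms
```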
We use the same value of C optimized for the Deuteronilus shoreline, but allow Z to vary, reflecting that sea level could change in the 100 Myr between the formation of the Deuteronilus contact and the Isidis contact, whereas deformation from Tharsis topography should not change substantially. Although C should be slightly lower for Isidis, optimizing for C would result in an unrealistic C = 0, because our model predicts that deformation due to Tharsis would have tilted Isidis opposite to its present tilt (Extended Data Fig. 4). While this appears contradictory, the mismatch is possible if Isidis was tilted to its present orientation by loading of the Utopia basin. The Utopia basin has a large positive gravity anomaly 38 , 39 , indicating about 18 km of excess fill 40 . Such a load would have caused elastic plate flexure and a peripheral bulge, which could have tilted the Isidis basin. Using a plate flexure model, McGowan and McGill 41 show that loading of Utopia could have tilted Isidis to an even greater extent than currently observed. Therefore, some amount of reverse tilting (as our model predicts) is possible. The timing of Utopia loading relative to the subsequent Tharsis deformation is irrelevant, provided that Utopia loading occurred after the Isidis shoreline formed. We expect loading of Utopia to occur after Isidis shoreline formation because a shrinking Martian ocean would evaporate from the Utopia basin last, depositing the non-volatile component of the ocean there. Additionally, if the ocean became cold and glacial during its decline 42 , then receding glaciers may also have loaded Utopia with excess sediment. The deposits in the base of the Utopia basin date to the early Amazonian (<3–3.46 Gyr ago) 43 , 44 , after the emplacement of the Isidis shoreline. The eastern portion of Utopia also contains volcanic deposits from Elysium that date to the Amazonian 44 , which could also contribute to loading. While loading from the ocean itself is expected to produce some plate flexure, it is not sufficient to explain the tilt of the Isidis basin (see Extended Data Fig. 5c), and water loading/unloading of Utopia is also insufficient to explain Isidis' tilt 41 . Therefore, deposition of material from a receding liquid, muddy or frozen ocean may explain the tilt of the Isidis basin, even if some reverse tilting is caused by deformation due to Tharsis. Effect of elastic lithosphere thickness The gravity and shape coefficients we use to subtract Tharsis topography are based on an assumed T_e = 58 km, the expected value at the time of Tharsis loading 14 . However, the estimate of T_e carries a 90% confidence interval with a minimum and maximum of 26 km and 92 km, respectively 14 . A thinner or thicker T_e would alter the deformation due to Tharsis (and TPW) because the Love numbers used to compute Mars' deformation would change. To estimate the effect of T_e = 26 km or 92 km on the deformation due to Tharsis, we recompute Tharsis' gravity and shape coefficients following the method of ref. 14 .
Using a fixed Tharsis centre location (258.6° E, 9.8° N), Matsuyama and Manga 14 compute the degree-2 gravity coefficients of Tharsis using a minimization technique with four unconstrained model parameters (T_e, non-dimensional Tharsis load Q, palaeopole colatitude θ_R and palaeopole longitude φ_R), where the palaeopole corresponds to the axis of rotation when the fossil (remnant) bulge was formed. This results in probability density functions for each unconstrained parameter, with the weighted averages (expected values) used to compute the gravity and shape coefficients. We redo this analysis, as described in section 5 of ref. 14 , but with T_e treated as a constrained parameter. This allows us to estimate the expected values of Q, θ_R and φ_R for a given value of T_e. We find that for T_e = 26 km, the expected values are Q = 3.95, θ_R = 17.9° and φ_R = 259.1°; for T_e = 92 km, they are Q = 1.57, θ_R = 14.2° and φ_R = 259.3°. Tharsis' degree-2 gravity coefficients are recomputed using these values. The degree-3 to -5 gravity coefficients of Tharsis are computed from minimization against the observed degree-3 to -5 gravity coefficients, and are therefore not dependent on T_e. Shape coefficients for Tharsis are computed up to degree 5 following section 7 of ref. 14 . We compute the load Love numbers using the ALMA code 45 , with a five-layer model as described in ref. 14 . We construct new best-fit deformation profiles for T_e = 26 km and 92 km, using the corresponding Tharsis gravity and shape coefficients that we computed for each T_e. The best-fit profiles for T_e = 26 km and 92 km are compared with the nominal T_e = 58 km profiles in Extended Data Fig. 3. All best-fit profiles are relatively similar, showing that changes in T_e do not substantially affect our conclusions. Effect of plate flexure Although Perron et al. 10 found that plate flexure due to loading of the ocean basin should not substantially affect the shoreline elevations, their analysis was for T_e = 200 km, whereas we use T_e = 58 km. The ocean basin resulting from our analysis also has less volume and a different shape, because we subtract Tharsis topography, which has a negative component in much of the Borealis basin. To compute plate flexure due to ocean loading, we expand the surface density of the ocean load in spherical harmonics and compute the associated displacement using the method described in ref. 46 . For the Arabia ocean, the ocean load is computed by subtracting the pre-Tharsis topography of Mars from the best-fit Arabia ocean elevation (Z = −2.3 km). The pre-Tharsis Martian topography is computed by subtracting the deformation due to Tharsis and TPW, equations (2) and (1), from Mars' current topography (0.25° per pixel gridded MOLA data 21 ). For the ocean levels corresponding to the Deuteronilus and Isidis shorelines, only 17% of the deformation due to Tharsis was subtracted from Mars' current topography, and the ocean elevation Z was set to −3.68 km and −3.95 km, respectively. We use a Young's modulus of 70 GPa, a Poisson ratio of 0.25, and an assumed value of T_e = 58 km. The loaded shoreline topography is compared to the unloaded topography in Extended Data Fig. 5. We compute a maximum magnitude of deflection of 134 m, 84 m and 57 m for the Arabia, Deuteronilus and Isidis shorelines, respectively. The mean magnitude of deflection is 35 m for the Arabia shoreline and 17 m for the Deuteronilus and Isidis shorelines. Deformation of the shorelines due to unloading of the ocean basin is negligible.
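The ocean-load flexure calculation follows the method of ref. 46, which we do not reproduce here; the sketch below only illustrates the generic pipeline of expanding a load in spherical harmonics and applying a degree-dependent response. The transfer function toy_response is an explicit placeholder, not the real flexural response (which depends on T_e, Young's modulus and Poisson's ratio), and load_clm is assumed to be a coefficient array produced elsewhere (for example with pyshtools).

```python
import numpy as np

def apply_degree_filter(load_clm, response):
    """Scale each spherical-harmonic degree l of a (2, lmax+1, lmax+1)
    coefficient array by a degree-dependent response factor."""
    lmax = load_clm.shape[1] - 1
    out = np.zeros_like(load_clm)
    for l in range(lmax + 1):
        out[:, l, :] = response(l) * load_clm[:, l, :]
    return out

# Placeholder response: long wavelengths deflect more than short ones.
def toy_response(l):
    return 1.0 / (1.0 + (l / 10.0) ** 4)

# Toy usage on random coefficients up to degree 90.
load_clm = np.random.default_rng(1).normal(size=(2, 91, 91))
deflection_clm = apply_degree_filter(load_clm, toy_response)
```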
Data availability The data that support the findings of this study are available on request from the corresponding author. Gravity and shape coefficients for Tharsis are included in the Supplementary Information. Shoreline data should be requested from the respective sources.
A new scenario seeking to explain how Mars' putative oceans came and went over the last 4 billion years implies that the oceans formed several hundred million years earlier and were not as deep as once thought. The proposal by geophysicists at the University of California, Berkeley, links the existence of oceans early in Mars history to the rise of the solar system's largest volcanic system, Tharsis, and highlights the key role played by global warming in allowing liquid water to exist on Mars. "Volcanoes may be important in creating the conditions for Mars to be wet," said Michael Manga, a UC Berkeley professor of earth and planetary science and senior author of a paper appearing in Nature this week and posted online March 19. Those claiming that Mars never had oceans of liquid water often point to the fact that estimates of the size of the oceans don't jibe with estimates of how much water could be hidden today as permafrost underground and how much could have escaped into space. These are the main options, given that the polar ice caps don't contain enough water to fill an ocean. The new model proposes that the oceans formed before or at the same time as Mars' largest volcanic feature, Tharsis, instead of after Tharsis formed 3.7 billion years ago. Because Tharsis was smaller at that time, it did not distort the planet as much as it did later, in particular the plains that cover most of the northern hemisphere and are the presumed ancient seabed. The absence of crustal deformation from Tharsis means the seas would have been shallower, holding about half the water of earlier estimates. "The assumption was that Tharsis formed quickly and early, rather than gradually, and that the oceans came later," Manga said. "We're saying that the oceans predate and accompany the lava outpourings that made Tharsis." It's likely, he added, that Tharsis spewed gases into the atmosphere that created a global warming or greenhouse effect that allowed liquid water to exist on the planet, and also that volcanic eruptions created channels that allowed underground water to reach the surface and fill the northern plains. Following the shorelines The model also counters another argument against oceans: that the proposed shorelines are very irregular, varying in height by as much as a kilometer, when they should be level, like shorelines on Earth. This irregularity could be explained if the first ocean, called Arabia, started forming about 4 billion years ago and existed, if intermittently, during as much as the first 20 percent of Tharsis's growth. The growing volcano would have depressed the land and deformed the shoreline over time, which could explain the irregular heights of the Arabia shoreline. Similarly, the irregular shoreline of a subsequent ocean, called Deuteronilus, could be explained if it formed during the last 17 percent of Tharsis's growth, about 3.6 billion years ago. "These shorelines could have been emplaced by a large body of liquid water that existed before and during the emplacement of Tharsis, instead of afterwards," said first author Robert Citron, a UC Berkeley graduate student. Citron will present a paper about the new analysis on March 20 at the annual Lunar and Planetary Science conference in Texas. Tharsis, now a 5,000-kilometer-wide eruptive complex, contains some of the biggest volcanoes in the solar system and dominates the topography of Mars. Earth, twice the diameter and 10 times more massive than Mars, has no equivalent dominating feature. 
Tharsis's bulk creates a bulge on the opposite side of the planet and a depression halfway between. This explains why estimates of the volume of water the northern plains could hold based on today's topography are twice what the new study estimates based on the topography 4 billion years ago. New hypothesis supplants old Manga, who models the internal heat flow of Mars, such as the rising plumes of molten rock that erupt into volcanoes at the surface, tried to explain the irregular shorelines of the plains of Mars 11 years ago with another theory. He and former graduate student Taylor Perron suggested that Tharsis, which was then thought to have originated at far northern latitudes, was so massive that it caused the spin axis of Mars to move several thousand miles south, throwing off the shorelines. Since then, however, others have shown that Tharsis originated only about 20 degrees above the equator, nixing that theory. But Manga and Citron came up with another idea, that the shorelines could have been etched as Tharsis was growing, not afterward. The new theory also can account for the cutting of valley networks by flowing water at around the same time. "This is a hypothesis," Manga emphasized. "But scientists can do more precise dating of Tharsis and the shorelines to see if it holds up." NASA's next Mars lander, the InSight mission (Interior Exploration using Seismic Investigations, Geodesy and Heat Transport), could help answer the question. Scheduled for launch in May, it will place a seismometer on the surface to probe the interior and perhaps find frozen remnants of that ancient ocean, or even liquid water.
10.1038/nature26144
Biology
Structure of a protein complex related with cell survival revealed
Fabrizio Martino et al, RPAP3 provides a flexible scaffold for coupling HSP90 to the human R2TP co-chaperone complex, Nature Communications (2018). DOI: 10.1038/s41467-018-03942-1 Journal information: Nature Communications
http://dx.doi.org/10.1038/s41467-018-03942-1
https://phys.org/news/2018-04-protein-complex-cell-survival-revealed.html
Abstract The R2TP/Prefoldin-like co-chaperone, in concert with HSP90, facilitates assembly and cellular stability of RNA polymerase II, and complexes of PI3-kinase-like kinases such as mTOR. However, the mechanism by which this occurs is poorly understood. Here we use cryo-EM and biochemical studies on the human R2TP core (RUVBL1–RUVBL2–RPAP3–PIH1D1), which reveal the distinctive role of RPAP3, distinguishing metazoan R2TP from the smaller yeast equivalent. RPAP3 spans both faces of a single RUVBL ring, providing an extended scaffold that recruits clients and provides a flexible tether for HSP90. A 3.6 Å cryo-EM structure reveals direct interaction of a C-terminal domain of RPAP3 and the ATPase domain of RUVBL2, necessary for human R2TP assembly but absent from yeast. The mobile TPR domains of RPAP3 map to the opposite face of the ring, associating with PIH1D1, which mediates client protein recruitment. Thus, RPAP3 provides a flexible platform for bringing HSP90 into proximity with diverse client proteins. Introduction The R2TP/Prefoldin-like (R2TP/PFDL) complex collaborates with the HSP90 molecular chaperone to facilitate assembly, activation, and cellular stability of a range of multiprotein complexes, including RNA polymerase II (Pol II), complexes of PI3-kinase-like kinases (PIKKs) such as TOR and SMG1, and small nuclear ribonucleoprotein (snRNP) complexes, amongst others 1 , 2 , 3 , 4 , 5 , 6 , 7 . Yeast R2TP complexes comprise four subunits: the RuvB-like AAA+ ATPases Rvb1p and Rvb2p, a TPR domain-containing protein, Tah1p, and a PIH domain protein, Pih1p. Metazoan R2TP complexes contain the orthologous proteins RUVBL1, RUVBL2, RPAP3, and PIH1D1, respectively. However, whereas the TPR domain-containing component of the yeast R2TP complex is a small (12 kDa) protein, Tah1p, in human R2TP this is a large (75 kDa) multi-domain protein, RPAP3 (RNA polymerase II associated protein 3, or hSPAGH), containing two TPR domains. The C-terminal region of RPAP3 has been annotated as a protein domain (pfam13877), which is also present in other proteins, such as CCDC103 8 , a dynein arm assembly factor that interacts with RUVBL2 9 . In mammals, the R2TP core components associate with additional subunits of the prefoldin (PFDL) module, forming the R2TP/PFDL complex. This PFDL module includes the prefoldin and prefoldin-like proteins PFDN2, PFDN6, URI1, UXT and PDRG1, and it associates with two additional components, the RNA polymerase subunit POLR2E/RPB5 and WDR92/Monad 5 , 10 . R2TP/PFDL also interacts with further proteins that serve as adaptors between R2TP/PFDL and its clients (see later) 5 , 11 , 12 . RPAP3 was first identified and named in a systematic analysis of complexes containing components of the transcription and RNA processing machineries, using protein affinity purification coupled to mass spectrometry 13 . RPAP3 was then found to be a component of the multi-subunit R2TP/PFDL complex 14 . Subsequently, it was found to associate with Pol II subunits and HSP90 when Pol II assembly is blocked by α-amanitin, implicating both RPAP3 and HSP90 in Pol II assembly in the cytoplasm 10 . Pol II subunits RPB1, RPB2, and RPB5 all co-precipitate with RPAP3, but RPAP3 seems to associate independently with RPB1- and RPB5-containing complexes, suggesting the existence of different RPAP3 complexes as intermediates in Pol II assembly. RPAP3 also binds some subunits of RNA Pol I and may therefore play a more general role in the assembly of all RNA polymerases 10 .
The mechanistic details of how RNA Pol II subunits are recruited to R2TP, and how R2TP and HSP90 contribute to Pol II assembly, are poorly understood. Unconventional prefoldin RPB5 Interactor 1 (URI1) interacts with the RPB5/POLR2E subunit of Pol II, suggesting that the PFDL module contributes to recruiting Pol II assembly intermediates to the R2TP/PFDL complex 10 , 15 . Recruitment of PIKK proteins to R2TP is mediated by the phosphopeptide-binding PIH domain at the N-terminus of Pih1p/PIH1D1, which recognizes a specific phosphorylated acidic motif generated by casein kinase 2 (CK2) 2 , 3 , 4 . This motif is conserved in Tel2p/TELO2, a component of the TTT (Tel2p/TELO2–Tti1p/TTI1–Tti2p/TTI2) complex that also interacts directly with PIKKs, thereby bridging their interaction to R2TP. A similar PIH-binding motif is also found in Mre11p/MRE11A, suggesting that R2TP may also play a role in the assembly of MRN complexes involved in DNA double-strand break repair 2 . Neither Pol II nor snRNP subunits contain this motif, and they must therefore be recruited to R2TP through alternative mechanisms. Biogenesis of box C/D snoRNPs requires R2TP and additional factors such as NUFIP1 and the zinc-finger HIT domain proteins ZNHIT3 and ZNHIT6, which have been proposed to function as adaptors between R2TP and the C/D core proteins 12 . Interestingly, ZNHIT2, another protein of the same family, was recently shown to bind RUVBL2 and regulate assembly of the U5 small ribonucleoprotein 5 . ZNHIT2 may function as a bridging factor between the U5 snRNP and R2TP/PFDL, a function to which the Ecdysoneless (ECD) protein could also contribute 5 , 16 . The human ECD homolog interacts with the pre-mRNA-processing-splicing factor 8 (PRPF8) 17 and with R2TP 18 . Phosphorylated ECD interacts with the PIH1D1 subunit, as well as with RUVBL1 in a phosphorylation-independent manner 18 . Therefore, it seems that sets of different adaptors collaborate to bring specific clients to R2TP/PFDL. Previous structural and biochemical studies have defined most of the pairwise interactions of the R2TP core components. The TPR domain of yeast Tah1p mediates interaction with the conserved MEEVD C-terminal tail peptide of HSP90 2 , 19 , 20 , 21 , 22 , while its C-terminal extension couples Tah1p to the CS domain of Pih1p 2 , 21 . The central region of Pih1p mediates recruitment of Pih1p–Tah1p to the Rvb1p–Rvb2p heterohexameric ring 23 , 24 . The N-terminal PIH domain of Pih1p/PIH1D1 binds a CK2-phosphorylation motif on Tel2p/TELO2, mediating recruitment of the TTT complex to R2TP 2 , 3 . Most recently, we determined the cryo-EM structure of the intact yeast R2TP complex, in which a single Tah1p–Pih1p sub-complex binds a heterohexameric Rvb1p–Rvb2p ring 24 , a finding subsequently confirmed by others 25 . In metazoan R2TP, the small (12 kDa) single-TPR domain protein Tah1p is replaced by the much larger (75 kDa) RPAP3/hSpagh, whose N-terminal half contains a tandem pair of TPR domains that bind in concert to a single HSP90 dimer 2 . However, the function of the rest of RPAP3 is unknown. To our knowledge, the subunit stoichiometry and the structural organization of a metazoan R2TP complex have not been determined. To gain further insight into how R2TP/PFDL functions in the assembly, activation and stabilization of its 'client' systems, we have determined the cryo-EM structure of the human R2TP core complex.
Our data reveal a substantially elaborated architecture compared with the yeast system, in which RPAP3, rather than PIH1D1, plays the central organizational role, incorporating additional domains and functions to address the assembly of a variety of large complexes. We identify the C-terminal domain of RPAP3 as a helical bundle that binds selectively to the ATPase domain of RUVBL2. As well as scaffolding the interaction of PIH1D1 with the RUVBL1–RUVBL2 ring, RPAP3 provides a flexible tether for HSP90, allowing it to interact with a highly diverse set of client proteins and complexes. Results Recruitment of R2TP components by RPAP3 The human TPR domain protein RPAP3 is roughly six times larger than its yeast equivalent Tah1p, and we sought to determine whether it may provide docking sites for other components of the human R2TP complex (Fig. 1a). Yeast Pih1p constructs containing the C-terminal CS domain, and the isolated CS domain itself, are unstable in isolation but are stabilized by interaction with the C-terminal tail of Tah1p 26 . We found that the human PIH1D1 protein was also unstable when expressed in isolation, and we used this property to identify a minimal PIH1D1-binding motif in RPAP3 by co-expressing PIH1D1 with GST-tagged RPAP3 constructs and looking for co-purification of PIH1D1 in GST pull-downs from cell lysates. As well as the full-length GST-RPAP3, constructs that contained residues 400–420 of RPAP3, immediately downstream of the second RPAP3 TPR domain, were able to form a stable and soluble complex with full-length PIH1D1 or its isolated CS domain when co-expressed (Fig. 1b). PIH1D1-CS was not co-purified when co-expressed with a GST-RPAP3 construct lacking residues 400–420 (Fig. 1c). We conclude that residues 400–420 of RPAP3 and the CS domain of PIH1D1 are together both necessary and sufficient to mediate the interaction of the two proteins. Fig. 1 Mapping the interactions of the human R2TP core components. a A cartoon of the sequence and domains of the components of the human R2TP complex. b GST pull-down experiments depicting the interactions between several regions of RPAP3 and PIH1D1. FL stands for full length, CS for the CS domain of PIH1D1, and MW for molecular weight markers. Note that, for simplicity, several PIH1D1 and RPAP3 constructs are indicated on the same lines above the gel. Some minor contaminants are present in some of the samples. c Pull-down experiments showing that removal of residues 401–420 from an RPAP3 construct eliminates the interaction with the CS domain of PIH1D1. d Pull-down experiments demonstrating the interaction of the RPAP3–RBD with RUVBL2. This interaction is not affected when the DII domains of RUVBL2 are removed. We found that full-length RPAP3 protein in the absence of PIH1D1 was fully competent to bind the assembled RUVBL1–RUVBL2 heterohexamer, but bound RUVBL2 and not RUVBL1 when the two were not forming a complex, suggesting that it is RUVBL2 that mediates most of the interactions that recruit RPAP3 (Fig. 1d). Dissection analysis of RPAP3 identified a segment of the polypeptide between valine 541 and glycine 665 as necessary and sufficient to bind RUVBL2 (hereinafter referred to as the RBD, for RUVBL2-Binding Domain). The RPAP3–RBD was also able to pull down an RUVBL2 construct lacking the DII 'insertion' domain (RUVBL2-ΔDII), suggesting that the RPAP3–RBD interacts with the ATPase domain face of RUVBL2 rather than the DII domain face implicated in dodecamer formation (Fig. 1d).
The RBD is the only domain essential to maintain the RPAP3–RUVBL2 interaction, since an RPAP3–PIH1D1 complex in which the RBD is truncated did not bind RUVBL2 (Supplementary Fig. 1). An N-terminal 3xMyc tag on RUVBL1 was used to allow RUVBL1 and RUVBL2 to be discriminated by sodium dodecyl sulfate-polyacrylamide gel electrophoresis (SDS-PAGE). The N-terminal end of the RUVBLs locates to the DII domain face, and thus the tag is unlikely to affect binding of the RBD to the ATPase domain face. Control experiments rule out any deleterious effects of the GST tag and of the tag on RUVBL1 (Supplementary Fig. 1). Together, our interaction mapping reveals a very different assembly of the PIH domain- and TPR domain-containing components of human R2TP than in the yeast system 24 . Instead of Pih1p acting as the central scaffold that connects the HSP90-recruitment factor Tah1p to the AAA+ ring, RPAP3 takes at least part of this role, interacting with HSP90, RUVBL1–RUVBL2 and PIH1D1, and providing additional domains which may mediate recruitment of other factors to the R2TP core 27 . Most significantly, the primary interaction between the TP component and the RUVBL1–RUVBL2 heterohexamer in human R2TP is mediated by a RUVBL2-binding domain located at the C-terminus of RPAP3, corresponding to the previously annotated protein domain (pfam13877). RPAP3–PIH1D1 but not the RBD disrupts dodecameric RUVBL1–RUVBL2 The cryo-EM images of RUVBL1–RUVBL2 alone, obtained using the same experimental conditions as those used later for the full R2TP, showed that they exist as a back-to-back dodecameric complex with the DII domains mediating the interaction between two hexamers and the ATPase rings facing outward (Fig. 2a), as previously described 28 . In all cases, we used adenosine 5′-diphosphate (ADP) in the buffer, which stabilizes RUVBL1–RUVBL2-containing complexes 29 , 30 . When RUVBL1–RUVBL2 was incubated with a fragment of RPAP3 comprising residues 430–665, up to 3 RBDs decorated the ATPase face of the RUVBL ring, one for each RUVBL2 molecule in the complex (Fig. 2b). In our conditions, one ring in the dodecamer was saturated whereas the other contained a variable number of RBDs, indicating we had not reached saturation. The location of the RBD at the ATPase face was consistent with its binding to RUVBL2-ΔDII in pull-down experiments (Fig. 1d) and with cross-links detected between the RBD and residues K453 of RUVBL1 and K417 of RUVBL2, both exposed at the ATPase face of the RUVBL ring (Supplementary Fig. 2). These experiments show that the interaction of RPAP3 residues 430–665, which include the RBD, with RUVBL1–RUVBL2 was insufficient to disrupt the dodecamer. Fig. 2 Cryo-EM imaging of RUVBL1–RUVBL2 and the RBD domain. a 2D averages corresponding to top and side views obtained from cryo-EM images of the RUVBL1–RUVBL2 preparation in an ADP-containing buffer. b After incubation with RPAP3 430–665 , RBDs decorate the ATPase side of both RUVBL rings without disrupting the dodecamer; a representative 2D average of the complex between RUVBL1–RUVBL2 and the RBD is shown. At the right end of the panels, one view of the 3D structure of the RUVBL1–RUVBL2–RBD complex, with RBD domains in yellow. Note that one of the RBD domains in the bottom ring is less visible at the threshold used for rendering, probably reflecting variable occupancy. Also, the scale of the 3D structure has been enlarged with respect to the 2D average, for clarity. R2TP could be reconstituted by mixing RUVBL1–RUVBL2 and RPAP3–PIH1D1 (Fig. 3a).
For cryo-EM studies, R2TP was assembled from RUVBL1–RUVBL2 and RPAP3–PIH1D1 sub-complexes, each co-expressed and purified by affinity and gel-filtration chromatography (Fig. 3b). Images of the fully assembled human R2TP complex revealed a single hexameric ring of RUVBL1–RUVBL2 whose ATPase face was decorated with the RBD of RPAP3 (Fig. 3c, Supplementary Fig. 3). The remaining regions of RPAP3 appeared as blurred density on the opposite side of the RUVBL1–RUVBL2 ring, tilted with respect to the ring, indicative of substantial structural flexibility in their connection to the core of the complex. Fig. 3 Cryo-EM imaging of the R2TP complex. a Pull-down experiments showing the in vitro reconstitution of R2TP. M indicates molecular weight markers. b Purification of the RUVBL1–RUVBL2 and PIH1D1–RPAP3 sub-complexes used for the reconstitution of R2TP for cryo-EM. M indicates molecular weight markers. c Two representative side-view averages of R2TP. RUVBL1–RUVBL2 rings are decorated by the RBD at the top (labeled with white arrows). A blurred and very flexible region locates at the bottom of the ring. d A representative side-view average of R2TP reconstructed using the RPAP3–ΔNT–PIH1D1 sub-complex and RUVBL1–RUVBL2. The flexible regions at the bottom end of R2TP disappear when the N-terminal half of RPAP3 is removed, but the dodecameric RUVBL1–RUVBL2 is still disrupted. e 3D structure of R2TP obtained applying 3-fold symmetry. RBDs are bound to RUVBL1–RUVBL2 but the flexible regions of the complex are not resolved. Scale bar, 2.5 nm. Flexible regions at the opposite end of the RUVBL ring are attributed to the N-terminal TPR-containing region of RPAP3. In support of this, this density is absent from the cryo-EM images when R2TP was reconstituted with a truncated version of RPAP3 (residues 395–665) bound to PIH1D1, from which the TPR domains and N-terminus of RPAP3 have been removed (RPAP3-ΔNT) (Fig. 3d). These images suggested that PIH1D1 locates close to the RUVBL ring, as in yeast 24 , 25 . Therefore, RPAP3–PIH1D1 and RPAP3–ΔNT–PIH1D1 are each sufficient to disrupt the RUVBL1–RUVBL2 dodecamers. Since RPAP3 residues 430–665 bind the RUVBL ring without affecting the dodecameric assembly of RUVBL1–RUVBL2, our data suggest that the RPAP3 region that binds PIH1D1, together with PIH1D1 itself, is responsible for this effect, likely because it locates at the DII-domain face of the RUVBL ring, as in yeast 24 . This is the side of the RUVBL ring where most protein interactions have been described in other complexes that contain a RUVBL1–RUVBL2 hexamer, such as INO80 31 . These experiments do not rule out the possibility that RPAP3 alone is sufficient to disrupt the RUVBL1–RUVBL2 dodecamer. R2TP can engage up to 3 RPAP3 molecules R2TP images were classified according to the number of RBDs per ring, by masking out the rest of the molecule and without assuming symmetry (Supplementary Fig. 3). This classification revealed that most of the RUVBL rings contained 3 RBDs (>67%), suggesting that R2TP can incorporate up to 3 RPAP3 molecules, one for each RUVBL2 in the complex. This was also the subgroup in which the RBDs displayed the best quality in the cryo-EM images. It is noteworthy that several orientations of R2TP complexes containing 3 RBDs may seem to contain only 2 apparent RBDs in the two-dimensional (2D) averages (Fig. 3c),
but this is because 2 RBDs coincide along the projection direction for many of the most abundant rotations of R2TP about its longitudinal axis in our data set, thus masking each other (Supplementary Fig. 3). Images of the most abundant, RBD-saturated complex were then processed applying 3-fold symmetry (Fig. 3e). The structure revealed a RUVBL1–RUVBL2 hexamer decorated by 3 RBDs. However, the flexible regions located at the opposite end of the RUVBL ring did not follow the 3-fold rotational symmetry. These regions included the TPR end of RPAP3, which was mapped using RPAP3-ΔNT (Fig. 3d), and also a density on the DII face of the RUVBL ring that should contain PIH1D1. The rigid and flexible segments of R2TP were processed separately with dedicated image-processing strategies (see later). Cryo-EM structure of the RUVBL1–RUVBL2–RBD complex The structure of RUVBL1–RUVBL2–RBD was first resolved using 3-fold symmetry, revealing a helical domain bound to each RUVBL2 in the hexameric ring (deposited as R2TP-C3 symmetry in the EMDB) (Fig. 4a). When the processing was performed without symmetry, small differences in the quality of each RBD in the ring were observed. Thus, to attain maximum resolution of the RUVBL1–RUVBL2–RBD interaction, particles were rotated by 120° and 240° so that each of the 3 RBDs in every particle was placed in the same position, allowing the classification of all the available RBD data (Fig. 4b, c). Similar methodology was applied previously by others to resolve the cryo-EM structure of the apoptosome 32 (see details in Methods). Three-dimensional (3D) classification generated subgroups corresponding to small differences in some of the α-helices of the RBD, and the one that reached the best resolution was refined locally to an estimated average resolution of 3.6 Å (Supplementary Fig. 3). Analysis of the local resolution revealed that most of the map showed resolutions between 3.0 and 3.5 Å, with some external parts ranging between 3.5 and 5 Å (deposited as R2TP–1RBD in the EMDB) (Fig. 4b). The RBD structure revealed an α-helical domain sitting on top of each RUVBL2 subunit, projecting out from the ATPase domain face of the RUVBL1–RUVBL2 ring (Fig. 4c). Fig. 4 Cryo-EM structure of RUVBL1–RUVBL2–RBD. a Top and side views of the cryo-EM density for the RUVBL1–RUVBL2–RBD complex containing 3 RBDs. RUVBL1 is colored in orange, RUVBL2 in blue, and the RBD in yellow. Scale bar, 2.5 nm. b Top and side views of the 3.6 Å resolution structure of the RUVBL1–RUVBL2–RBD complex, processed as indicated in Methods, displaying 1 RBD per RUVBL ring, and colored according to local resolution, from 3.0 to 5.5 Å 46 . Scale bar, 2.5 nm. c Two views of a 1:1:1 RUVBL1–RUVBL2–RBD complex, as seen from the inside or the outside of the RUVBL1–RUVBL2 ring. The positions of the C-terminal helices of RUVBL1 and RUVBL2 are indicated. Scale bar, 2.5 nm. Previously described interactions with RUVBL1–RUVBL2, such as in SWR1 33 , INO80 11 , or yeast R2TP 24 , 25 , are mediated by the face of the ring presenting the DII 'insertion' domain. Interaction with the opposite, ATPase domain face of RUVBL1–RUVBL2, as seen here with the RPAP3-RBD, has not previously been found, although contacts involving this face of the RUVBL ring have been observed in crystals of Rvb1/Rvb2 dodecameric complexes 34 .
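The rotate-and-pool step described above is conceptually the same as the symmetry expansion used in packages such as RELION: each particle is duplicated with its first Euler angle shifted by 120° and 240° so that all three C3-related RBD positions can be classified in a single reference position. A minimal sketch follows, with an assumed in-memory particle table rather than any real STAR-file handling.

```python
# Minimal sketch of C3 symmetry expansion on a particle orientation table.
# `particles` is an assumed list of dicts holding Euler angles in degrees;
# real pipelines would read and write STAR files and metadata instead.

def c3_expand(particles):
    expanded = []
    for p in particles:
        for k in range(3):          # three symmetry-related copies
            q = dict(p)
            q["rot"] = (p["rot"] + 120.0 * k) % 360.0  # rotate about C3 axis
            expanded.append(q)
    return expanded

stack = [{"image": "ptcl_0001.mrc", "rot": 15.0, "tilt": 80.0, "psi": 33.0}]
print(len(c3_expand(stack)))  # -> 3 orientations per input particle
```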
RPAP3–RBD recognizes specific features in RUVBL2 Side chains and other high-resolution details are clearly visible in the RUVBL1–RUVBL2 component of the map (Supplementary Fig. 4), such as the ADP density and the surrounding side chains, allowing accurate atomic modeling of these two proteins in the complex. As no experimental crystallographic or nuclear magnetic resonance (NMR) structure has been reported for the RPAP3-RBD, a structural model was generated using the I-TASSER server 35 , which predicted that residues 541–665 comprise 8 highly conserved α-helices preceded by a long and poorly structured region (Supplementary Fig. 4). Attempts to crystallize the RBD resulted in a crystal structure at 1.8 Å resolution of a proteolytic fragment comprising residues 578–624 (deposited as 6FM8 in the PDB). This fragment accounted for approximately a third of the RBD, including most of helix 3 and the complete helices 4 and 5 (Fig. 5a). The crystal structure supported assignment of the correct amino acid register, and it helped to model the connection between helices 5 and 6. The conformation of such a short segment will be strongly influenced by crystal packing, and thus the overall conformation of the RBD observed in cryo-EM was considered for the model. The disposition of secondary structure elements in the prediction clearly matched the density for the RBD observed in the cryo-EM map, and the model was then flexibly fitted within the map (see Methods for details) (Supplementary Movie 1). The fitted structure of the RBD and the crystal structure of the RUVBL1–RUVBL2 hetero-hexamer (PDB 2XSZ) 29 were refined in Phenix.refine and adjustments were made in COOT 36 . The atomic model obtained (Fig. 5b) showed good cross-correlation with the cryo-EM map (Supplementary Fig. 4). Density for side chains was also visible in most of the helices of the RBD (Supplementary Fig. 5), and this, together with the crystal structure of the RBD fragment, was used to tune and validate the fitting of the atomic model into the cryo-EM map. Nonetheless, some side chains in the interaction surface between RUVBL1–RUVBL2 and the RBD are not well defined, and thus detailed protein–protein interactions are mostly discussed at the level of secondary structure elements, except where side chain density was clear. Fig. 5 Structural model of RUVBL1–RUVBL2–RBD. a Crystal structure of a fragment of the RBD domain. H3, H4, and H5 stand for helices 3, 4, and 5, respectively. b Structural model of RUVBL1–RUVBL2–RBD. Color codes: RUVBL1 (orange), RUVBL2 (blue), and the RBD (yellow). The panel on the left shows only two subunits from RUVBL1–RUVBL2, as seen from the interior of the ring. The panel on the right is a close-up view of the RBD to highlight the N-terminal region and the positioning of the 8 helices in the structure (H1 to H8). Scale bar, 2.5 nm. c Two cross-links identified by XL-MS between RUVBL1–RUVBL2 and the RBD are indicated on the structural model. We subjected the assembled complex to analysis by cross-linking mass spectrometry (XL-MS) (Supplementary Fig. 2). XL-MS identified cross-links between residue K453, a lysine in the RUVBL1 C-terminal helix, and K622 in RPAP3, and between K417 in RUVBL2 and K495 in RPAP3; both cross-links are compatible with the structure (Fig. 5c). Although the bound RPAP3-RBD is proximal to some regions in RUVBL1, and an RPAP3–RUVBL1 cross-link was observed, the bulk of the RPAP3 interaction with the AAA+ ring occurs with RUVBL2, as expected from the pull-down data, and is mediated by selective interaction with conserved features that are absent in RUVBL1 (Fig. 5b).
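Checking whether identified cross-links are 'compatible with the structure' typically means measuring inter-residue distances in the model against the reach of the cross-linker. The sketch below does this with Biopython; the file name, chain identifiers and the ~30 Å Cα–Cα threshold (typical for lysine-reactive cross-linkers) are assumptions for illustration, not values from the paper.

```python
from Bio.PDB import PDBParser

def ca_distance(model_path, chain1, res1, chain2, res2):
    """Calpha-Calpha distance (in angstroms) between two residues."""
    structure = PDBParser(QUIET=True).get_structure("r2tp", model_path)
    model = structure[0]
    return model[chain1][res1]["CA"] - model[chain2][res2]["CA"]

# e.g. RUVBL1 K453 <-> RPAP3 K622, with hypothetical chain IDs 'A' and 'R':
# d = ca_distance("r2tp_model.pdb", "A", 453, "R", 622)
# print("compatible" if d < 30.0 else "violated", f"({d:.1f} A)")
```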
As a control during the adjustment of the RBD model into the cryo-EM map, we found that any fitting in the reverse N- to C-terminal orientation did not agree with the cryo-EM density and, in addition, was incompatible with the XL-MS data. The RPAP3–RBD comprises 8 helical segments (H1 to H8), which make contacts with RUVBL2 through H1, H6, and the loop connecting H5 and H6 (Fig. 5b). Although no secondary structure is predicted for RPAP3 immediately N-terminal of H1, the electron density for this segment was good enough to model a loop between Leu541 and Asn548, based on the presence of three prolines (Pro543, Pro544, Pro546) (Fig. 5b). This loop leads toward the outer edge of the RUVBL1–RUVBL2 ring (see later). H1 contacts the C-terminal region of RUVBL1 (Fig. 5b), whereas H6 sits between two helices of RUVBL2 that are positioned in a V-shaped fashion (Figs. 5b, 6a). The V-shaped fold of RUVBL2 that accommodates H6 of the RPAP3–RBD is rich in positively charged amino acids such as Arg392, which is moved away from its location in the crystal structure of RUVBL1–RUVBL2 alone (PDB 2XSZ) 29 (Fig. 6a), suggesting that it may be repositioned to avoid clashing and/or to make contacts with the RBD. Density for the side chain of Arg623 of the RBD is also well defined, pointing toward residues in RUVBL2, with which it may establish contacts (Fig. 6a). Fig. 6 Structural details of RUVBL1–RUVBL2–RBD. a The R392 residue of RUVBL2 in the RUVBL1–RUVBL2–RBD complex (model in blue within the density) shows a different conformation from its position in the crystal structure of RUVBL1–RUVBL2 (structure in gray) (PDB 2XSZ) 29 . R623 of the RBD is also visible, pointing toward RUVBL2. b Helices 1 and 6 are 100% identical in the species analyzed (red). Other regions, shown in magenta, also revealed high conservation. The RPAP3–RBD is strongly conserved in chordates, and the amino acid sequences of H1 and H6 are identical in humans, rats, mice, chicken, and Xenopus (Fig. 6b, Supplementary Fig. 4, 100% identical regions labeled in red), highlighting their functional importance, most likely for the interaction with RUVBL2. The regions of RUVBL2 that interact with the RPAP3-RBD are also very well conserved, but these are conserved in yeast too, suggesting that RPAP3, and not RUVBL2, evolved to exploit the characteristics of this region of RUVBL2. RPAP3 loops around the RUVBL1–RUVBL2 hexamer Cryo-EM images of R2TP suggested that the RPAP3 molecules span both faces of the RUVBL1–RUVBL2 ring (Fig. 3c). This would be facilitated by the long unstructured region (residues 420–540) that connects the tandem TPR domains and PIH1D1-binding region of RPAP3 to the C-terminal RBD (Fig. 1a). In an extended conformation, this segment could comfortably stretch over several nanometers, allowing the RBD to bind on one face of RUVBL1–RUVBL2 while the rest of RPAP3 is located on the opposite face of the ring. The visible N-terminal loop of the RBD (Leu541–Asn548) comes in from the edge of the ATPase domain face (Fig. 5b), suggesting that the preceding polypeptide chain runs across the rim of the RUVBL1–RUVBL2 ring. Consistent with such a trajectory, we identified a cross-link between RPAP3-Lys495, which lies within the unstructured region, and RUVBL2-Lys417, which projects from the rim of the RUVBL1–RUVBL2 ring (Fig. 5c).
RPAP3 provides a flexible tether for HSP90 Human R2TP was fully competent to bind HSP90, and this interaction is mediated by the TPR domains in the N-terminal half of RPAP3 (Supplementary Fig. 6). Unlike the RBD, which is rigidly bound, this region is disordered relative to the RUVBL1–RUVBL2 ring (Fig. 3). Two image-processing strategies were used to define the structure of the flexible regions of R2TP comprising the TPR domains of RPAP3 and PIH1D1. Extensive classification was used to select a subset of particles displaying a more homogeneous conformation in the flexible regions (deposited as R2TP-subgroup1 in the EMDB) (Fig. 7a). In this structure, several compact densities were resolved on the DII domain face of the RUVBL ring. Densities at the far end of the complex, opposite the RUVBL ring, had been assigned as comprising the N-terminal region of RPAP3 and the TPR domains (Fig. 3d), and their dimensions were sufficient to accommodate the TPR domains of 3 RPAP3 subunits. This interpretation was further supported by images of R2TP reconstituted using N-terminally GST-labeled RPAP3, where the additional density for the GST fusion mapped to the face of the RUVBL ring opposite to that bound by the RBD (Supplementary Fig. 6). On the other hand, density for PIH1D1 was located within the DII region beneath the RUVBL ring. Fig. 7 Structural analysis of the flexible region of R2TP. a Side and bottom views of the output of 3D classification for a subset of particles with a more homogeneous conformation of the flexible regions. Scale bar, 2.5 nm. b Low-resolution structure of the PIH1D1 region in R2TP complexes. Density for PIH1D1 is highlighted in orange. Scale bar, 2.5 nm. c One view of the structure of yeast R2TP (EMD 3678) 24 . Rvb1–Rvb2 is shown in white transparency whilst the density for the yeast Tah1p–Pih1p complex is shown in orange. To analyze the region around PIH1D1, images corresponding to R2TP complexes containing 3 RBDs were classified and aligned masking out the flexible regions except for the vicinity of the RUVBL1–RUVBL2 ring, and image processing was performed without applying any symmetry. This approach resolved a clear region of density within the cage formed by the DII domains of the RUVBL ring (deposited as R2TP-subgroup2 in the EMDB) (Fig. 7b), in a similar position to that occupied by Pih1p and Tah1p in yeast R2TP 24 (Fig. 7c). As in the yeast system, the inherent flexibility of this region did not permit a sufficiently high resolution to clearly define secondary structure elements. Discussion HSP90 is required for the stabilization, activation, and assembly of a diverse range of proteins and complexes involved in cellular processes as fundamental and as varied as transcription, cell cycle progression, centrosome duplication, telomere maintenance, siRNA-mediated gene silencing, apoptosis, mitotic signal transduction, innate immunity, and targeted protein degradation 37 . HSP90's ability to chaperone this very broad protein clientele is provided by co-chaperones 38 , which act as adaptors, facilitating recruitment of client proteins to HSP90. R2TP/PFDL is the most complex HSP90 co-chaperone yet described, and is known to facilitate HSP90's participation in the assembly of RNA polymerases, PIKK complexes, and snoRNPs 39 , 40 , although this list is likely to be far from complete. It recruits HSP90 via its TPR domain component Tah1p or RPAP3, and at least some of its clientele through a CK2 phosphorylation-dependent interaction with the PIH domain of Pih1p/PIH1D1 2 , 3 .
The best characterized of these interactions involves Tel2p/TELO2, itself a component of an additional 'adaptor' layer, the TTT complex (Tel2p/TELO2, Tti1p/TTI1, and Tti2p/TTI2), which ultimately bridges R2TP, and thereby HSP90, to PIKK proteins 7 , 41 , 42 . Comparable PIH-binding motifs can be identified in a range of other proteins in yeast and metazoa, suggesting that there are many R2TP-mediated HSP90 dependencies yet to be described, some of which will likely also involve multiple adaptor systems. The structure of the human R2TP core components revealed here is well suited to deal with a high degree of diversity in the client proteins it brings to HSP90 (Fig. 8). In both yeast and human R2TP, a PIH domain client-recruitment component maps to the same face of the RUVBL ring as the TPR domain(s) necessary for HSP90 recruitment, facilitating direct interaction of the chaperone and the client (or at least the client adaptor). Fig. 8 A cartoon of the structural and functional model for R2TP. a Human R2TP. HSP90 dimers can engage with each R2TP complex with sufficient conformational flexibility to reach and act on a diversity of client proteins. Up to 3 RBDs serve to anchor 3 RPAP3 molecules to the RUVBL1–RUVBL2 scaffold, whereas a central segment of RPAP3 helps to recruit PIH1D1. The number of RPAP3 molecules per RUVBL ring in vivo is not known, and two options are shown in the figure. A long and poorly structured link between the RBD and the TPR domains of RPAP3 results in substantial conformational flexibility of the TPR regions. For simplicity, although 3 RBDs are bound to the RUVBL ring, only 2 RPAP3s are shown bound to HSP90 in the cartoon. b Yeast R2TP. Conformational adaptability of yeast R2TP is limited to the flexibility of the C-terminal tails of Hsp90. Only one Hsp90 binds each R2TP. In the yeast system, the single TPR domain of Tah1p is part of a well-ordered cluster of domains, along with the CS and PIH domains of Pih1p, which lies in the cup formed by the DII domains of the RUVBL ring 24 , 25 . Conformational adaptability in the yeast system is thus limited to the inherent flexibility of the ~30 residues linking the last globular domain of the bound HSP90 to the TPR-binding MEEVD motif at its extreme C-terminus. The much more elaborate structure of RPAP3 in human R2TP allows for a far greater level of adaptability, while retaining the topological proximity of TPR and PIH domains required to bring HSP90 and client together. RPAP3 is divided into two regions that are located at the two opposite faces of the RUVBL1–RUVBL2 ring. As revealed in our cryo-EM structures, the interaction of the RBD of RPAP3 with the ATPase face of the RUVBL1–RUVBL2 ring provides a tight anchor for the C-terminus of the protein, while allowing considerable flexibility for the CS-binding segment, TPR domains and N-terminus of the protein on the other face, coupled to the RBD by the long flexible central segment that spans the rim of the ring. As the necessary and sufficient interaction of the C-terminal α-helical bundle RBD of RPAP3 occurs with a surface of RUVBL2 presented on the 'uncluttered' ATPase face, the heterohexameric RUVBL ring is capable of binding up to 3 RPAP3 molecules, and the presence of 3 symmetrically equivalent RBDs is evident in cryo-EM images of saturated complexes. Nonetheless, the number of RPAP3 molecules per RUVBL ring in vivo, and/or in the context of a larger assembly, cannot be determined from our work.
Cryo-EM shows that PIH1D1, with which RPAP3 also interacts, binds asymmetrically to the DII-domain face of the ring, in a similar location as in the yeast R2TP complex 24 , 25 . The RPAP3–PIH1D1 sub-complex behaved as an elongated heterodimer in sedimentation velocity experiments, with an estimated average molecular mass of 103,900 ± 320 Da determined by sedimentation equilibrium assays (Supplementary Fig. 6 ), which corresponds to a 1 RPAP3:1 PIH1D1 heterodimer (104,212 Da). We have been unable to define PIH1D1 within R2TP at high resolution, owing to the flexibility of this region. However, based on the dimensions of this region in the maps, we speculate that the RUVBL ring may accommodate only 1 PIH1D1 molecule, as in yeast 24 , 25 . In this hypothetical scenario, while three RPAP3 molecules may simultaneously bind to the three RUVBL2 subunits in the AAA-ring, only one at a time could be fully engaged through the additional interaction with the single copy of PIH1D1. This could provide a mechanism whereby multiple copies of HSP90, bound to the TPR domains, could be brought to bear, or additional client components could be recruited via the poorly understood N-terminal region of RPAP3; further work will be required to test these possibilities. The R2TP core complex analyzed here was reconstituted in vitro, without subunits of the associated PFDL module or other interacting proteins whose presence would add still further complexity. These additional components may facilitate recruitment of specific clients, such as RNA Pol II or PIKKs, but may also have important functional influence on the conformational state of the R2TP core components. It is also likely that there will be steric competition or collaboration between some of these additional components. For instance, the ZNHIT2 protein was recently found to bind RUVBL2 and has been proposed to bridge R2TP and the U5 snRNP 5 . The structure of the RUVBL1–RUVBL2–RBD complex we describe here raises the possibility that RPAP3 and ZNHIT2 could either compete or collaborate for binding to RUVBL2. Since each RUVBL1–RUVBL2 ring contains 3 RUVBL2 molecules, complexes containing both RPAP3 and ZNHIT2 are also conceptually possible. Along with the structure of the yeast R2TP complex 24 , 25 , the cryo-EM structure of the human R2TP core presented here provides a clear understanding of the architecture and evolution of this complex HSP90 co-chaperone. These studies suggest a mechanism for how R2TP brings HSP90 and clients (or client adaptors) into proximity and, at least for yeast, suggest some involvement of the ATPase activity of the RUVBL proteins in modulating this, although the significance of this for R2TP function in vivo is far from clear. The ultimate question of how HSP90 functions with R2TP/PFDL to facilitate the assembly of large multiprotein complexes such as Pol II remains to be determined. Methods Cloning N-terminally His-tagged RUVBL1 and untagged RUVBL2 were cloned as indicated in Lopez-Perrote et al. 28 . For pull-down experiments, a 3xMyc tag was incorporated at the N-terminus of RUVBL1. The full-length (FL) RPAP3 gene was purchased from GenScript. The RPAP3 1-430 , RPAP3 1-400 , RPAP3 1-420 , RPAP3 395-665 (RPAP3-ΔNT), RPAP3 430-665 , RPAP3 523-665 , RPAP3 430-541 , and RPAP3 541-665 gene truncations were cloned using NdeI sites and ligation-free In-Fusion cloning (Clontech Laboratories Inc.) 
into a modified pGEX-6P plasmid named p3E (University of Sussex, UK), which resulted in N-terminally GST-tagged RPAP3 constructs (Supplementary Table 1 ). The full-length human PIH1D1 gene and the PIH1D1 180-290 truncated gene were cloned into pET28b using the NdeI site, which resulted in 6xHis-tagged PIH1D1. PIH1D1 constructs were co-expressed with RPAP3, since PIH1D1 was insoluble when expressed alone. The full-length RUVBL1 gene was cloned into the NheI and BamHI sites of a modified pRSETA plasmid (containing 3xMyc tags) and the RUVBL2 gene was cloned into pET28b using the NdeI and BamHI sites. The full-length human HSP90β gene was cloned into a modified pET28b plasmid (containing 6xHis-2xStrep tags) using NdeI, which resulted in a 6xHis-2xStrep-PreScission-HSP90β construct. Protein expression and purification Plasmids encoding the human RUVBL1–RUVBL2 protein complex and the RUVBL2 used in the pull-down experiments, as well as human HSP90β, were transformed into Rosetta (DE3) pLysS cells (F− ompT hsdSB(rB− mB−) gal dcm (DE3) pLysSRARE (CamR), Merck Millipore Ltd.). The cells were grown in the presence of ampicillin and kanamycin at 37 °C until they reached log phase. The cells were then induced by the addition of 1 mM IPTG and grown further at 25 °C overnight for protein expression. The cell mass was pelleted by spinning the culture at 6,238× g for 10 min. The cells were lysed by sonication in 20 mM HEPES pH 7.5, 140 mM NaCl (HEPES buffer) with 1 tablet of EDTA-free protease inhibitor (Sigma-Aldrich Ltd.). The cell lysate was centrifuged at 20,000× g for 1 h at 4 °C. The clear supernatant was loaded onto Talon beads equilibrated in 20 mM HEPES pH 7.5, 140 mM NaCl for His-tag affinity chromatography. The beads were washed with HEPES buffer to remove contaminant proteins. The proteins of interest were eluted with 500 mM imidazole in HEPES buffer, analyzed by 4–12% SDS-PAGE and concentrated to 6 mg ml −1 using a Vivaspin concentrator (10 kDa MWCO, Sartorius). The proteins were further purified by size-exclusion chromatography (SEC) using an S200 10/300 column (GE Healthcare Ltd.). GST-tagged full-length RPAP3 and the shorter RPAP3 constructs, alone and in complex with full-length PIH1D1 or PIH1D1 180-290 , were expressed in Rosetta (DE3) pLysS cells. The cells were lysed as described for RUVBL1–RUVBL2 and HSP90β. The proteins were purified by GST-tag affinity chromatography by adding the clear cell lysate to GST beads. The bead-bound proteins were washed with HEPES buffer and eluted with 50 mM glutathione. The proteins were incubated with PreScission protease (3C protease) at 4 °C overnight to cleave the GST-tag, and the proteins free of the GST-tag were concentrated to 20 mg ml −1 using a Vivaspin concentrator (10 kDa MWCO). The proteins were further purified by gel filtration chromatography using an S200 26/60 column in degassed 20 mM HEPES pH 7.5, 500 mM NaCl. For those pull-down experiments using untagged RUVBL1, 6xHis-tagged RUVBL1 was purified and the tag cleaved using PreScission protease. RUVBL1–RUVBL2 complexes used for cryo-EM studies were produced and purified as before 28 . Plasmids encoding N-terminally His 10 -tagged human RUVBL1 and untagged human RUVBL2 were co-transformed into Escherichia coli BL21 (DE3) cells grown in LB medium. The lysate was applied to a HisTrap HP column (GE Healthcare) equilibrated in 50 mM Tris-HCl pH 7.4, 300 mM NaCl, 10% (v/v) glycerol, and 20 mM imidazole. 
Elution was performed using a 20–500 mM imidazole gradient, followed by preparative SEC using a Sephacryl S300 column (GE Healthcare) equilibrated in 50 mM Tris, 300 mM NaCl and 1 mM DTT. The purification was monitored by SDS-PAGE, and the uncropped SDS-PAGE gels of the SEC fractions of the RUVBL1–RUVBL2 and RPAP3–PIH1D1 preparations used for cryo-EM are shown in Supplementary Fig. 7 . RUVBL1 used for the pull-down experiments contained an N-terminal 3xMyc tag, to help distinguish RUVBL1 and RUVBL2 in SDS-PAGE. Pull-down assay for interaction mapping Twenty micromolar GST-RPAP3 (full-length and truncated constructs), or the RPAP3–PIH1D1 and RPAP3–PIH1D1 180–290 complexes, were mixed with 30 μl GST beads equilibrated in 50 mM HEPES pH 7.5, 140 mM NaCl. Sixty micromolar RUVBL2 alone or RUVBL1–RUVBL2 complex was added to the above mixture for the interaction mapping study. The reaction mixture was incubated for 45 min at 4 °C, rotating at 20 rpm. The beads were washed three times with 500 μl of HEPES buffer and the bound fraction was eluted with 50 mM glutathione. Human R2TP complex assembly was monitored by pull-down, using the purified RPAP3–PIH1D1 and RUVBL1–RUVBL2 proteins. The 20 μM GST–RPAP3–PIH1D1 complex and 60 μM RUVBL1–RUVBL2 were used in these experiments; 30 μl GST beads (GE Healthcare) equilibrated in 50 mM HEPES pH 7.5, 140 mM NaCl (HEPES buffer) were added to the protein mixture, which was then incubated for 45 min rotating at 20 rpm at 4 °C. The beads were washed three times with HEPES buffer to remove proteins bound non-specifically to the GST beads. The bound fraction was eluted with 50 mM glutathione in 20 mM HEPES, pH 7.5, 140 mM NaCl. For the co-expression experiments shown in Fig. 1 , we co-expressed GST-RPAP3FL, GST-RPAP3FL–RUVBL2FL, GST-RPAP3 541–665 –RUVBL2FL and GST-RPAP3 541–665 –RUVBL2ΔDII. The interactions were analyzed using GST-affinity chromatography, the bound fractions were eluted with 50 mM glutathione in 20 mM HEPES pH 7.85, 140 mM NaCl, and the quality of the protein complexes was assessed by SDS-PAGE. Cryo-EM of human R2TP To image R2TP at high resolution, 0.45 μM RUVBL1–RUVBL2 (estimated as dodecamers) was incubated with 9 μM RPAP3–PIH1D1 complex for 20 min on ice, and the mixture was dialyzed for 5 h against 25 mM HEPES pH 7.8, 130 mM NaCl, 10 mM 2-mercaptoethanol. After dialysis, the sample was recovered and incubated with ADP (pH 7.0) for 1 h at a final concentration of 0.5 mM. Subsequently, aliquots of 2.5 μl were applied to glow-discharged Quantifoil R1.2/1.3 carbon grids and flash frozen in liquid ethane. An initial test dataset was collected at the Grenoble Instruct Centre, France, using a FEI Polara microscope operated at 300 kV. High-resolution structures were obtained from data collected automatically, with three images per hole, on a Titan Krios (eBIC, Diamond Light Source, Oxford, UK) using a GATAN K2-Summit detector in counting mode and a slit width of 20 eV on a GIF-Quantum energy filter (Supplementary Tables 2 , 3 ). High-resolution image processing of RUVBL1–RUVBL2–RBD As general methodology, MotionCor2 43 was used for whole-frame motion correction, GCTF 44 for estimation of the contrast transfer function parameters, and RELION-2.0 45 for all subsequent steps. 
Local motion was corrected in MotionCor2 by dividing each frame into 36 patches (6 × 6), with dose weighting. A manually picked subset of micrographs was used to obtain 2D references for template-based particle picking. The selected particles were then submitted to several rounds of 2D and 3D classification to discard low-quality particles and some remaining RUVBL1–RUVBL2 dodecamers. Low-pass-filtered versions of previous structures were used as starting points for classifications and refinements, to reduce bias. A specific classification protocol was designed to analyze the stoichiometry of RBDs bound to the RUVBL ring. For this, Class3D in RELION was used with a mask covering the edge of the RUVBL ring and the regions outside the ring. Particles were split into up to eight groups using this focused classification strategy. The majority of particles were grouped into one class containing 3 RBDs, corresponding to 67.3% of the particles. The remaining classes corresponded to particles containing 2 RBDs (16.4%) or 1 RBD (16.3%). Images of R2TP containing 3 RBDs were then processed applying 3-fold rotational symmetry and using standard procedures in RELION 45 . For the refinement of the RUVBL1–RUVBL2–RBD complex at high resolution, 96,406 particles showing the best parameters after 2D classification in RELION were selected and subjected to a round of automatic 3D refinement in RELION to generate a consensus 3D model. When refinement was performed without applying rotational symmetry, similar results were obtained, but differences in the quality of the individual RBDs bound to one RUVBL ring suggested that the complex has a rigid conformation at the RUVBL ring and relatively flexible RBDs (or RBDs of variable quality). To improve the quality of the structure defining the interaction between the RBD and the RUVBL ring, we applied the method previously developed to solve a similar situation for the structure of the apoptosome 32 . For this, each particle was rotated by 120° and 240° so that all RBDs were placed in the same position. Subsequently, particles were classified using the Class3D utility in RELION with a mask representing 1 RBD and the ring of RUVBL1–RUVBL2. The most populated class was automatically refined using the Ref3D utility with the same mask used for Class3D, a local search of angles and without applying symmetry. Further details on the strategy applied here can be found in the Methods section of Zhou et al. 32 . When applied to our data, the resolution improved from 3.8 Å, obtained when applying 3-fold rotational symmetry, to 3.57 Å, estimated using the gold-standard Fourier shell correlation (FSC) between two independently refined half-maps at a cut-off of FSC = 0.143 (a minimal sketch of this calculation is given below). B-factor sharpening was performed using automatic procedures in RELION-2. Local resolution was estimated using ResMap 46 . Structures were visualized using UCSF Chimera 47 . Cryo-EM and processing of RUVBL1–RUVBL2–RBD and R2TP-ΔNT The reconstitution of complexes between RUVBL1–RUVBL2 and RPAP3 430–665 , and the reconstitution of R2TP using the RPAP3-ΔNT–PIH1D1 sub-complex instead of RPAP3–PIH1D1, were performed with the same protocol used for the R2TP samples imaged by cryo-EM. For consistency, all these observations were performed with the same buffer conditions used for the assembly of R2TP, and in the presence of ADP. 
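The gold-standard FSC resolution estimates quoted throughout this section follow a standard recipe: correlate the Fourier transforms of two independently refined half-maps over resolution shells and report the resolution at which the curve first drops below 0.143. The sketch below is a minimal NumPy illustration of that recipe, not the RELION implementation; the half-map arrays, box size and voxel size are illustrative assumptions, and real post-processing additionally applies masking and noise-substitution corrections before reading off the threshold.

```python
# Minimal sketch of a gold-standard FSC calculation (illustrative only).
import numpy as np

def fsc_curve(half1, half2):
    """Fourier shell correlation between two real-space half-maps (cubic)."""
    f1 = np.fft.fftshift(np.fft.fftn(half1))
    f2 = np.fft.fftshift(np.fft.fftn(half2))
    n = half1.shape[0]
    # Integer spatial-frequency shell index for every voxel.
    grid = np.indices(half1.shape) - n // 2
    shells = np.sqrt((grid ** 2).sum(axis=0)).astype(int)
    curve = []
    for r in range(n // 2):
        sel = shells == r
        num = np.real((f1[sel] * np.conj(f2[sel])).sum())
        den = np.sqrt((np.abs(f1[sel]) ** 2).sum() * (np.abs(f2[sel]) ** 2).sum())
        curve.append(num / den if den > 0 else 0.0)
    return np.array(curve)

def resolution_at_criterion(curve, voxel_size_angstrom, box_size, threshold=0.143):
    """Resolution (Angstroms) where the FSC first drops below the threshold."""
    for r in range(1, len(curve)):
        if curve[r] < threshold:
            return box_size * voxel_size_angstrom / r
    return None

# Synthetic demonstration: a blob plus independent noise in each half-map.
rng = np.random.default_rng(0)
blob = np.zeros((64, 64, 64))
blob[24:40, 24:40, 24:40] = 1.0
half_a = blob + 0.5 * rng.standard_normal(blob.shape)
half_b = blob + 0.5 * rng.standard_normal(blob.shape)
print(resolution_at_criterion(fsc_curve(half_a, half_b), 1.0, 64))
```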
Vitrification, general image processing, 2D classification, and the generation of 2D averages and 3D volumes were performed following strategies similar to those described for the R2TP images, but the cryo-EM micrographs were collected on a 200-kV FEI Talos Arctica operated with a FEI Falcon II detector and located at the Centro Nacional de Biotecnología (CNB) in Madrid. Image processing of flexible regions For refinement of the flexible regions of the R2TP complex, the same set of 96,406 particles used to resolve the structure of RUVBL1–RUVBL2–RBD was classified, searching for a subset of R2TP particles with a more homogeneous conformation of the flexible regions after extensive 3D classification steps in RELION. This subset, containing a selection of 27,385 particles, was then refined, converging to a structure with an estimated average resolution of 8.72 Å (gold-standard FSC, 0.143 criterion). To analyze the structural details of the interaction between PIH1D1 and RUVBL1–RUVBL2, the images of R2TP complexes containing 3 RBDs were refined without symmetry and using a mask that removed the influence of the flexible regions except in the vicinity of the DII domains in RUVBL1–RUVBL2. One subset of 182,351 particles, corresponding to complexes containing 3 RBDs and displaying a defined density on the DII-domain face of RUVBL1–RUVBL2, reached an estimated resolution of 6.58 Å (gold-standard FSC, 0.143 criterion). Modeling and refinement of RUVBL1–RUVBL2–RBD De novo modeling of the C-terminal domain of RPAP3 was performed using a strategy based on homology modeling and molecular dynamics simulation. First, analysis of the sequence in the secondary-structure prediction server PSIPRED revealed 8 consecutive α-helices, spanning Ala547 to the C-terminal Gly665. A long and disordered region of RPAP3 N-terminal to this helical domain contributed to a clear identification of the domain. The sequence of the RPAP3-RBD domain (residues 541–665) was submitted to the I-TASSER homology modeling server 35 , which provided up to five different atomic models for the query sequence. The best solution according to the I-TASSER scoring was fitted within the target cryo-EM map. The target cryo-EM density contained 8 α-helices, the same as the predicted atomic model from I-TASSER. The prediction was first fitted as a rigid body in the map, followed by flexible fitting using molecular dynamics (MD) simulations in AMBER 48 (Supplementary Movie 1 ). Flexible fitting was performed in two orientations: one from the initial fitted conformation and the second forcing the reverse orientation as a control. The reverse orientation was discarded since no reasonable fitting of the model into the cryo-EM map was possible after the simulation. A model for RUVBL1–RUVBL2–RBD was built initially by rigid-body fitting of the crystal structure of the truncated human RUVBL1–RUVBL2 hexamer (PDB 2XSZ) 29 into the cryo-EM map, together with the model generated for the RBD using AMBER. The full structure of the RUVBL1–RUVBL2–RBD was refined using Phenix 49 and Coot 36 . The crystal structure of the RPAP3 fragment comprising residues 578–624 helped to model the connectivity between helix 5 and helix 6 of the RBD, which face RUVBL2. Density for side chains was also visible in most of the helices of the RBD (Supplementary Fig. 5 ), and this information was used during modeling. 
Detailed protein–protein interactions are mostly discussed at the level of secondary structural elements, except where side-chain density was clear. R2TP reconstitution and purification using GraFix For the experiments using GraFix 50 , the different complexes were analyzed using a linear 10–40% sucrose gradient together with a 0–0.15% glutaraldehyde gradient. Fifty microliters of the mixture were used for each gradient and run at 125,812× g in a SW60Ti rotor for 16 h at 4 °C. Fractions of 100 μl were collected from top to bottom of the gradient, and the fixation reaction was stopped by adding glycine pH 7.0 to a final concentration of 100 mM. The Blue-Native system (Invitrogen) was used to analyze the fractions. Cross-linking coupled to mass spectrometry R2TP was reconstituted as for the cryo-EM experiments and cross-linked with the isotopically coded N -hydroxysuccinimide (NHS) esters disuccinimidyl suberate (DSS H 12 /D 12 ) and bis(sulfosuccinimidyl) suberate (BS3 H 12 /D 12 ) (Creative Molecules, Canada) at final molar excesses of 100× and 250×. The reactions were incubated for 45 min at 37 °C, and quenched by adding 50 mM NH 4 HCO 3 (final concentration) for another 15 min. The cross-linked sample was freeze-dried and then resuspended in 50 mM NH 4 HCO 3 to reach a 1 mg ml −1 final protein concentration. The sample was then reduced using 10 mM DTT and alkylated with 50 mM iodoacetamide. Subsequently, proteins were digested overnight with trypsin (Promega, UK) at a 1:20 enzyme-to-substrate ratio at 37 °C. Formic acid was added to a final concentration of 2% (v/v) to acidify the samples, and the peptides were fractionated by peptide SEC on a Superdex Peptide 3.2/300 column (GE Healthcare) with 30% (v/v) acetonitrile/0.1% (v/v) TFA as the mobile phase and a flow rate of 50 μl min −1 . Fractions were collected, lyophilized, and resuspended in 2% (v/v) acetonitrile and 2% (v/v) formic acid. Fractions were analyzed by nano-scale capillary LC–MS/MS using an Ultimate U3000 HPLC (ThermoScientific Dionex, USA) at a flow of approximately 300 nl min −1 . Peptides were separated on a C18 Acclaim PepMap100 3 μm, 75 μm × 250 mm nanoViper column (ThermoScientific Dionex, USA) and eluted with an acetonitrile gradient. The analytical column outlet was directly interfaced, via a nano-flow electrospray ionization source, with a hybrid dual-pressure linear ion trap mass spectrometer (Orbitrap Velos, ThermoScientific, USA). A resolution of 30,000 was used for data-dependent analysis of the full mass spectrometry spectrum, followed by 10 MS/MS spectra in the linear ion trap. Mass spectrometry spectra were collected over a 300–2000 m / z range. MS/MS scans were collected using a threshold energy of 35 for collision-induced dissociation. For data analysis, Xcalibur raw files were converted into the open mzXML format using MSConvert (ProteoWizard) with 32-bit precision, and the converted files were directly used as input for xQuest searches on a local installation. The following criteria were used for the selection of cross-linked precursor MS/MS data: a mass shift of 12.07532 Da between the heavy and the light cross-linkers; precursor charge ranging from 3+ to 8+; maximum retention-time difference 2.5 min. Searches were performed against an ad hoc database containing the RUVBL1, RUVBL2, PIH1D1, and RPAP3 sequences plus their reversed sequences as decoys. 
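As a toy illustration of the precursor-pair selection criteria just listed (the fixed 12.07532 Da shift between the heavy and light cross-linkers, precursor charges of 3+ to 8+, and a maximum retention-time difference of 2.5 min), the following sketch filters candidate pairs. It is a simplified stand-in for the xQuest selection logic, not a reimplementation; the Precursor record, the assumption that paired precursors share a charge state, and the 10 ppm mass-matching tolerance are illustrative assumptions.

```python
# Toy heavy/light precursor-pair filter (illustrative only).
from dataclasses import dataclass

ISOTOPE_SHIFT_DA = 12.07532   # heavy (D12) minus light (H12) cross-linker
MAX_RT_DIFF_MIN = 2.5
PPM_TOLERANCE = 10e-6         # assumed MS1 matching tolerance (10 ppm)

@dataclass
class Precursor:
    neutral_mass: float    # monoisotopic neutral mass, Da
    charge: int
    retention_time: float  # minutes

def is_heavy_light_pair(light: Precursor, heavy: Precursor) -> bool:
    """True if two precursors plausibly form a heavy/light cross-link pair."""
    if not (3 <= light.charge <= 8) or light.charge != heavy.charge:
        return False
    if abs(heavy.retention_time - light.retention_time) > MAX_RT_DIFF_MIN:
        return False
    expected = light.neutral_mass + ISOTOPE_SHIFT_DA
    return abs(heavy.neutral_mass - expected) <= expected * PPM_TOLERANCE

# Example pair: same charge, co-eluting, separated by the isotope shift.
light = Precursor(2501.200, 4, 42.0)
heavy = Precursor(2501.200 + ISOTOPE_SHIFT_DA, 4, 42.8)
print(is_heavy_light_pair(light, heavy))   # True
```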
A number of parameters were set to perform the xQuest searches: maximum number of missed cleavages (excluding the cross-linking site) 3; peptide length 4–50 amino acids; fixed modification carbamidomethyl-Cys (mass shift 57.02146 Da); mass shift of the light cross-linker 138.06808 Da; mass shifts of mono-links 156.0786 and 155.0964 Da; MS1 tolerance 10 ppm; MS2 tolerance 0.2 Da for common ions and 0.3 Da for cross-link ions; search in enumeration mode (exhaustive search). The following criteria were used to filter search results: MS1 mass tolerance window −3 to 7 ppm. Finally, each MS/MS spectrum was manually inspected and validated. Sedimentation velocity assay of the TP complex Four hundred microliters of 10 μM and 5 μM RPAP3–PIH1D1, prepared in 25 mM HEPES, 130 mM NaCl, 0.1 mM TCEP, pH 7.8, were loaded into analytical ultracentrifugation cells. The experiments were carried out at 10 °C and 149,103× g in an XL-I analytical ultracentrifuge (Beckman-Coulter Inc.) equipped with UV-VIS absorbance and Rayleigh interference detection systems, and the sedimentation profiles were recorded at 280 nm. Sedimentation coefficient distributions were calculated by least-squares boundary modeling of the sedimentation velocity data using the continuous distribution c(s) Lamm equation model, as implemented in SEDFIT 14.1 51 . The program SEDNTERP 52 was used to correct experimental s values to standard conditions (water, 20 °C, and infinite dilution) to obtain the corresponding standard s values (s20,w). Sedimentation equilibrium assay Short-column (90 μl) sedimentation equilibrium experiments were carried out at speeds ranging from 4,536× g to 6,532× g, with detection at 286 nm, using the same experimental conditions as those described for the sedimentation velocity experiments. A high-speed centrifugation run at 185,795× g was performed to estimate the corresponding baseline offsets after the last equilibrium scan. Weight-average buoyant molecular weights of the proteins were determined by fitting a single-species model to the experimental data using the HeteroAnalysis program, and corrected for solvent composition and temperature using the program SEDNTERP 52 . RPAP3 523–665 expression, purification, and crystallization The gene encoding the RPAP3 523–665 fragment was cloned into the NdeI and BamHI sites of the p3E plasmid (an in-house plasmid from the University of Sussex). BL21 (DE3) E . coli cells (F− ompT gal dcm lon hsdSB(rB− mB−) λ(DE3 lacI lacUV5-T7 gene 1 ind1 sam7 nin5), NZYTech) were transformed with the p3E plasmid containing RPAP3 523–665 . The transformed cells were grown in LB medium at 37 °C for 5 h, followed by induction with 1 mM IPTG. The cells were further grown for 15 h at 20 °C and harvested at 5,000× g for 10 min. The cell pellet was re-suspended in 20 mM HEPES pH 7.8, 140 mM NaCl, 0.5 mM TCEP, and sonicated at 4 °C. The cell lysate was spun at 20,000× g for 1 h at 4 °C and the supernatant was used for GST affinity chromatography. The purified GST-RPAP3 523–665 protein was treated with PreScission protease overnight at 4 °C to remove the GST-tag, and the protein was further purified by SEC. Purified RPAP3 523–665 was concentrated to 8 mg ml −1 using a 10 kDa MWCO Vivaspin concentrator (Sartorius). Crystallization trials were set up with 0.2 μl protein and 0.2 μl crystallization screen buffer using the sitting-drop method, and the trays were incubated at 14 °C. 
Small crystals appeared in 0.5 M potassium thiocyanate, 0.1 M bis-tris propane pH 7.0 (well H6) of the SaltRX crystal screen (Hampton Research) after 1 month of incubation. The crystals were flash frozen in liquid nitrogen using 30% glycerol as a cryo-protectant and the data were collected at Diamond Light Source, UK, using a wavelength of 0.9763 Å. The data were processed using standard methodology and programs of the CCP4 suite 53 , Xia2 54 , REFMAC 55 , BUSTER, and COOT 36 , together with the ARCIMBOLDO software 56 (Supplementary Table 4 ). Data availability Data supporting the findings of this manuscript are available from the corresponding authors upon reasonable request. The EM maps have been deposited in the EMDB and PDB: EMD-4289 (R2TP-C3symmetry); EMD-4287 (R2TP-1RBD); EMD-4290 (R2TP-subgroup1); EMD-4291 (R2TP-subgroup2); PDB 6FO1 (model of RUVBL1–RUVBL2–RBD with 1 RBD); and PDB 6FM8 (crystal structure of the RBD fragment). Change history 31 July 2018 In the originally published version of this article, the affiliation details for Hugo Muñoz-Hernández, Carlos F. Rodríguez and Oscar Llorca incorrectly omitted 'Centro de Investigaciones Biológicas (CIB), Spanish National Research Council (CSIC), Ramiro de Maeztu 9, 28040 Madrid, Spain'. This has now been corrected in both the PDF and HTML versions of the Article.
A team from the Spanish National Cancer Research Centre (CNIO) has determined for the first time the high-resolution structure of a complex (R2TP) involved in key processes for cell survival and in diseases such as cancer. This achievement was made with high-resolution cryo-electron microscopy, a technique brought to the CNIO by Óscar Llorca, director of the Structural Biology Programme and lead author on the paper published in Nature Communications. In 2017, the Nobel Prize in Chemistry was awarded to three scientists (Jacques Dubochet, Joachim Frank, and Richard Henderson) for their work on the development of cryo-electron microscopy. This technique can capture images of individual molecules, which are used to determine their structure and to ascertain biological processes in atomic detail. Óscar Llorca and his team have used this technique to learn about the structure and functioning of a complex system called R2TP, which is involved in key processes for cell survival such as the activation of the kinases mTOR, ATR and ATM, proteins that are the target of various cancer drugs currently being developed. mTOR, ATR and other related kinases do not work in isolation but rather by interacting and forming complexes with other proteins, which are essential for their normal functioning. The assembly of these structures with multiple components does not take place in cells spontaneously. The R2TP system and the HSP90 chaperone are crucial for the assembly and activation of mTOR and other related kinases, but how this happens in cells is still somewhat of a mystery. "If we understand this assembly pathway," explains the researcher, "we will be able to identify new ways of targeting the activity of these kinases." Thanks to cryo-electron microscopy, "we have been able to visualise, for the first time, the high-resolution structure of the human R2TP system," says Llorca. The researchers were surprised by the unexpected complexity of the human R2TP system compared to its yeast homologues. The microscope images show that R2TP is a large platform capable of putting HSP90 in contact with the kinases on which HSP90 must act. When viewed under the microscope, R2TP looks like a jellyfish with three very flexible tentacles made up of the RPAP3 protein. The kinases of the mTOR family are recruited to the base of its head, while HSP90 is hooked by the tentacles and brought to the kinases, thanks to their flexibility. "This first observation of the human R2TP system has allowed us to understand its structure and functioning mechanisms, which were previously unknown. Our next steps will be to study the details of how R2TP and HSP90 are able to assemble the complexes made up of kinases of the mTOR family, in order to find ways of interfering with these processes," concludes Llorca. "The R2TP system is also involved in the activation of other essential molecules for the cell and in the development of cancer, such as RNA polymerase, telomerase, and the 'splicing' system, areas that we intend to explore in the future."
10.1038/s41467-018-03942-1
Earth
How politics, society, and tech shape the path of climate change
Frances Moore, Determinants of emissions pathways in the coupled climate–social system, Nature (2022). DOI: 10.1038/s41586-022-04423-8. www.nature.com/articles/s41586-022-04423-8 Journal information: Nature
http://dx.doi.org/10.1038/s41586-022-04423-8
https://phys.org/news/2022-02-politics-society-tech-path-climate.html
Abstract The ambition and effectiveness of climate policies will be essential in determining greenhouse gas emissions and, as a consequence, the scale of climate change impacts 1 , 2 . However, the socio-politico-technical processes that will determine climate policy and emissions trajectories are treated as exogenous in almost all climate change modelling 3 , 4 . Here we identify relevant feedback processes documented across a range of disciplines and connect them in a stylized model of the climate–social system. An analysis of model behaviour reveals the potential for nonlinearities and tipping points that are particularly associated with connections across the individual, community, national and global scales represented. These connections can be decisive for determining policy and emissions outcomes. After partly constraining the model parameter space using observations, we simulate 100,000 possible future policy and emissions trajectories. These fall into 5 clusters with warming in 2100 ranging between 1.8 °C and 3.6 °C above the 1880–1910 average. Public perceptions of climate change, the future cost and effectiveness of mitigation technologies, and the responsiveness of political institutions emerge as important in explaining variation in emissions pathways and therefore the constraints on warming over the twenty-first century. Main The global trajectory of anthropogenic greenhouse gas emissions is the most important determinant of projected global temperature increases in this century and beyond, swamping the magnitude of internal climate variability or model differences 1 . However, this key driver of Earth’s future climate is treated as exogenous in almost all climate science 3 . Moreover, although emissions pathways arise from complex interactions among social, political, economic and technical systems, these elements are often analysed separately within disciplinary silos, neglecting interactions and feedback that can give rise to or stymie rapid change 5 . Understanding the potential for nonlinear dynamics in the socio-technical systems producing both greenhouse gases and climate policy is essential for identifying high-impact intervention points and better informing policy 4 , 6 , 7 . However, the coupling and interaction among social, political, economic, technical and climate systems—and their implications for emissions and temperature trajectories over the twenty-first century—have not been widely examined (although refs. 2 , 8 , 9 provide some exceptions). Evidence regarding the likely emissions path over the twenty-first century is mixed. On the one hand, although emissions growth may have decelerated in recent years, with some evidence of declining emissions in a few advanced economies, global emissions continue to grow 10 . National commitments under the Paris Agreement remain inadequate to meet either the 1.5-°C or 2-°C temperature target 11 and it is unclear whether government policies are yet sufficient to deliver on these emissions pledges 12 . Carbon dioxide emissions from energy infrastructure currently in place or under development will exceed the 1.5-°C carbon budget, and standard energy-system models struggle to simulate pathways that meet either temperature target without the widespread deployment of negative emissions technologies that are highly speculative 13 , 14 , 15 . The pace of decarbonization that is required to meet the Paris temperature targets vastly exceeds anything in the historical record at the global scale 16 . 
On the other hand, specific cases of very rapid change in energy systems do exist, with accelerating deployment as market or policy conditions shift and technology costs fall. Path dependencies, increasing returns to scale and learning-by-doing cost reductions can produce sudden, tipping-point-like transitions that cannot be extrapolated from past system behaviour 17 , 18 . Recent examples include the rapid fall of coal generation in the UK electricity mix and the dominance of electric vehicle sales in Norway 19 , 20 . Standard energy models, which mostly rely on linear extrapolations of past behaviour, repeatedly underpredict the rate of renewable energy growth 21 . Historically, technological innovation and government policies, often motivated by energy security concerns 22 , have also, in notable cases, spurred rapid shifts in energy systems, one of the fastest examples being the transition to kerosene lighting in the nineteenth century 23 . Social norms that shape individual behaviour and preferences can exhibit similar tipping-point-style dynamics 24 . These changes, via collective action operating through political institutions, could in turn affect the regulatory and market conditions in which energy technologies compete. The presence of both positive and negative feedback processes within the political system has also been documented, as policy changes can both create new interest groups and activate incumbents against further change 25 , 26 , 27 . These coupled feedback processes could give rise to complex behaviour and a wide range of plausible emissions pathways but, although the space of possibility is wide, that does not mean it is unknowable. Our goal is to model the drivers of potential emissions scenarios over the twenty-first century and, in doing so, shed light on how both climate policy and emissions arise from more fundamental socio-politico-technical forces and the key parameters governing these dynamics. The main contributions are threefold. First, we present a stylized model of the coupled climate–social system, focusing on coupling across individual to global scales and on feedback processes documented across a wide range of relevant disciplinary literatures. This model is distinct from previous work that represents feedback processes within energy systems 28 or between the climate, the economy and emissions pathways 29 , in that climate policy is still specified exogenously in those applications. By contrast, in this model, climate policy and greenhouse gas emissions arise endogenously from the coupled interaction of the climate, social, political and energy systems. Second, we use this model to systematically examine potential dynamics of the system, highlighting feedback, connections and thresholds across different components. Finally, after partially constraining the set of parameter values using historical data, we examine the space of possible emissions and policy trajectories over the twenty-first century arising from the model. These fall into five clusters associated with particular parameter combinations, enabling these future trajectories to be classified on the basis of their underlying social, political and technical characteristics. Overall, we find that the socio-politico-technical feedback processes can be decisive determinants of climate policy and emissions futures. 
Our parameterized model implies a high likelihood of accelerating emissions reductions over the twenty-first century, moving the world decisively away from a no-policy, business-as-usual baseline. Feedback and model structure The positive and negative feedback processes operating within the coupled climate–social system are critical to understanding system behaviour and dynamics. The feedback processes that are represented in the model were identified in a two-step process. First, potentially relevant system feedback processes were described during a four-day interdisciplinary workshop. Second, targeted searches were conducted across relevant literatures in psychology, economics, sociology, law, political science and engineering to evaluate the evidentiary literature for or against candidate feedback processes, resulting in eight key feedback processes being included in the final model. This section briefly describes each feedback process, and Table 1 and Fig. 1 describe how these feedback processes are coupled together in the model and the model structure. Table 1 Description of the climate–social model components and key parameters Full size table Fig. 1: The climate–social model components and feedback processes. Components are shown in black and the model feedback processes in green. Feedback processes are identified as positive (+) (that is, reinforcing) or negative (−) (that is, dampening). The black arrow shows a connection between components (policy-adoption effect) that is not directly part of a particular feedback process. Descriptions of the components and key parameters governing both feedback strength and component behaviour are given in Table 1 . Full size image Social-conformity feedback The social networks in which individuals are embedded at home, work, school or leisure have a strong influence on opinions and behaviour 30 , 31 . Social norms (that is, representations of the dominant or acceptable practices or opinions within a social group) are costly for individuals to violate and, over the long term, can shape individual identities, habits and world-views 32 , 33 . Studies in the USA have shown that perceived social consensus, that is, the degree to which individuals believe a particular opinion or action is dominant within their social group, can partially explain belief in climate change and support for climate policies 34 . A large body of literature has also shown that social norms are one important determinant of the probability that an individual engages in pro-environmental behaviour, such as conserving energy or adopting solar panels 35 , 36 , 37 . A tendency towards social conformity can lead to tipping-point-type dynamics in which a system transitions suddenly from a previously stable state given a sufficient critical mass of proponents of the alternate norm 24 , 38 . The model includes the social conformity effect in two ways: formation of public opinion regarding climate policy and individual decisions on adopting pro-climate behaviour (Fig. 1 ). Climate change perception feedback The anthropogenic influence on the Earth’s climate system is increasingly apparent 39 , 40 , 41 . Assessments of the contribution of anthropogenic warming to the probability of particular extreme events are increasingly routine 42 . It has been hypothesized that this emerging signal of climate change in people’s everyday experience of weather might lead to widespread acknowledgement of the existence of global warming and possibly, by extension, support for mitigation policy 43 . 
A large number of studies have connected stated belief in global warming with local temperature anomalies: people appear to be able to identify local warming 44 , 45 and are more likely to report believing in climate change if the weather is (or is perceived to be) unusually warm 46 , 47 , 48 , 49 . In effect, people appear to be using their personal experience of weather as evidence informing their belief in climate change 49 . However, this so-called 'local warming effect' is complicated 50 . Several papers have found evidence that interpretations of weather events are filtered through pre-existing partisan identities or ideologies 45 , 51 , 52 . This suggests the presence of motivated reasoning (that is, the rejection of new information that contradicts pre-existing beliefs) in the processing of climate-change-related information 53 , 54 . Moreover, the perception of weather anomalies might well be complicated by a 'shifting-baselines' effect in which people's perception of normal conditions is quickly updated on the basis of recent experience of weather 55 . Political interest feedback The large-scale emissions reductions that are required to stabilize the climate system cannot be accomplished by individuals acting alone, meaning the question of how individual support for or opposition to climate policy translates into collective action through the political system is critical. This process is not straightforward—it is subject to political–economic constraints operating through complex political and government institutions and cannot be modelled as a simple linear function of public opinion 56 , 57 , 58 . The political economy literature has documented a positive feedback effect in which initial policy change establishes powerful interests able to lobby against policy reversal and for further change, the establishment of the wind energy industry in Texas being one example 26 , 27 . Although most examples in the literature are ones of reinforcing feedback processes, Stokes 27 also documents instances of balancing feedback processes—where small policy changes activate powerful incumbents to lobby against further changes that threaten their interests. Credibility-enhancing display feedback Although the ability of individuals to alter the trajectory of greenhouse gas emissions is limited, individual adoption of pro-environmental behaviours can have spillover effects to the larger social network. Changing behaviour to better align one's consumption or practices with how one believes society ought to function can strengthen this moral identity and send a normative signal to other community members about desirable collective outcomes 59 , 60 . Engaging in costly personal actions aligned with collective goals can act as 'credibility-enhancing displays', increasing the persuasiveness of the actor. Kraft-Todd et al. 61 use this framework to explain why community ambassadors promoting solar panel installation were more effective if they had installed solar themselves. For climate change more generally, Attari, Krantz and Weber 62 , 63 found that the personal carbon footprints of researchers advocating climate policy affect their credibility and the impact of their message. Expressive force of law feedback To the extent that legal or judicial institutions are perceived as legitimate, changes in laws coming out of them can provide information about desirable or common attitudes within the population, feeding back to reinforce the attitudes or behaviour of the society that produced them. 
Tankard and Paluck 64 identify signals from governing institutions as one of three sources of information about community norms. Legal scholars have developed the theory of the ‘expressive function’ of law—the idea that law and regulation work on society not only by punishing undesirable behaviour but also by signalling what kind of behaviour is praiseworthy and what is reprehensible 65 , 66 , 67 . This signal is particularly important if individuals have imperfect information about the distribution of attitudes or behaviour within a reference population 67 , 68 . Several papers have found evidence for feedback from changes in laws and regulations to the perception of social norms, attitudes or behaviour, including the legalization of gay marriage 69 , 70 , smoking bans 71 and the COVID-19 lockdowns 72 . Endogenous cost-reduction feedback New energy technologies are often expensive, but also tend to exhibit price declines with installed capacity. This ‘learning-by-doing’ effect has been widely documented in the energy systems literature and is incorporated into some energy system models 73 . Falling costs are attributed to the combination of economies of scale, lower input costs and efficiencies in the production process and design 74 . This is a reinforcing feedback process, where small initial deployments, possibly driven by subsidies or regulatory requirements, lower costs and enable further deployment. Rubin et al. 75 reviewed estimated learning rates (that is, the fractional reduction in cost for a doubling of installed capacity) for 11 generation technologies and found ranges between −11% and 47% with many estimates falling in the 2% to 20% range. Temperature–emissions feedback The effects of climate change are expected to be widely felt across geographical regions and economic sectors. These impacts themselves might well affect the capacity of the economy to produce emissions. Most notably, some work has suggested large effects of warming on economic growth 76 , 77 , which could substantially reduce the level of economic production over time with a corresponding reduction in greenhouse gas emissions. However, other effects through the impact of warming on energy demand 78 or on the carbon intensity of energy production 79 , 80 might either partially offset or exacerbate this effect. Woodard et al. 8 provide a central estimate of these combined effects of a 3.1% decline in emissions per degree of warming, with upper and lower bounds ranging from −10.2% to 0.1%. The model developed here is designed to investigate the complex, emergent behaviour of the coupled climate–social system, including the feedback processes described above. Figure 1 shows the six major model components that operate across four interconnected scales: individual (cognition component), social (opinion and adoption components), national (policy component) and global (emissions and climate components). Descriptions of processes and key parameters in each component are given in Table 1 , and equations and parameters are fully documented in the Methods and the ‘Model documentation’ section of the Supplementary Information . Tipping points, interactions and thresholds The coupled feedback processes across model components described above can produce complex, highly nonlinear behaviour that depends sensitively on interactions across social, political and technical systems. 
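Before turning to those explorations, the learning-by-doing parameterization described above can be made concrete: a learning rate maps onto a power-law experience curve in cumulative installed capacity, as in this minimal sketch. All numbers below are illustrative, not values from the model.

```python
# Minimal sketch of a one-factor experience curve: unit cost falls by a fixed
# fraction (the learning rate) for every doubling of cumulative capacity.
import numpy as np

def experience_curve_cost(initial_cost, capacity_ratio, learning_rate):
    """Unit cost after cumulative capacity grows by `capacity_ratio`."""
    b = -np.log2(1.0 - learning_rate)   # experience exponent
    return initial_cost * capacity_ratio ** (-b)

# Three doublings of capacity at a 20% learning rate:
# 100 -> 100 * 0.8 ** 3 = 51.2 per unit.
print(experience_curve_cost(100.0, 8.0, 0.20))
```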
We begin by demonstrating this behaviour through three systematic explorations of the model parameter space, designed to highlight interactions across scales and model components. These values were chosen deliberately to highlight tipping-point and threshold behaviour in the model and are not necessarily the most likely or representative values. Constraints on the distribution of parameter values are discussed in the next section. Each panel in Fig. 2 shows model output, systematically varying 2–3 parameters while keeping all of the other model parameters fixed at the values given in Extended Data Table 1 . Fig. 2: Tipping points and thresholds in model behaviour. a , Illustration of a tipping point associated with individual adoption of behavioural change by climate policy supporters through the credibility-enhancing display feedback. b , The interactions between endogenous cost reductions in the energy sector and the opinion (fraction of climate policy supporters) and policy (status quo bias) components. c , The effect of the climate perception feedback and specific cognitive biases on public opinion. Model parameters that are not mentioned in each figure panel are kept constant for all of the model runs at the values shown in Extended Data Table 1 . Full size image Individual behavioural change Figure 2a demonstrates the potential for tipping points associated with individuals’ adoption of behavioural change. The primary effect of behaviour change on emissions is small, reflecting the limited control that individuals have over how societies produce and use energy. The COVID-19 lockdowns, a global and unprecedented change in mobility and consumption patterns, temporarily reduced global CO 2 emissions by somewhere between 9% and 17% (refs. 81 , 82 ), providing a possible upper bound on the effect of behavioural change on reducing carbon footprints. As emissions under our RCP7 baseline almost double by 2100, this is clearly insufficient to provide the deep decarbonization needed to stabilize global temperatures, even under universal adoption. However, Fig. 2a demonstrates that, under some conditions, the willingness of climate policy supporters to undertake costly personal pro-climate behavioural change can be decisive in triggering positive feedback processes that tip the system into a sustainable state. This interaction operates through the credibility-enhancing display feedback from adoption to opinion; if this feedback is small or absent, then no amount of individual action can drive major emissions reduction. However, if this feedback is strong, then behavioural change by climate policy supporters persuades more people to support climate policy, an effect that triggers a cascade of positive feedback processes in the opinion (social-conformity feedback) and mitigation (learning by doing) components that drive emissions to zero by 2100. Learning by doing Figure 2b illustrates interaction effects between technological change in the energy system, public opinion dynamics and the responsiveness of political institutions. On average, larger endogenous cost reductions lead to larger emissions reduction. However, as this technological feedback must be initiated by climate policy, there is a threshold effect—a large nonlinear change in model behaviour at a particular parameter value—associated with the fraction of the population supporting climate policy. Below a threshold level of support, there is no policy driving the initial deployment required to kickstart the cost-reduction feedback. 
Moreover, even beyond this threshold, higher levels of support lead to faster deployment and a larger effect of endogenous cost reductions (indicated by the steepening of the contour lines at the top of the figure). The two panels in Fig. 2b highlight how the characteristics of political institutions affect this relationship: those that are less responsive to public opinion (that is, high status quo bias) (Fig. 2b bottom) have a higher threshold for policy support and ramp up climate policy more slowly, leading to higher cumulative emissions over the twenty-first century, even in the presence of a strong cost-reduction feedback in the energy sector. Perception of climate change Figure 2c illustrates how information from the climate system might influence public opinion dynamics if observation of the weather affects support for climate policy (that is, the climate perception feedback). The existence of this feedback can have a decisive influence on opinion dynamics, as illustrated by the threshold behaviour at zero. Model behaviour is substantively different even for very small effects of perceived weather on climate policy opinion compared with model behaviour with no perception effect. However, this is moderated substantially in the presence of cognitive biases that can fully offset the cognition feedback. In model runs using a fixed baseline for the perception of temperature anomalies (Fig. 2c left), the population unanimously favours climate policy, regardless of biased assimilation, because the perceived weather changes are so large. The presence of shifting baselines (Fig. 2c right) complicates this effect. In particular, when biased assimilation is large, a stronger perception feedback leads to more climate policy opposers in 2050 compared with if that feedback were weaker or absent. This is because, if baselines shift and people compare current weather only to the past 8 years, they will periodically perceive unusually cold anomalies due to natural weather variability, even though temperatures are warm relative to a fixed, preindustrial baseline 55 . In the presence of biased assimilation, these perceived cold anomalies reinforce the belief of climate policy opposers in their position, leading to persistence of this opinion group. Constraining the parameter space The illustrations in the previous section highlight how coupled socio-politico-technical feedback processes across components and scales in the climate–social system can produce nonlinear behaviour leading to a wide range of twenty-first century emission trajectories. This complexity characterizes the space of possible climate outcomes when climate policy is modelled as an endogenous product of more fundamental social and political forces. However, identifying outcomes that are more or less likely within this range requires placing some bounds on the model parameters. The model is a highly aggregated and abstracted representation of the coupled climate–social system, meaning that parameterization is not straightforward. We performed two exercises based on hindcasting performance to partially and probabilistically constrain the parameter space. 
The first exercise used the population-weighted time series of public opinion on climate change in nine OECD countries (the USA, Canada, France, Germany, Italy, Spain, the UK, Australia and Japan) between 2013 and 2020 from the Pew Research Center 83 and the emissions-weighted average carbon price for the same countries over the same period 84 to jointly constrain nine parameters in the cognition, opinion and policy components. The second exercise used recent estimates of the effect of Swedish carbon prices on emissions to constrain two parameters in the emissions component 85 . Although Sweden accounts for only a tiny fraction of global emissions, the Swedish case is important because Sweden has had the world's highest carbon price for several decades 84 , enabling estimates of the effect of high and sustained carbon prices on emissions. As the model includes a single abatement cost function, this exercise implicitly assumes that the Swedish abatement costs are more widely generalizable, a potential weakness of this calibration point. For each hindcasting exercise, the relevant model components are run in a Monte Carlo mode, sampling independently from the set of possible parameter values. Model output for each run is then compared to the observed time series, and parameter combinations are weighted on the basis of the distance between model output and observed data ( Methods ). Differences between the unweighted and weighted parameter distributions provide an indication of the extent to which observations constrain the parameter values. Extended Data Figures 1 and 2 give the results of these exercises. Extended Data Figure 1a shows how the dynamics of public opinion provide some constraint on both the social-conformity and cognition feedback. Public opinion on climate policy over the last decade suggests a population socially sorted within opinion groups (that is, a slightly higher network homophily parameter) with relatively slow movement between groups (that is, low persuasive force) and a relatively small role for the individual perception of climate change in opinion formation (a low evidence parameter). The exercise is less informative regarding parameters in the policy component, although there is some evidence of status quo bias in the political system. The exercise also constrains the covariance between parameters (Extended Data Fig. 1b ). For example, there is covariance between the network homophily, persuasive force and shifting-baseline parameters—consistency with observed changes in OECD climate opinion over time requires that opinion groups are socially separated, that movement between opinion groups is slow or that cognitive biases like shifting baselines limit the role of observed climate change in driving public opinion. Extended Data Figure 2 shows the results of the second hindcasting exercise on the emissions parameters, which suggests a low value for the contemporaneous effect of policy on emissions (maximum mitigation rate), but is uninformative about the persistence of those emissions reductions (maximum mitigation time). Future emissions pathways We used the partially constrained parameter space to probabilistically examine emissions trajectories over the twenty-first century. We performed 100,000 runs of the model, drawing from the joint distribution of the set of hindcast parameters and sampling uniformly over an additional 11 parameters, mostly within the adoption component (with the exception of a triangular distribution for the temperature–emissions feedback based on Woodard et al. 8 ). 
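The distance-based weighting used in the hindcast exercises above can be illustrated with a self-contained sketch in the spirit of approximate Bayesian computation: sample parameters from the prior, run the model, and weight each draw by a kernel of the distance between simulated and observed series. The two-parameter stand-in opinion model, the synthetic "observed" series, and the Gaussian kernel bandwidth below are all illustrative assumptions; the actual model equations and weighting scheme are documented in the Methods and Supplementary Information.

```python
# Sketch of distance-based importance weighting of Monte Carlo parameter draws.
import numpy as np

rng = np.random.default_rng(0)
years = np.arange(2013, 2021)
observed = 0.55 + 0.01 * (years - 2013)   # stand-in opinion time series

def opinion_model(persuasive_force, homophily):
    """Toy opinion component: support drifts upward, damped by homophily."""
    return 0.5 + persuasive_force * (1.0 - homophily) * (years - 2013) / 8.0

n_runs, bandwidth = 10_000, 0.05
prior_draws = rng.uniform(0.0, 1.0, size=(n_runs, 2))
weights = np.empty(n_runs)
for i, (pf, h) in enumerate(prior_draws):
    rmse = np.sqrt(np.mean((opinion_model(pf, h) - observed) ** 2))
    weights[i] = np.exp(-0.5 * (rmse / bandwidth) ** 2)
weights /= weights.sum()

# Comparing the weighted parameter distribution with the uniform prior shows
# how much the observations constrain each parameter.
print(np.average(prior_draws, axis=0, weights=weights))
```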
The model is initialized using 2020 public opinion 83 and emissions data and run until 2100, with parameter values fixed for each model run. We used k -means clustering to group together model runs with similar trajectories of climate policy and emissions over the twenty-first century, identifying five distinct pathway types ( Methods ). A focus on clusters of similar policy and emissions pathways strikes a balance between exploring and explaining the diverse range of model behaviours while avoiding an undue focus on either the central tendency or the extremes of model outcomes. Figure 3 shows the mean policy and emissions trajectories for the five clusters. The model parameter values characteristic of each cluster indicate the socio-politico-technical states determining each policy–emissions trajectory. These parameter values are shown visually in Extended Data Fig. 3 . Table 2 describes the different pathways and gives end-of-century warming under the mean emissions scenario in each cluster. Fig. 3: Future emissions pathways in the coupled climate–social system. Policy (left) and global CO 2 emissions (right) trajectories from 100,000 Monte Carlo runs of the coupled climate–social model, clustered into 5 clusters using k -means clustering. The line thickness corresponds to the size of the cluster. Source data Full size image Table 2 Descriptions of distinguishing characteristics, frequency and temperature outcomes Full size table The modal policy–emissions trajectory emerging from the model, accounting for 48% of model runs, has global emissions peaking in the 2030s and dropping steeply over the 2040–2060 period, resulting in 2100 warming of 2.3 °C above 1880–1910 levels. The 2030–2050 emissions pathway displays a perhaps remarkable similarity to recent estimates of the effect of current climate policies or stated nationally determined contributions. Sognnaes et al. 11 estimate that these result in fossil-fuel CO 2 emissions of 30–36 Gt CO 2 in 2030 and 23–40 Gt CO 2 in 2050. Assuming that fossil fuels constitute 90% of total CO 2 emissions, the equivalent values for the modal trajectory are 38 Gt CO 2 in 2030 and 30 Gt CO 2 in 2050. This congruency arises despite the fact that current and stated climate policies are not input into the model and do not constrain model behaviour. The second and third most frequent clusters highlight the role of the feedback processes discussed above. The 'aggressive action' trajectory is characterized by a strong social-conformity feedback in the opinion component through a high persuasive force parameter, leading to rapid diffusion of support for climate policy that—combined with effective and globally deployed mitigation technologies—drives emissions down faster than in the modal path, limiting warming to below the 2 °C temperature target. By contrast, the 'technical challenges' trajectory is characterized by a weak or absent learning-by-doing cost-reduction feedback within the energy sector, as well as expensive and ineffective mitigation technologies. This pathway has the same climate policy trajectory as the modal path, but the absence of the technical-change feedback driving costs down over time leads to much greater emissions and warming of 3 °C by 2100. Two other trajectories ('delayed recognition' and 'little and late') exhibit multi-decade delays in climate policy, producing higher emissions over the century. 
These trajectories (which together constitute just over 5% of model runs) tend to be characterized by weak social-conformity feedback in public opinion (through high network homophily and low persuasive force), cognitive biases limiting any effect of perceived climate change on support for climate policy and an unresponsive political system (high status quo bias) that slows climate policy even as public support increases. Examining the set of parameters that distinguish the clusters of policy and emissions trajectories from each other (Table 1 and Extended Data Fig. 3) reveals an important role for parameters associated with the opinion, mitigation, cognition and policy components, particularly the strength of social conformity (for example, network homophily and persuasive force), the strength and effectiveness of the mitigation technology feedback (for example, learning by doing, mitigation rate and lag time), the responsiveness of political institutions (for example, status quo bias) and the role of cognitive biases (for example, shifting baselines and biased assimilation). Parameters from the adoption component notably do not tend to be distinguishing characteristics of policy and emissions pathways. Thus, although the model can exhibit tipping-point behaviour in which individual adoption of behavioural change can be decisive in driving the system towards zero emissions (Fig. 2a), the particular conditions necessary for this model behaviour do not appear to be common after constraining the model parameters using the hindcasting exercise. Drivers of variance in model behaviour were further explored by fitting random-forest models to two outputs of the 100,000 Monte Carlo runs of the calibrated model: policy in 2030 and cumulative emissions over 2020–2100. Normalized values of the 22 model parameters were used as explanatory variables (a minimal sketch of this analysis is shown below). Extended Data Figure 4 gives the minimum-depth distributions for the ten most important variables for each model. As with the clustering analysis, variables related to opinion dynamics (persuasive force and network homophily), responsiveness of the political system (status quo bias and political-interest feedback), individual perception of climate change (shifting baselines and evidence effect) and mitigation technologies emerge as important in explaining variation in policy and emissions trajectories over the twenty-first century. Discussion and conclusions The trajectory of global greenhouse gas emissions over the twenty-first century will result from the complex interaction of technologies, governments, markets, individuals and communities. Although a range of disciplines have documented relevant feedback processes, the dynamics of the full system will depend on connections across components and scales. These coupled feedback processes can give rise to complex behaviour with, in some cases, sensitive dependence on parameter values and initial conditions. Still further uncertainties and more complex behaviour could emerge if parameter values were allowed to drift or change over time, for example, owing to the evolution or reform of political institutions, a dynamic not explored in this analysis. However, despite the wide range of plausible behaviour, systematic exploration of the model parameter space, combined with observational constraints on parameter values where possible, can bound the space of probable outcomes. 
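As referenced above, the variance-drivers analysis can be sketched with the randomForest and randomForestExplainer packages named in the Methods. The data here are simulated stand-ins: the toy response depends on a few parameters by construction, whereas the paper fits the same models to outputs of 100,000 runs of the calibrated model.

```r
# Minimal sketch of the variance-drivers analysis on simulated stand-in data.
library(randomForest)
library(randomForestExplainer)

set.seed(7)
n <- 5000                                    # subsample size for illustration
X <- as.data.frame(matrix(runif(n * 22), ncol = 22))
names(X) <- paste0("param_", 1:22)           # normalized model parameters

# Toy response: cumulative 2020-2100 emissions driven by a few parameters
cum_emissions <- 2000 - 800 * X$param_1 + 500 * X$param_2 +
                 300 * X$param_1 * X$param_3 + rnorm(n, sd = 50)

rf <- randomForest(x = X, y = cum_emissions, ntree = 500, localImp = TRUE)

# Minimum-depth distributions: variables splitting closer to the root are
# more important in explaining variance (cf. Extended Data Fig. 4)
md <- min_depth_distribution(rf)
plot_min_depth_distribution(md, k = 10)      # ten most important variables
```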
Despite uncertainties in many parameters, none of the policy–emissions clusters that we identified represents a pure business-as-usual world without climate policy. Even the highest-emission cluster produces warming in 2100 that is lower than the RCP7 business-as-usual baseline of 3.9 °C. The vast majority of runs (98%) produce warming more than half a degree below that baseline, although these warming estimates are sensitive to uncertainties in the climate system, including the climate sensitivity and the representation of carbon-cycle feedback, as well as the treatment of non-CO 2 greenhouse gases (Methods). The identified emissions trajectories, even the aggressive action scenario, fail to meet the more ambitious Paris Agreement target of limiting warming to 1.5 °C above pre-industrial levels. This result is not surprising, as all 1.5-°C-consistent emissions scenarios from energy system models include the widespread deployment of negative-emissions technology, which is not represented in our model 86. However, we do estimate a substantial probability of meeting the 2 °C Paris Agreement target: 28% of our Monte Carlo runs result in 2091–2100 warming below 2 °C above 1880–1910 levels. We therefore find that socio-politico-technical feedback processes can be decisive for climate policy and emissions outcomes. Yet they require a distinct and deliberate modelling approach. Exploring emissions pathways as an endogenous outcome of the coupled climate–social system differs from the typical use of emissions scenarios as exogenous inputs into either energy–economic or general circulation models. This paper seeks to explain alternative emissions and policy trajectories as the product of more fundamental social, political, technical and economic processes. Doing so requires an integrated multidisciplinary perspective: almost all of our identified clusters have distinguishing parameters from more than one model component, implying that the interaction between these subsystems is key in driving variance in potential twenty-first-century emissions pathways. Further work to enhance this modelling framework could improve the climate model to better represent non-CO 2 forcing and carbon-cycle feedback and expand the carbon pricing data used for calibration of the policy and mitigation components. Methods Model components and the feedback processes between and within components were identified from a review of literature across relevant fields, including social and cognitive psychology, economics, sociology, law, political science and energy systems engineering 8, 24, 26, 27, 31–76, 78. The climate–social model was developed using relationships and feedback processes identified from this review (illustrated in Fig. 1, described in Table 1 and documented in the ‘Model documentation’ section of the Supplementary Information). Specific parameterizations or functional forms were derived from the literature where available. These are (1) parameterization of the temperature–emissions feedback using Woodard et al. 8; (2) parameterization of the shifting-baseline effect using Moore et al. 55; (3) parameterization of the learning-by-doing effect using Rubin et al. 75;
and (4) use of a logistic uptake curve to represent uptake of individual behaviour change, as commonly used in the technology-adoption literature 95. However, in many cases, only qualitative descriptions or relationships were available in the literature. In these cases (the normative force of law feedback, political interest feedback, social norm effect, social homophily, status quo bias, credibility-enhancing display feedback and biased assimilation), we attempted to translate the relationships into appropriate functional forms, described in more detail in the ‘Extended model documentation’ section of the Supplementary Information. The behaviour of the model, particularly the potential for cross-component feedback processes and tipping points, was investigated using systematic sweeps of the parameters shown in Fig. 2, keeping all of the other model parameters fixed. Other model parameters for this part of the analysis were deliberately chosen to demonstrate the existence of tipping or threshold behaviour, following an informal, qualitative exploration of the parameter space, and are given in Extended Data Table 1. The parameters varied in this analysis were chosen to exemplify thresholds and tipping-point behaviour, as well as the interactions that moderate those effects. Two hindcasting exercises were conducted to partially constrain key model parameters (given in Extended Data Figs. 1 and 2) using historical data. The first used time series of public opinion on climate change and carbon prices from 2013 to 2020 for nine OECD countries (the USA, Canada, Japan, Australia, the UK, Germany, France, Italy and Spain) to jointly constrain nine parameters in the opinion, policy and cognition components. Opinion data came from the Pew Research Center 83, which asked respondents whether they thought global climate change was a major threat, a minor threat or not a threat. These three categories were mapped onto those supporting, neutral towards or opposed to climate policy, and data from the nine countries were aggregated into a single population-weighted time series 96. Carbon price data came from the World Bank Carbon Pricing Dashboard, and we calculated a single, emissions-weighted carbon price for the nine OECD countries between 2013 and 2020 (ref. 84). This constrains the calibration to explicit carbon prices based on taxes or emissions trading schemes, ignoring implicit carbon prices arising through other forms of climate and energy regulation, for which data are not readily available. The model was initialized using carbon prices and the opinion distribution from 2013 and then run 20,000 times, sampling from the distributions over nine model parameters (given in Extended Data Fig. 1). We used uniform prior distributions over the parameters, except in a couple of cases in which parameters are structurally related to each other (specifically, the ‘weak persuasive force’ is constrained to be smaller than the ‘strong persuasive force’, and the ‘political interest feedback’ is constrained to be smaller than the ‘status quo bias’) or where prior evidence suggests non-uniform distributions. Specifically, we used informative prior distributions for the network homophily parameter, placing higher weight on larger values (that is, more social separation between opinion groups 97, 98), and for the shifting baselines parameter, placing more weight on the existence of shifting baselines 55. 
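For illustration, the constrained prior sampling might look like the following R sketch. The parameter names, the beta-shaped informative priors and the device of sorting paired uniform draws (equivalent to rejection sampling under an ordering constraint) are assumptions for illustration, not the paper's exact specification.

```r
# Minimal sketch of the 20,000-draw prior sampling with structural
# constraints; names, informative-prior shapes and ranges are illustrative.
set.seed(11)
n <- 20000

# Ordering constraints: sorting two iid uniform draws is equivalent to
# rejection sampling under the constraint weak < strong
u1 <- runif(n); u2 <- runif(n)
weak_pf   <- pmin(u1, u2)        # weak persuasive force
strong_pf <- pmax(u1, u2)        # strong persuasive force
v1 <- runif(n); v2 <- runif(n)
pol_int <- pmin(v1, v2)          # political interest feedback
sq_bias <- pmax(v1, v2)          # status quo bias

theta <- data.frame(
  weak_pf, strong_pf, pol_int, sq_bias,
  homophily = rbeta(n, 3, 1.5),  # informative: more weight on larger values
  shift_bl  = rbeta(n, 3, 1.5),  # informative: shifting baselines likely
  evidence  = runif(n),          # remaining parameters kept uniform here
  p8 = runif(n),
  p9 = runif(n)
)
```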
For each model run, we defined a probability weight for the parameter combination based on the run's error in predicting 2014–2020 opinion and policy (that is, carbon prices) relative to the set of all 20,000 runs (details are provided in the ‘Weighting scheme for hindcast parameter constraints’ section of the Supplementary Information). Initial distributions and weighted distributions based on hindcasting performance are given in Extended Data Fig. 1a. A second tuning exercise was performed for two parameters in the emissions component (maximum mitigation rate and maximum mitigation time) using evidence from Andersson 85 on the effect of the Swedish carbon price over the period 1991–2005. Andersson estimates that carbon pricing reduced emissions by 12.5% in 2005. The emissions component was forced with observed Swedish carbon prices over this time period and run 10,000 times, sampling from independent uniform distributions over the two mitigation parameters. A weighting scheme based on the difference between the modelled mitigation rate in 2005 and the estimated effect of the policy in Andersson 85 was applied to the initial uniform distributions, shown in Extended Data Fig. 2. As with the first calibration exercise, this relies on explicit carbon tax levels only, ignoring the effects of fuel taxes or the shadow costs of other climate or energy regulation. To evaluate the effectiveness of the parameter-tuning process for parameters in the opinion, policy and cognition components, we also performed a leave-one-out cross-validation of the model. Component parameters were tuned after dropping data from each year between 2014 and 2020 in sequence, and the trained model was then run 20,000 times in Monte Carlo mode to predict the value for the missing year. We find that the average out-of-sample root mean squared error is US$2.50 for the carbon price and 5.4 percentage points for the combined neutral and opposed opinion groups. Finally, a full Monte Carlo analysis of the model was performed. Parameters partly constrained in the hindcasting exercises were drawn from the weighted distributions shown in Extended Data Figs. 1 and 2. An additional 11 parameters (primarily in the adoption component, and listed in the ‘Monte Carlo parameter sampling details’ section of the Supplementary Information) were drawn from independent uniform distributions (with the exception of a triangular distribution for the temperature–emissions feedback based on Woodard et al. 8). The model was run 100,000 times, initialized using the 2020 opinion distribution and run until 2100. Clusters of similar policy and emissions trajectories were identified by concatenating the two time series for each model run, scaling each column and applying k-means clustering to the resulting data frame, as sketched below. We chose five clusters based on the reduction in within-cluster variance for two to nine clusters (Extended Data Fig. 5). Characteristic parameter values for each cluster (Extended Data Fig. 3) were identified by first scaling the parameter values across all runs and then averaging the scaled values within each cluster. Values close to zero mean that the model runs within the cluster have parameter values close to the ensemble average, whereas high or low values suggest sorting of those ensemble runs into the cluster, and that these values are therefore important in producing the policy–emissions trajectory associated with that cluster. 
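A minimal R sketch of this clustering step, using simulated trajectories in place of the model output (the run count, series length and parameter matrix are stand-ins):

```r
# Minimal sketch of the trajectory clustering on simulated stand-in data;
# the paper applies the same steps to the policy and emissions series of
# 100,000 model runs.
set.seed(3)
n_runs  <- 2000                    # illustration; the paper uses 100,000
n_years <- 81                      # 2020-2100

policy    <- matrix(rnorm(n_runs * n_years), nrow = n_runs)
emissions <- matrix(rnorm(n_runs * n_years), nrow = n_runs)

# Concatenate the two series for each run and scale each column
traj <- scale(cbind(policy, emissions))

# Inspect the drop in within-cluster variance over 2-9 clusters to choose k
wss <- sapply(2:9, function(k)
  kmeans(traj, centers = k, nstart = 10, iter.max = 50)$tot.withinss)

km <- kmeans(traj, centers = 5, nstart = 25, iter.max = 50)

# Characteristic parameter values: scale parameters across all runs, then
# average the scaled values within each cluster (cf. Extended Data Fig. 3)
params        <- matrix(runif(n_runs * 22), nrow = n_runs)
cluster_means <- aggregate(as.data.frame(scale(params)),
                           by = list(cluster = km$cluster), FUN = mean)
```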
The temperature outcomes for the emissions pathways reported in Table 2 depend on how forcing from non-CO 2 greenhouse gases is assumed to change with CO 2 emissions. Following the 2016 DICE model 99, non-CO 2 forcings appear in the model as an ‘exogenous forcing’ term applied on top of radiative forcing from CO 2 . We allow this forcing to vary with CO 2 emissions based on a fitted relationship between reductions in CO 2 and reductions in CH 4 and N 2 O observed in the SSP-RCP emissions database 100, which suggests that these gases are reduced at approximately half the rate of CO 2 (Extended Data Fig. 6). The sensitivity of 2091–2100 temperature estimates to this modelling choice is shown in Extended Data Table 2. Moreover, the DICE climate model, used in the coupled climate–social model and to estimate warming in Table 2, has a slow temperature response and lacks representation of carbon-cycle feedback 101. Thus, in Extended Data Table 2, we also show 2091–2100 warming under the five emissions trajectories using the MAGICC v.7 climate model, which includes saturation of the land and ocean carbon sinks, offers a more complete treatment of non-CO 2 forcing and is calibrated to reproduce the behaviour of much larger general circulation models 94, 102. End-of-century warming on the basis of the DICE model is well within the uncertainty range based on 100 Monte Carlo runs of MAGICC. The largest difference from the median MAGICC warming is 0.2 °C, for the aggressive action pathway; all of the other scenarios are within 0.1 °C of the median. The coupled climate–social model is coded in R (v.3.6.3). Model output and behaviour were analysed using the tidyverse, randomForest and randomForestExplainer packages. Figure 3 and Extended Data Figs. 3, 4 and 6 were made using the ggplot2 package. Reporting summary Further information on research design is available in the Nature Research Reporting Summary linked to this paper. Data availability All data used in the Article are publicly available online ( ). Source data are provided with this paper. Code availability Model code and code to reproduce the analysis in this Article are provided in an online repository ( ).
Politics and society largely dictate climate policy ambitions and therefore the trajectory of greenhouse gas emissions, yet climate change models and projections rarely include political and social drivers. A study from the University of California, Davis, simulated 100,000 possible future policy and emissions trajectories to identify relevant variables within the climate-social system that could impact climate change in this century. The study, published today in the journal Nature, indicates that public perceptions of climate change, the future cost and effectiveness of climate mitigation technologies, and how political institutions respond to public pressure are all important determinants of the degree to which the climate will change over the 21st century. "Small changes in some variables, like the responsiveness of the political system or the level of public support for climate policy, can sometimes trigger a cascade of feedbacks that result in a tipping point and drastically change the emissions trajectory over the century," said lead author Frances C. Moore, an assistant professor with the UC Davis Department of Environmental Science and Policy. "We're trying to understand what it is about these fundamental socio-political-technical systems that determines emissions." Coupling climate and policy The authors note that the biggest uncertainty in understanding long-term climate impacts is what emissions will be in the future. Most climate and energy modeling treats policy as something external to the models, but to prepare for climate impacts, adaptation planners need to understand the probability of different temperature outcomes for future decades. For this study, the authors modeled 100,000 possible future pathways of climate policy and greenhouse gas emissions. They used an integrated, multidisciplinary model that connected data across a wide range of social, political and technical fields. Model inputs included public and political support, social perceptions of climate change, how quickly collective action or carbon pricing responds to changes in public opinion, and other factors. The pathways fell into five clusters, with warming in 2100 varying between 1.8 and 3.6 degrees Celsius above the 1880-1910 average, but with a strong probability of warming between 2 and 3 degrees Celsius at the end of the century. Key drivers The results indicate that people's perceptions and social groups, improvements in mitigation technology over time, and the responsiveness of political institutions are key drivers of future emissions, even more so than individual actions. The study is not prescriptive. Rather, it examines what it is about the social-political-technical system that determines future emissions, integrates that information into existing climate models, and connects them across individual, community, national and global scales. "Understanding how societies respond to environmental change, and how policy arises from social and political systems, is a key question in sustainability science," Moore said. "I see this as pushing that research, and also being useful for climate adaptation and impacts planning." The study's co-authors are Katherine Lacasse of Rhode Island College, Katharine Mach of the University of Miami, Yoon Ah Shin of Arizona State University, Louis Gross of the University of Tennessee, and Brian Beckage of the University of Vermont.
10.1038/s41586-022-04423-8
Earth
Locating poor air quality in cities
Jennifer Bailey et al, Localizing SDG 11.6.2 via Earth Observation, Modelling Applications, and Harmonised City Definitions: Policy Implications on Addressing Air Pollution, Remote Sensing (2023). DOI: 10.3390/rs15041082
https://dx.doi.org/10.3390/rs15041082
https://phys.org/news/2023-02-poor-air-quality-cities.html
People in big cities breathe bad air: air laden with particulate matter and other pollutants that pose health risks to urban residents. Researchers led by Dr. Martin Ramacher of the Hereon Institute of Coastal Environmental Chemistry, in collaboration with the National Observatory of Athens, are now helping to make the determination of concentrations of particulate matter smaller than 2.5 micrometers (PM2.5) more accurate. To do this, they used openly available EU-wide Copernicus satellite data in combination with the EPISODE-CityChem chemical transport model. The system developed at Hereon was able to model hotspots of bad air at a resolution of 100 × 100 meters, using Hamburg as an example. The calculated particulate matter concentrations are combined with population data and can thus simultaneously indicate areas with poor air quality and high population density. These areas are of particular interest for achieving air quality improvements. The pioneering aspect of the method is the combination of different satellite data, which are freely available for all of Europe, with city-scale model calculations. For the example year 2016, the World Health Organization (WHO) had previously reported a city-wide mean of 14 micrograms per cubic meter; the new calculations show that Hamburg's urban average was actually lower, at 11 to 12 micrograms per cubic meter. However, the detailed calculations also show that pollution levels are distributed unevenly across the city and can rise to 17 micrograms per cubic meter in some neighborhoods. "In particular, we were able to determine elevated annual mean values for particulate matter concentrations for the sample year 2016 on busy roads and in the industrial area near the port in the south of the Elbe River. While relatively few people live near the industrial areas, we were able to demonstrate that many people live near heavily traveled roads and are affected by elevated concentrations. Such consideration of air pollution hotspots has so far been missing from the UN indicator. But our approach, in line with the UN indicator, makes it possible to record pollution levels more accurately and can help local decision-makers to initiate countermeasures," says Ramacher. Overall, Hamburg is below the European average for particulate matter pollution compared with other major European cities and does not exceed the annual EU limit of 20 micrograms per cubic meter for particulate matter smaller than 2.5 micrometers (PM2.5). The SDG 11.6.2 indicator was developed by the United Nations to address the threat to public health from urban air pollution globally. The World Health Organization published updated guidelines for air quality benchmarks in late September 2021 to respond to the threat of pollution, whose effects include an estimated seven million premature deaths worldwide each year and illness for many millions more. Air pollution is still a major health problem in Europe as well. Localizing the SDG 11.6.2 indicator brings challenges, mainly because of the diversity of causes of air pollution, for example, the wide range of emission sources and other influencing factors. The often sparse network of monitoring sites cannot accurately capture this spatial complexity. The study, conducted jointly by Hereon and the National Observatory of Athens, aims to advance the discussion on the potential of the SDG 11.6.2 indicator for local decision-making. 
This is because detailed inner-city information on pollution and population is needed to fill the research gap that has existed to date and eventually to improve air quality in cities. The study is published in the journal Remote Sensing.
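SDG indicator 11.6.2 is defined as the annual mean concentration of fine particulate matter weighted by population. As a rough illustration of the kind of calculation the study localizes, the following R sketch computes a population-weighted PM2.5 mean and a hotspot share on a made-up 100 m grid; all values are invented, not the study's data.

```r
# Rough illustration of a population-weighted PM2.5 exposure calculation in
# the spirit of SDG 11.6.2; all grid values are invented.
set.seed(5)
n_cells <- 10000                        # e.g. a 100 x 100 grid of 100 m cells

pm25 <- pmax(rnorm(n_cells, mean = 11.5, sd = 2.5), 0)  # ug/m3 per cell
pop  <- rpois(n_cells, lambda = 180)                    # residents per cell

# City-wide population-weighted annual mean (the SDG 11.6.2 quantity)
pw_mean <- sum(pm25 * pop) / sum(pop)

# Hotspot view: residents exposed above a chosen threshold
threshold   <- 15
share_above <- sum(pop[pm25 > threshold]) / sum(pop)

cat(sprintf("Population-weighted mean: %.1f ug/m3\n", pw_mean))
cat(sprintf("Share of residents above %g ug/m3: %.1f%%\n",
            threshold, 100 * share_above))
```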
10.3390/rs15041082
Biology
Lab shows phage attacks in new light
Charles L. Dulberger et al, Mycobacterial nucleoid-associated protein Lsr2 is required for productive mycobacteriophage infection, Nature Microbiology (2023). DOI: 10.1038/s41564-023-01333-x Journal information: Nature Microbiology
https://dx.doi.org/10.1038/s41564-023-01333-x
https://phys.org/news/2023-03-lab-phage.html
Abstract Mycobacteriophages are a diverse group of viruses infecting Mycobacterium, with substantial therapeutic potential. However, as this potential becomes realized, the molecular details of phage infection and the mechanisms of resistance remain ill-defined. Here we use live-cell fluorescence microscopy to visualize the spatiotemporal dynamics of mycobacteriophage infection in single cells and populations, showing that infection depends on the host nucleoid-associated protein Lsr2. Mycobacteriophages preferentially adsorb at Mycobacterium smegmatis sites of new cell wall synthesis and, following DNA injection, Lsr2 reorganizes away from host replication foci to establish zones of phage DNA replication (ZOPR). Cells lacking Lsr2 proceed through to cell lysis when infected but fail to generate the consecutive phage bursts that trigger epidemic spread of phage particles to neighbouring cells. Many mycobacteriophages code for their own Lsr2-related proteins, and although their roles are unknown, they do not rescue the loss of host Lsr2. Main Bacteriophages are the most numerous biological entities in the biosphere 1, 2 and possess unparalleled genetic diversity 3. Host factors needed for phage replication are poorly understood, but mutations in both receptors and intracellular functions can confer phage resistance, and phages must co-evolve in response. This bacterial–phage arms race spans billions of years and has dominantly shaped the coevolution of phages and their hosts 1, 2, 4. Together, these factors contribute to viral host range, a key determinant of phage therapeutic potential 5–10. Adsorption to the cell surface is the first step in the phage life cycle. Prevention of adsorption through modification of the surface receptors serving as phage entry points is the first line of defence for bacteria and a prevalent target for phage resistance mechanisms 11. Bacteria thwart phages by acquiring mutations in genes responsible for synthesizing receptors and in pathways that secrete extracellular matrix 11. Resistance can also result from mutations in host genes that are critical for other stages of the phage life cycle, including those required for phage DNA replication and assembly into mature phage particles 11. Mycobacteriophages (phages infecting Mycobacterium hosts) are well studied genomically and are highly diverse. Many are temperate 12 and enter lysogeny, in which they replicate passively with the host chromosome 13–16. Mycobacteriophages have considerable therapeutic potential for Mycobacterium tuberculosis and Mycobacterium abscessus infections, which are challenging to control owing to intrinsic or acquired antibiotic resistance and require prolonged treatment with harsh antibiotic regimens 8, 12, 17, 18. The more than 2,000 sequenced mycobacteriophage genomes have been grouped into sequence-related clusters (Clusters A, B and so on), many of which have been divided into subclusters (Subclusters A1, A2 and so on) 19. Currently, there are 31 clusters (Clusters A–AE) and 7 ‘singletons’, each of which has no close relatives 12, 19, 20. All are tailed, double-stranded DNA phages with either siphoviral or myoviral virion morphotypes 12, 21. Their narrow host range among strains of nontuberculous mycobacteria limits their broad therapeutic potential 22, but little is known about the receptors or other determinants of specificity 23. 
Lsr2 is a nucleoid-associated protein, encoded by the lsr2 gene, that is conserved in mycobacteria and other actinomycetes 24. M. tuberculosis Lsr2 is composed of two domains: an N-terminal oligomerization domain that promotes nucleoprotein filament formation and a C-terminal DNA-binding domain that binds preferentially to AT-rich DNA 25, 26. Similar to other bacterial nucleoid-associated proteins, Lsr2 polymerizes around DNA to organize and compact bacterial chromatin 27 and mediates DNA bridging 28. Lsr2 is essential for M. tuberculosis growth but not for planktonic growth of M. smegmatis, although it is required for biofilm formation 29 and conjugal transfer 30. Lsr2 is a global regulator of cell wall synthesis genes 26, 31–33 and virulence genes of M. tuberculosis and M. abscessus 32, 34, and contributes to antibiotic resistance 35. Here we show that Mycobacterium Lsr2 is required for productive infection by many mycobacteriophages. We observed that phages adsorb specifically to sites of new cell wall synthesis and that Lsr2 reorganizes away from chromosomal DNA foci to zones of phage DNA replication (ZOPR). We demonstrate that loss of Lsr2 leads to poor ZOPR establishment, phage resistance and interruption of population-level viral epidemics. Results Disruption of M. smegmatis lsr2 confers phage resistance Resistance to phage infection can be mediated by bacterial surface changes that result in defective binding and DNA injection, or by post-injection processes that result in either cell death or inhibition of phage replication 36 (Fig. 1a). Mechanisms of resistance to Cluster K mycobacteriophages and phage Fionnbharth (Subcluster K4) are of specific interest because these phages have been proposed for tuberculosis therapy 18 and the related Cluster K2 phage TM4 is widely used for specialized transduction 37, diagnostic reporter phages 38 and transposon delivery 39, 40. We used lytic derivatives of Fionnbharth (FionnbharthΔ47 or FionnbharthΔ45Δ47, deleted for the repressor or for both the integrase and the repressor, respectively 22) to isolate resistant mutants of M. smegmatis mc2 155. Five resistant strains were recovered, purified and designated LM11, LM12, LM13, LM14 and LM15. We confirmed them to be Fionnbharth-resistant (Fig. 1b) but sensitive to unrelated phages such as Bxb1 (Subcluster A1) 41 (Fig. 1b) and BPs (Subcluster G1) 42. The five genomes were completely sequenced and compared with that of the parent strain. LM11 has non-synonymous mutations in several genes, including those encoding NAD(P)-dependent alcohol dehydrogenase (MSMEG_4039) and the glycogen debranching enzyme GlgX (MSMEG_3186). LM12 and LM13 share the same mutation in a methylmalonyl CoA mutase gene (MSMEG_4881) but have additional mutations elsewhere. Strain LM15 has three mutations resulting in amino acid substitutions: D113G (A3649339G) in MSMEG_3578, E392A (C6970708A) in the ABC transporter permease subunit MSMEG_6909 and G93V (C6169539A) in lsr2 (MSMEG_6092). Strains LM11–LM13 and LM15 were not characterized further. Strain LM14 contains only a single difference from the parent strain: insertion of a resident IS1549 transposon into the lsr2 gene MSMEG_6092 (Fig. 1c and Extended Data Fig. 1). The insertion is within the Lsr2 C-terminal DNA-binding domain, with a 13 bp target duplication in codons 93–97. The predicted mutant product is 102 aa long, and the C-terminal 17 residues are lost. The DNA-binding activity of the mutant Lsr2 protein is predicted to be lost, although the oligomerization domain may remain functional. Fig. 1: M. 
smegmatis Lsr2 is required for infection of diverse mycobacteriophages. a , Schematic of the mycobacteriophage lytic life cycle and resistance mechanisms. Infection begins with adsorption of phage particles to surface bacterial receptors and DNA injection into the host cell. Phage receptors are enriched at the actively growing poles and septa of mycobacterial cells. After DNA injection, the phage hijacks the host replication, transcription and translation machinery to produce and assemble progeny within the phage replication domain (dashed line). The phage expresses lytic enzymes that digest and lyse the host cell envelope, liberating the mature phage particles to initiate new infections. Bacteria resist phage infection via phage defence mechanisms (red text) such as CRISPR, and phage resistance can arise de novo by mutating host bacterial genes (such as lsr2 ) that are essential for phage propagation. b , M. smegmatis strains LM11–LM15 were isolated as resistant to infection by mycobacteriophage Fionnbharth, using a lytic derivative of the parent temperate phage. Tenfold serial dilutions of phages FionnbharthΔ 45 Δ 47 (in which both repressor and integrase genes are deleted) and Bxb1 were spotted onto lawns of M. smegmatis mc 2 155 and LM11–LM15. c , Schematic representation of the M. smegmatis lsr2 locus showing the position of the IS 1549 transposon insertion in the lsr2 gene (MSMEG_6092) in M. smegmatis LM14 and the unmarked lsr2 deletion mutant GWB142. The bottom shows the domain organization of Lsr2 with amino acid coordinates indicated, together with the location of a G100A substitution in the AT-hook-like DNA binding domain. d , Tenfold serial dilutions of a set of genetically diverse mycobacteriophages were spotted onto strains of M. smegmatis LM14 and M. smegmatis Δ lsr2 together with their derivatives carrying an integrative plasmid vector (pTTP1b), a plasmid with lsr2 but no promoter (pCG52), a plasmid expressing lsr2 from a phage BPs promoter (pCG54) or a plasmid derivative of pCG54 carrying a G100A Lsr2 substitution (pCG67); the control strain M. smegmatis mc 2 155 on which the phages were propagated is also shown. Phage names are shown at the left and their cluster/subcluster/singleton (sin) designations at the right. The variability among independent cultures is shown in Extended Data Fig. 2 . Full size image An M. smegmatis strain with an unmarked deletion of lsr2 (GWB142; Δ lsr2 ) has a phenotype similar to that of LM14 (Fig. 1d , and Extended Data Figs. 1 and 2 ). Introduction of an integration-proficient vector (pCG54) with a wild-type (WT) copy of lsr2 driven by a modified P R promoter of phage BPs 43 complements both strains and restores Fionnbharth infection (Fig. 1d ). Recombinant strains carrying the vector alone (pTTP1b) or a promoterless lsr2 plasmid (pCG52) fail to complement (Fig. 1d ). In addition, we tested a plasmid (pCG67) expressing Lsr2 with a G100A substitution 25 (Fig. 1c ), which also fails to complement either strain, showing that the Lsr2 DNA binding domain is required for Fionnbharth infection (Fig. 1d ). LM14 and Δ lsr2 display varying susceptibilities to a diverse panel of mycobacteriophages (Fig. 1d and Extended Data Fig. 
3 ) and only 3 of the 23 phages tested were indifferent to Lsr2 loss (phages Dutchessdung, Charlie and RonRayGun, in Clusters B1, N and T, respectively); all others showed a reduction in efficiency of plaquing (for example, Fionnbharth, Muddy), reduced plaque size (for example, D29), increased turbidity (for example, Dori) or a combination of these effects (Fig. 1d and Extended Data Fig. 3 ). We note that BPsΔ 33 HTH (a lytic derivative of phage BPs) shows a 100-fold reduction in plaquing and increased plaque turbidity relative to M. smegmatis mc 2 155 infection. In general, the LM14 and Δ lsr2 strains behaved similarly, with the notable exception of AdephagiaΔ 41 Δ 43 , which forms plaques more efficiently on Δ lsr2 than on LM14 (Extended Data Fig. 3 ). For the phages tested, normal infection patterns are restored by complementation (Fig. 1d ). Lsr2 thus plays a broad role in mycobacteriophage infection. Mycobacteriophages bind at sites of cell wall synthesis To determine the role of Lsr2, we developed a set of imaging tools that allowed us to visualize the spatiotemporal dynamics of phage binding to the cell surface and phage replication within infected cells. We used a widefield fluorescence microscope equipped with a CellASIC microfluidic system to collect high-resolution time-lapses of the entire phage life cycle. The N-QTF fluorogenic probe allowed us to continuously track the synthesis of new cell wall material (Fig. 2a – c ) and SYTOX Orange-labelled Fionnbharth phage particles 44 allowed us to resolve single phage particle binding events. With these tools, we observed that phage virions preferentially attach at sites of cell wall synthesis at the poles and septa of mycobacteria 45 (Fig. 2e – g and Supplementary Video 1 ). We quantified this co-localization using DeepCell machine learning 46 to segment individual cells and a custom MATLAB programme using lines drawn perpendicular to the cell surface (splines) to juxtapose the intramembrane N-QTF pixel intensities with the adjacent phage signals (Fig. 2e,g ). The N-QTF signal is greatest at cell membrane regions proximal to phage binding events. Similar observations were made with phages Muddy and Adephagia, suggesting that this behaviour may be common among mycobacteriophages. Negative-stain transmission electron microscopy (TEM) of phage-infected cells agrees with this observation and shows that Fionnbharth preferentially binds to the growing tips of mycobacteria (Fig. 2f ). These data suggest that mycobacteriophage receptors are enriched in polar and septal regions, and that the receptors are intermediates in cell wall biosynthesis that are absent from old or established cell wall material. This may represent a general strategy employed by phages to target actively growing cells that will support lytic replication. Fig. 2: Mycobacteriophage Fionnbharth preferentially binds at sites of cell wall synthesis. a , The fluorogenic probe N-QTF is a trehalose monomycolate mimic containing a fluorophore (bodipy, green) and a quencher (yellow). Processing of N-QTF by Ag85 mycolyltransferase removes the quencher and integrates the fluorophore into the mycobacterial outer membrane. These ‘turn-on’ probes allow for monitoring of mycolic acid membrane biosynthesis in real time via imaging and other fluorescence-based readouts. b , Chemical structure of the N-QTF probe, with an amide bond replacing the ester bond linkage between the lipid-fluorophore and the trehalose. N-QTF has improved stability, brightness and membrane-integrating properties. 
c , A comparison of M. smegmatis labelling by peptidoglycan- and mycolic acid-integrating probes. The FDAA probe RADA and N-QTF were used at final concentrations of 0.2 mM and 500 nM, respectively. Both label the elongating cell poles and division septa, the sites of active cell wall biogenesis in mycobacteria. The ‘turn-on’ nature of N-QTF allows for the visualization of cell wall synthesis via continuous live-cell labelling and, unlike FDAAs, does not require wash-out steps. d , Schematic depiction of asymmetric polar growth in mycobacteria, where the old pole elongates more rapidly than the new pole. This distinctive growth strategy produces the polar and septal labelling pattern seen with cell wall biosynthetic probes and phage adsorption. e , Micrographs of a single M. smegmatis cell in multiple channels showing the localization of N-QTF incorporated probe (green) and adsorbed SYTOX Orange-stained phages (red) and their co-localization. DeepCell machine learning was used to segment individual cells, and lines perpendicular to the cell surface (splines) facilitated juxtaposition of intramembrane N-QTF pixel intensities with adjacent phage signals. f , Negative-stain TEM micrograph of a single M. smegmatis cell bound with phages at a MOI of 100. Inset shows a different cell viewed at higher magnification showing individual phages adsorbed to the cell pole. Black arrows highlight bound phages. g , Violin plots displaying the quantification of N-QTF-phage co-localization as described in e . N-QTF signal intensity is greater at cellular regions proximal to bound phage for both WT and ∆ lsr2 cells. Thick dashed lines denote the median and thin dashed lines represent the upper and lower quartiles. Full size image Role of Lsr2 during mycobacteriophage infection To determine whether Lsr2 is required for Fionnbharth adsorption, we infected M. smegmatis with SYTOX Orange-labelled phages at a multiplicity of infection (MOI) of 100 and analysed cells via flow cytometry (Fig. 3a and Extended Data Fig. 4 ). Fionnbharth adsorption is qualitatively and quantitatively similar for both WT and Δ lsr2 cell types, suggesting that Lsr2 is not required for Fionnbharth binding (Extended Data Fig. 5 ). We similarly tested the binding of SYTOX Orange-labelled phage BPs, for which infection is only mildly reduced by loss of lsr2 (Fig. 1d ). BPs similarly adsorbs to the two strains, although adsorption to Δ lsr2 cells is slightly better than to wild-type cells (Fig. 3a , and Extended Data Figs. 4 and 5 ). The reason for this phenotype is unclear, although we note that some BPs host range mutants show substantially enhanced adsorption to M. smegmatis mc 2 155 relative to wild-type BPs 47 . N-QTF stains both strains similarly (Fig. 3a ). Fig. 3: lsr2 deletion does not inhibit adsorption and protects via a post-injection mechanism. a , Flow cytometry data plotted as histograms showing the population-level fluorescent signals for M. smegmatis cells labelled with SYTOX Orange-stained mycobacteriophages at a MOI of 100 (Fionnbharth on the left, BPs in the middle) and N-QTF at a concentration of 500 nM. b , A plaque assay for the Fionnbharth-mCherry reporter phage. Images depict a 100 mm agarose plate containing a lawn of WT M. smegmatis cells infected with 100 phage particles. The plate and fluorescent plaques are shown in the transmitted light channel, mCherry and merged. c , Time-lapse of WT or ∆lsr2 M. smegmatis cells grown and infected with fluorescent phages in a CellASIC microfluidic device. 
Cells are continuously labelled with N-QTF to mark sites of cell wall synthesis and infected via a 1 h pulse of SYTOX Orange-labelled phage particles at 10 7 p.f.u. ml −1 . The Fionnbharth-mCherry reporter signal turns on when proteins are expressed from the phage DNA. White arrows highlight important events in the infection life cycle, including adsorption, mCherry reporter signal and lysis vs outgrowth. Wild-type cells are susceptible to Fionnbharth and BPs phages, while ∆ lsr2 cells are susceptible to BPs but resistant to Fionnbharth, resulting in cell outgrowth. Full size image To test whether Lsr2 influences a post DNA injection process, we utilized reporter phages that express mCherry within the host cell during phage replication 48 (Fig. 3b ). In combination with SYTOX Orange labelling, this allowed us to sequentially observe adsorption of phage particles and expression from the Fionnbharth-mCherry reporter phage (Fig. 3c ). With this imaging strategy, we verify that Fionnbharth adsorption is similar for WT (Supplementary Video 2 ) and ∆lsr2 cells (Fig. 3c , third column from left, Supplementary Video 3 and Extended Data Fig. 5 ), with all cells in the imaging field bound with multiple phages. After adsorption, the WT cells begin expressing mCherry (Fig. 3c , fourth column) and lyse within 24 h. In contrast, only half of the ∆lsr2 cells express mCherry (Fig. 3c , fourth column) and lyse after adsorption (Fig. 3c , fifth column), while the other half continue growing and eventually fill the field (Fig. 3c , fifth column). Fionnbharth thus binds and delivers DNA to WT and ∆ lsr2 cells, consistent with the adsorption observed above (Fig. 3c ), but lytic replication is limited in the ∆ lsr2 cells. Phage BPs infects ∆ lsr2 (Supplementary Video 4 ) and WT (Supplementary Video 5 ) cells similarly (Fig. 3c , bottom row). Lsr2 is thus required for normal lytic growth of Fionnbharth (and probably of other Lsr2-dependent phages) but not of BPs and other Lsr2-independent phages (Fig. 1 ). ∆ lsr2 -mediated resistance occurs via a post-injection mechanism blocking entry into, or completion of, the lytic life cycle. lsr2 deletion limits epidemic spread of phage To better understand how ∆ lsr2 -mediated resistance operates at the population level, we imaged reporter phage infection of a large field of cells on an agarose pad with a low magnification objective (×20) and time-lapse widefield fluorescence microscopy (Extended Data Fig. 6 ) 49 . This imaging strategy allowed us to observe plaque formation: zones of clearing approximately 1 mm in diameter where all bacteria on the pad were lysed by a phage epidemic. WT or ∆ lsr2 cells were infected with Fionnbharth-mCherry reporter phage, washed extensively and diluted 1,000× with uninfected cells before being spotted at high density on agarose pads. In the WT condition, infected single cells express mCherry and lyse, igniting widespread infections that cascade through bacterial colonies, creating phage plaques that can be seen with the naked eye (Extended Data Fig. 6 and Supplementary Video 6 ). While the lawn of WT cells is decimated by the Fionnbharth reporter phage, the ∆ lsr2 cells are effectively protected at a population level, with only initial minor outbreaks that are eventually outcompeted and absorbed into the lawn of growing M. smegmatis (Extended Data Fig. 6b–d ). There is apparently an initial burst of infection in a subset of M. smegmatis mc 2 155 Δ lsr2 cells (Extended Data Fig. 5 ), although subsequent rounds of infection are strongly reduced. 
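One way to see why blocking consecutive bursts collapses the epidemic, offered here as an editorial illustration rather than an analysis from the study, is to treat spread as a branching process: each lysed cell seeds on average R = burst size × probability of a productive infection new infections, and outbreaks die out whenever R < 1. A minimal sketch with hypothetical parameter values:

```python
# Toy branching-process view of phage epidemic spread (illustration only;
# the parameter values below are hypothetical, not measurements from this study).

def expected_new_infections(burst_size: float, p_productive: float,
                            n_generations: int = 5) -> list:
    """Expected newly infected cells per generation, starting from one
    infected cell. The epidemic is self-sustaining only when the
    reproduction number R = burst_size * p_productive exceeds 1."""
    r = burst_size * p_productive
    return [r ** g for g in range(1, n_generations + 1)]

# WT-like lawn: most injected genomes complete the lytic cycle, so R >> 1.
print(expected_new_infections(burst_size=100, p_productive=0.5))
# Resistant-like lawn: productive bursts are rare, so R < 1 and outbreaks fizzle.
print(expected_new_infections(burst_size=100, p_productive=0.005))
```

On this view, a resistant lawn does not need to block every infection; reducing the per-cell probability of a productive burst below 1/burst size is enough for minor outbreaks to be absorbed, which is consistent with the imaging observations above.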
lsr2 deletion causes mycobacteriophage assembly defects To visualize Fionnbharth genome replication directly, we constructed a lytic recombinant phage carrying an array of seven MalO operator sites that are recognized by the Escherichia coli MalI protein (Fionnbharth-MalO phage) (Fig. 4 ). MalI binds to the MalO array with high affinity and can be observed microscopically when fused via a short glycine linker to mNeonGreen (Fig. 4a ) 50 . Each operator site can accommodate two MalI protomers, for a total of 14 fluorescent MalI interactions per phage chromosome (Fig. 4b ). The MalI-mNeonGreen fusion was cloned downstream of a strong promoter (UV15) on a mycobacterial vector and shows diffuse cytoplasmic fluorescence in uninfected cells, with most cells also containing at least one MalI-mNeonGreen focus near the cell poles (Fig. 4c , panel 1). This polar fluorescence probably represents aggregated protein in inclusion bodies as a consequence of high-level expression 51 . Expression from weaker promoters does not completely eliminate these aggregates, and a high level of expression is needed to visualize single phage chromosomes (Fig. 4c ). The polar aggregates do not appear to interfere with or alter the dynamics of phage replication or cell growth. Fig. 4: Mycobacteriophage replication and assembly occur in spatially defined domains and lsr2 deletion causes assembly defects. a , Schematic depicting the MalI-mNeon protein construct expressed in M. smegmatis cells to visualize the dynamics of phage replication and assembly. The MalI transcription factor was cloned downstream of a high-expressing mycobacterial promoter (UV15) and in frame with two copies of the fluorescent mNeonGreen protein joined by short glycine-based linkers. b , The Fionnbharth-MalO reporter phage contains 7 MalO (operator) binding sites for the transcription factor MalI. Each operator site can accommodate 2 MalI transcription factors, for a total of 14 fluorescent MalI interactions per phage chromosome. c , To visualize single phage infection events at the single-cell level, M. smegmatis cells grown in a CellASIC microfluidic device were exposed to a short pulse of diluted (10 5 p.f.u. ml −1 ) SYTOX Orange-stained Fionnbharth-MalO phage. White arrows highlight important events in the infection life cycle. A single phage binding event (red focus, at 00:42) is followed by the formation of a proximal MalI-mNeonGreen focus, consistent with phage DNA ejection and recruitment of MalI-mNeonGreen protein to MalO sites on the infecting phage chromosome. Over the course of infection, the single focus multiplies into many foci that spread out across the interior of the cell and then organize regionally into multiple phage replication domains, followed by cell lysis and release of phage particles. d , The dynamics of phage replication and assembly were visualized at a high MOI of 10 via a short pulse of concentrated (10 7 p.f.u. ml −1 ) SYTOX Orange-stained phage particles. Deletion of lsr2 results in diminished phage replication foci with reduced brightness and a loss of cell lysis. e , A Fionnbharth phage infection timecourse of WT (top panels) and ∆ lsr2 (bottom panels) cross-sectioned cells visualized via negative-stain TEM. Early log phase cultures were infected at a MOI of 3 and samples were collected at the indicated timepoints, rapidly fixed, stained, embedded and sectioned for TEM imaging. Arrows indicate significant events in the phage infection life cycle and observed morphological differences between WT and ∆lsr2 infection. 
Full size image M. smegmatis cells harbouring the MalI-mNeonGreen fusion were exposed to a short pulse of diluted (10 5 plaque-forming units (p.f.u.) ml −1 ) SYTOX Orange-stained Fionnbharth-MalO phage within a CellASIC microfluidic device and monitored via fluorescence microscopy. Single phage binding events were observed (red foci, Fig. 4c panel 2 from left and Supplementary Video 7 ), followed by formation of a proximal MalI-mNeonGreen focus, consistent with phage ejection and binding of MalI-mNeonGreen protein to MalO DNA sites on an infecting phage genome. As the infection continues, these foci spread across the interior of the cell and organize regionally into multiple phage assembly domains, followed by cell lysis and release of phage particles. We refer to these as zones of phage DNA replication (ZOPR). The dynamics of phage replication and assembly were also visualized at a higher MOI of 10 in WT (Supplementary Video 8 ) and ∆ lsr2 cells (Supplementary Video 9 ) via a short pulse of SYTOX Orange-stained phage particles at 10 7 p.f.u. ml −1 (Fig. 4d ). Deletion of lsr2 results in phage replication foci of diminished number, brightness and organization, followed by a reduction in the proportion of cells undergoing cell lysis (Fig. 4d ). Lsr2 is thus required for the formation of active phage replication domains. To validate these findings and obtain higher spatial resolution of the lytic phage infection life cycle, WT and ∆lsr2 M. smegmatis cells were examined by negative-stain TEM. Log phase cultures were infected with Fionnbharth at a MOI of 3 and samples were collected at the indicated timepoints, rapidly fixed, stained, embedded and sectioned for TEM imaging. The timecourse of Fionnbharth infection reveals several morphological differences between WT (Fig. 4e top panels) and ∆lsr2 (Fig. 4e bottom panels) cells. Single phage particles appear as dark electron-dense spheroids 52 , and late in infection of WT M. smegmatis , hundreds of phage capsids are ordered into a highly compacted pseudo-crystalline lattice (Fig. 4e ). The electron micrographs mirror what was observed with fluorescence microscopy; Fionnbharth begins forming tightly organized phage assembly domains 2–3 h after infection, and lsr2 deletion results in reduced mature phage capsid formation. ∆lsr2 infected cells contain fewer electron-dense capsids and more empty and misassembled capsids (Fig. 4e ). Deletion of lsr2 has numerous morphological consequences that have been observed at the colony and biofilm level as well as the cellular level 53 . Cells lacking Lsr2 are slightly shorter and wider and have altered DNA replication dynamics 53 . Our observations of the inner workings of ∆lsr2 cells via TEM are consistent with these findings. We saw more evidence of bulky DNA in ∆lsr2 cells than in WT cells, suggesting an overabundance of uncompacted host chromosomal DNA. This deficit in DNA organization could lead to gross morphological defects as well as less efficient or hampered assembly of phage capsids. Lsr2 re-localizes to zones of phage DNA replication To better understand the spatiotemporal dynamics of Lsr2 during phage infection, we imaged an M. smegmatis strain in which endogenous Lsr2 is tagged with the fluorescent protein Dendra2. This strain was recently used to visualize the chromosomal localization of Lsr2 during the mycobacterial life cycle 53 . Lsr2 forms nucleoprotein complexes on the host chromosome; these complexes are visible as discrete, dynamic foci near the DNA replication machinery. 
Upon infection with Fionnbharth, Lsr2 foci rapidly disintegrate and re-localize into the phage ZOPR (Fig. 5a and Supplementary Video 10 ). This dynamic redistribution of Lsr2 protein does not occur with infection by Lsr2-insensitive phages like BPs (Fig. 5b and Supplementary Video 11 ). These data suggest that Lsr2 either directly or indirectly associates with the Fionnbharth chromosome and that these interactions may be important for phage DNA replication and assembly. Overall, the Fionnbharth genome has a GC% content of 67.4%, similar to that of its M. smegmatis host. However, there are two regions notably lower in GC% content (Fig. 5c ): one in the intergenic region between the divergently transcribed repressor and cro -like genes (genes 47 and 48 ), and a second immediately downstream of a putative DNA primase (gene 74 ) (Fig. 5c ). These are plausible regions of Lsr2 binding. We note, however, that most mycobacteriophage genomes, including BPs, show variations in GC% content and regions with lower GC% content. Fig. 5: Re-localization of Lsr2 to zones of phage DNA replication. a , b , Time-lapse of Lsr2-Dendra2 M. smegmatis cells grown and infected with fluorescent Fionnbharth ( a ) or BPs ( b ) phages in a CellASIC microfluidic device. Cells were infected via a 1 h pulse of SYTOX Orange-labelled phage particles at 10 7 p.f.u. ml −1 . Lsr2-Dendra2 forms nucleoprotein complexes on the host chromosome; these complexes are seen as discrete, dynamic foci near the DNA replication machinery. Upon phage infection with Fionnbharth, Lsr2 foci rapidly disintegrate and re-localize into the Fionnbharth ZOPR ( a ). This dynamic redistribution of Lsr2 protein does not occur with infection by Lsr2-insensitive phages like BPs ( b ). White arrows highlight important events in the infection life cycle. c , A genome map of Fionnbharth with GC% content displayed on the y axis. Bold arrows indicate two regions with notably lower GC% content: one in the intergenic region between the divergently transcribed repressor and cro -like genes, and a second immediately downstream of a putative DNA primase. Full size image Phage-encoded lsr2 does not impact lytic infection Finally, we note that lsr2 homologues are present in many actinobacteriophage genomes including those of Mycobacterium , Gordonia , Streptomyces and Microbacterium 54 , 55 (Fig. 6a ), raising the possibility that phages encode these to counter host defence mechanisms involving loss of Lsr2 activity. These phage-encoded lsr2 -like genes span substantial sequence diversity and many are distantly related to host lsr2 genes (Fig. 6b ). Moreover, they are present in the genomes of both lytic and temperate phages (Fig. 6a ), and for at least some temperate phages, including Ladybird (Cluster A2), they are known to be lysogenically expressed 56 . In those instances, including all of the lsr2 -containing Mycobacterium phages, they are unlikely to confer a counter-defence mechanism. In some lytic phages (for example, Clusters AV and BE infecting Microbacterium and Streptomyces , respectively), Lsr2 is encoded within long terminal repeats, and in many lytic phages (for example, Clusters BD and BK infecting Streptomyces , and EE and EH infecting Microbacterium ), the protein is shorter than 80 residues, primarily spanning the N-terminal oligomerization domain and lacking the DNA binding domain. These ‘truncated’ Lsr2 proteins are more likely to act via dominant-negative interactions than by complementation of host lsr2 loss. 
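The low-GC regions flagged in Fig. 5c above are the kind of feature typically located with a sliding-window GC% scan over the genome. A minimal sketch of that calculation follows; the window size, step and threshold are our illustrative choices, and `fionnbharth_seq` is a hypothetical variable, not data shipped with the paper:

```python
# Sliding-window GC% scan; windows well below the genome-wide mean
# (67.4% for Fionnbharth) flag candidate AT-rich, Lsr2-binding regions.
# Window, step and threshold values here are illustrative assumptions.

def gc_percent(seq: str) -> float:
    seq = seq.upper()
    return 100.0 * (seq.count("G") + seq.count("C")) / len(seq)

def low_gc_windows(genome: str, window: int = 500, step: int = 100,
                   threshold: float = 55.0):
    """Yield (start, gc%) for every window whose GC content falls below threshold."""
    for start in range(0, len(genome) - window + 1, step):
        gc = gc_percent(genome[start:start + window])
        if gc < threshold:
            yield start, gc

# Usage with a genome sequence loaded elsewhere (hypothetical variable):
# for start, gc in low_gc_windows(fionnbharth_seq):
#     print(f"window at {start}: {gc:.1f}% GC")
```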
We tested several lsr2 -carrying mycobacteriophages (all have full-length lsr2 genes) for their response to deletion of the host lsr2 (Fig. 6c ). For several of these phages, the efficiency of plaquing is greatly reduced, indicating that the phage-encoded lsr2 genes do not compensate for loss of host lsr2 . An alternative explanation is that phage lsr2 genes act in inter-phage competition, interfering with superinfection of lysogens or excluding competing phages during lytic growth, perhaps through dominant-negative interactions. We note that neither Fionnbharth nor any other Cluster K phage carries its own lsr2 . Fig. 6: Actinobacteriophage Lsr2 diversity. a , A network phylogeny of actinobacteriophages, including phages of Gordonia , Microbacterium , Mycobacterium and Streptomyces , based on shared gene content 73 and constructed using Splitstree 67 , 73 . The tree was constructed using up to three members of each subcluster or non-divided cluster, and clusters are indicated as circles with their designations (A, B and so on). Green and blue circles represent phage clusters that carry or lack lsr2 homologues, respectively. Cluster text designations are coloured according to host species: Gordonia (pink), Streptomyces (purple), Microbacterium (red) and Mycobacterium (black/dark grey). Singletons are represented by small boxes, similarly coloured by cluster designation. Clusters of lytic phages have an orange outer ring; all others are temperate. Hosts and life cycles of singleton phages are not shown for simplicity but are available at . Kumao ( Mycobacterium ; temperate) is the only singleton carrying lsr2 . Scale bar indicates pairwise Hamming distance 74 . b , A maximum likelihood phylogenetic tree of lsr2 in actinobacteriophages. All actinobacteriophages with an lsr2 homologue are shown, as well as the lsr2 genes of M. smegmatis , M. tuberculosis , M. abscessus, Mycobacterium kansasii and Mycobacterium leprae . Cluster designations are shown. Scale bar indicates nucleotide substitutions/site. c , Role of host Lsr2 in infection by lsr2 -containing phages. Phage names are shown in red or black type, indicating the presence or absence, respectively, of a canonical (113–153 residue) lsr2 gene in the phage genome. Phage lysates were tenfold serially diluted and spotted onto lawns of M. smegmatis mc 2 155, M. smegmatis Δ lsr2 and the complementation strain M. smegmatis Δ lsr2 pCG54. Phage names are shown at the left, and their cluster/subcluster/singleton (sin) designations are shown at the right. Full size image Discussion Here we show that Lsr2 plays an important role in productive infection by many mycobacteriophages. Loss of Lsr2 influences a broad variety of mycobacteriophages, manifested as resistance—that is, a sharp reduction in the efficiency of plating—or revealed as more subtle impacts such as reduced plaque size or increased turbidity. Lsr2 is known to play a role in organizing and maintaining host DNA replication systems 53 , and similarly plays a role in organizing phage zones of replication. A particularly notable observation is the finding that, for the set of genomically diverse phages we examined, all preferentially bind at sites of new cell wall synthesis and not at regions of ‘old’ cell wall. This has several important implications. First, it suggests that phages may have evolved to specifically recognize cells that are actively growing, and thus metabolically active, to support phage replication. 
Second, it provides spatial information to coordinate phage infection with other structurally organized cellular components that might be needed for phage replication. Third, it suggests a specific mechanism that phages can exploit to outcompete other phages in a competitive environment. For example, a newly infecting phage undergoing lytic growth could prevent superinfection (and potential theft of valuable resources needed for reproduction) simply by inhibiting cell wall biosynthesis, for which there are many plausible targets. We note that mycobacteriophage Fruitloop interferes with superinfection by phage Rosebush by expressing a protein (gp52) that binds to and inactivates DivIVA, which is required for cell wall biosynthesis 57 . Many mycobacteriophage proteins expressed during the lytic cycle are toxic for mycobacterial growth and often cause division or morphological defects, consistent with disruptions in cell wall synthesis 58 . Exquisite TEM data collected in 1961 recorded “groups of clustered mycobacteriophage particles” within sectioned cells of the Mycobacterium Jucho strain 52 . Our observations are remarkably consistent with these studies and contribute a temporal dimension that brings these dynamic structures to life. Mycobacteriophage ZOPR are evidently not compartmentalized in the manner of the pseudo-nuclear structures described for large Pseudomonas phages 59 and may be defined by the available space for DNA replication in an otherwise crowded cell. Nonetheless, the ZOPR are distinct from the sites of host DNA replication, as reflected in the re-organization of Lsr2 during phage infection. Cryo-electron tomography and super-resolution microscopy will be useful approaches for further defining the mycobacteriophage ZOPR. We show that host Lsr2 protein is required for establishing these zones and that, without it, phage replication is impaired, leading to defective epidemic spread (Extended Data Fig. 6 and Supplementary Video 6 ). Curiously, some phages (for example, Bxb1, RonRayGun) have only minor reductions in efficiency of plaquing in the absence of Lsr2 and presumably employ different strategies for replication and lytic growth. The newly developed tools described here, both for visualizing phage infection and for constructing informative phage derivatives, are important for understanding phage–host dynamics in mycobacteria and should be broadly applicable. The combination of SYTOX Orange-stained phage particles and the N-QTF probe for newly synthesized mycolic acids shows the remarkable preference of phages to adsorb to regions of new cell wall synthesis. This may be a general phage strategy, and these tools will be valuable for further exploring this question. The mycobacteriophage engineering methods 48 , 60 will also be generally applicable, especially combining mCherry reporter phages, which report phage gene expression, and MalO recombinant phages, which reveal phage DNA localization. Elucidating mycobacteriophage resistance mechanisms is relevant to their potential therapeutic use. In M. tuberculosis , for which FionnbharthΔ 45 Δ 47 is a therapeutic candidate 18 , both domains of Lsr2 are essential for bacterial viability 28 , minimizing the prospects for lsr2 loss as a potential resistance mechanism. Interestingly, in M. abscessus , as in M. smegmatis , lsr2 is non-essential 34 , 61 , although its loss considerably diminishes M. abscessus virulence. M. 
abscessus lsr2 has not been identified as a phage resistance target in vitro 22 or in vivo 62 , including with the therapeutically useful phage Muddy, and this may reflect a beneficial trade-off between phage sensitivity and pathogenicity. Methods Bacterial strains and media Liquid cultures of M. smegmatis mc 2 155 were grown in Middlebrook 7H9 media and were used to propagate the phages used in this study. An unmarked deletion of lsr2 ( M. smegmatis GWB142) was constructed using the recombineering plasmid pJV53 and electroporation of an allelic exchange substrate with flanking homology to lsr2 63 . Isolation of phage-resistant mutants Six independent 1 ml cultures of 1 × 10 8 colony-forming units (c.f.u.) of M. smegmatis mc 2 155 were inoculated with 10 μl of lysates containing 10 8 –10 9 p.f.u. of phage and incubated with shaking at 250 r.p.m. at 37 °C for 32 h. Subsequently, 75–150 μl aliquots were spread on Middlebrook 7H10 solid media and incubated at 37 °C for 3 d or until isolated phage-resistant colonies were visible. Phage-resistant candidate strains were purified by streaking twice on Middlebrook 7H10 and used to inoculate 3 ml cultures of Middlebrook 7H9 with ADC (50 g l −1 albumin fraction V Cohn Analog (Lampire Biologicals), 20 g l −1 dextrose (Fisher), 8.5 g l −1 NaCl) and 0.05% Tween 80. After growth to saturation, cultures were used to prepare bacterial lawns, and serial dilutions of phages were spotted onto the lawns to determine phage susceptibilities. DNA extraction and sequencing of bacterial strains DNA was isolated from the parent strain and phage-resistant mutants for DNA sequencing. Briefly, 1 ml of cell culture was lysed, pelleted and then resuspended in Nuclei lysis solution (Promega). The cell resuspension was added to a tube containing lysing matrix B (MP Biologicals) and milled three times with intermittent incubation on ice. Phenol-chloroform-isoamyl alcohol was then added and the aqueous phase was removed. DNA was precipitated using isopropanol and 3 M sodium acetate. The parent strain was then sequenced by both Illumina MiSeq and Oxford Nanopore MinIon technologies, and a hybrid assembly using long and short reads was subsequently performed with Unicycler 64 . The genome was checked for completeness and accuracy, then corrected using Consed 65 , 66 . Mutant strains were sequenced by Illumina MiSeq only and the resulting reads were aligned to the completed parent strain, again using Consed. An in-house programme called AceUtil was used to identify differences between the mutant reads and the parent genome, and all mutations were confirmed by close inspection of the reads. Construction of complementation plasmids pCG52, pCG54 and pCG67 Vectors pCG52 and pCG54 were constructed by PCR linearization of vectors pLO73 and pLO76, respectively 43 , removing the mCherry gene in the process. Insertion of lsr2 into the vectors was done using a gBlock (Integrated DNA Technologies) containing lsr2 flanked by 20 bp of upstream and downstream homology to pLO73 and pLO76, using NEBuilder (New England Biolabs) following the manufacturer’s recommendations (Supplementary Table 1 ). The plasmid pCG67 was similarly made by inserting an lsr2 gBlock with 20 bp of upstream and downstream homology to pLO76, substituting guanine at position 303 for adenosine, resulting in a G100A amino acid substitution at the Lsr2 AT-hook core site in the DNA binding domain (Supplementary Table 1 ). Phylogenetic analyses Splitstree 67 was used to represent a network phylogeny of actinobacteriophages. 
Up to three phage representatives per cluster, subcluster or singleton were used where available for phages infecting Gordonia, Streptomyces, Microbacterium and Mycobacterium . A maximum-likelihood phylogenetic tree was constructed using Qiagen CLC Genomics Workbench v22.0 with the GTR model, four substitution rate categories and 100 replicates. Alignments for maximum-likelihood phylogeny were made using Aliview 68 , with the option to translate nucleotide sequences to amino acids. Amino acid alignments of mycobacterial Lsr2 were done using Qiagen CLC Genomics Workbench v22.0. Adsorption and one-step growth curve experiments To assess adsorption, 1 ml of either WT mc 2 155 or Δ lsr2 cells freshly grown to saturation was used to inoculate 10 ml 7H9 ADC and grown for 4 h at 37 °C, with shaking at 250 r.p.m., until reaching an optical density (OD) of ~0.2, or approximately 7 × 10 7 c.f.u. ml −1 . After taking OD measurements, cells were pelleted and resuspended in 1 ml 7H9 with 1 mM CaCl 2 , a 10-fold increase in cell concentration. Phage were added to cells at a MOI of 0.0001, approximately 10 5 p.f.u. ml −1 . Cells were incubated at 37 °C with shaking at 250 r.p.m. Every 10 min for 1 h, aliquots were taken, cells were pelleted, and supernatants were diluted 10-fold and plated on mc 2 155 to assess plaque-forming units. Experiments were done in biological triplicate. For one-step growth curves, 50 μl of either WT mc 2 155 or Δ lsr2 freshly grown to saturation was used to inoculate 10 ml 7H9 ADC and grown for ~16 h at 37 °C. Cells at log phase were diluted to an OD of ~0.2. After taking OD measurements, cells were pelleted and resuspended in 1 ml 7H9 with 1 mM CaCl 2 , a 10-fold increase in cell concentration. Phage were added to cells at a MOI of 0.001, approximately 10 6 p.f.u. ml −1 . Cells were incubated at 37 °C with shaking at 250 r.p.m. Aliquots were removed at specific times, serially diluted 10-fold between 10 −1 and 10 −4 and plated on mc 2 155 cells to assess plaque-forming units at each timepoint. Experiments were done in biological triplicate. Construction of MalO reporter Fionnbharth A DNA cassette containing seven MalO operator sites was amplified from plasmid pCB182 using primers MalO cassette amplify F (5′-TCTGCTCGAGGAATTCTCCAGATTCTAGTG-3′) and MalO cassette amplify R (5′-GTAGCCATGCAGATGACCTACTCCCTGATT-3′) with Q5 polymerase 2X Master Mix (New England Biolabs). Two gBlocks (Supplementary Table 1 ) containing homology to 280 bp upstream of Fionnbharth gene 45 and 429 bp downstream of Fionnbharth gene 47 were used to build a larger substrate in which they flank a MalO cassette, constructed by PCR using primers with 18 bp of homology to the two gBlocks. A linear substrate was assembled using NEBuilder to join the gBlocks and the MalO cassette, and the entire substrate was amplified using primers mal-Fionn-F (5′-AACATAGTCCAGATTTATGGACAAAGCAACTCG-3′) and mal-Fionn-R (5′-CGGCCGGTACTCCTACCAAGCACTACACAG-3′) with Q5 2X Master Mix (New England Biolabs). The recombinant phage was then made using CRISPY-BRIP 48 . Briefly, the amplified substrate was purified by gel extraction and electroporated, together with Fionnbharth genomic DNA, into recombineering-competent M. smegmatis mc 2 155 pJV138 cells. The mixture was then plated with a culture of M. smegmatis mc 2 155 containing plasmid pCCK510, which carries a single guide RNA targeting Fionnbharth gene 45 48 , on 7H10 solid medium containing 100 nM anhydrotetracycline to select for mutants containing the allelic replacement. 
Candidate recombinant phages were screened using two rounds of PCR. Microscopy For all imaging experiments, M. smegmatis mc 2 155 was sub-cultured in liquid Middlebrook 7H9 media supplemented with 5 g l −1 albumin, 2 g l −1 dextrose, 0.85 g l −1 NaCl, 0.003 g l −1 catalase, 0.2% (v/v) glycerol and 0.05% (v/v) Tween 80. Before imaging, M. smegmatis mc 2 155 was sub-cultured three times in the above complete 7H9 media with 1 mM CaCl 2 but lacking Tween 80 to give the bacteria time to build their capsule, which is required for phage attachment. To prevent clumping, cultures grown without Tween 80 were subjected to high-speed shaking at 200–250 r.p.m. in inkwell culture vessels. Phase-contrast and epifluorescence images were collected with a widefield Nikon Eclipse Ti-E inverted microscope equipped with an Okolab Cage incubator warmed to 37 °C, using Cargille Type 37 immersion oil. A Nikon CFI Plan Apo DM Lambda ×100 1.45 NA oil objective and a Nikon CFI Plan Apo DM Lambda ×20 0.75 NA objective were used with the Perfect Focus System for maintenance of focus over time. N-QTF, Dendra2, mCherry2B and SYTOX Orange nucleic acid stain (ThermoFisher) were excited with a Lumencor Spectra X light engine with Chroma FITC (470/24) (for N-QTF and Dendra2) and mCherry (575/25) (for mCherry2B and SYTOX Orange) filter sets, respectively, and collected with a Spectra Sedat Quad filter cube ET435/26M-25 ET515/30M-25 ET595/40M-25 ET705/72M-25 (for N-QTF and Dendra2) and a Spectra CFP/YFP/mCherry filter cube ET475/20M-25 ET540/21M-25 ET632/60M-25 (for mCherry2B and SYTOX Orange). Images were acquired with an Andor Zyla 4.2 sCMOS camera controlled by NIS Elements software. For time-lapse experiments, images were collected every 10–12 min (unless specified otherwise) via ND acquisition using an exposure time of 100 ms or 200 ms and 50% or 100% illumination power for fluorescence. Multiple stage positions (fields) were collected using the default engine TiZ. Image analysis Fields best representing the overall experimental trend with the least technical artefacts were chosen for publication. Gamma, brightness and contrast were adjusted (identically for compared image sets) using FIJI 69 . The FIJI plug-ins Stack Contrast 70 and StackReg 71 were used for brightness matching and registering image stacks. Phase-contrast images were segmented using DeepCell 46 and analysed using a custom MATLAB programme. Briefly, peaks were located in image profiles in the red (SYTOX) and green (N-QTF) channels along lines perpendicular to the segmented cells, and the image background was measured where no cells were present. Peaks in the red channel that were more than one standard deviation above the measured background fluorescence intensity were called ‘phage proximal’. The N-QTF signal was background subtracted and the maximum intensity along lines that were phage proximal was measured. Generation of fluorescent phages with nucleic acid stain Concentrated phage stocks (200 µl, 10 10 –10 11 p.f.u. ml −1 ) were stained with SYTOX Orange nucleic acid stain 44 . Stained phages were washed four times in 15 ml of phage buffer (10 mM Tris, pH 7.5, 10 mM MgSO 4 , 68.45 mM NaCl, 1 mM CaCl 2 ) using Amicon Ultra-15 centrifugal filter units. After staining, the titre and viability of phages were immediately assessed by plaque assay; once stained, phages were used for no longer than 1 week, as viability decreased over time. For use in microfluidic experiments, SYTOX Orange-stained phages were normalized to a titre of approximately 10 7 p.f.u. ml −1 . 
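The peak-calling logic in the Image analysis section above (red-channel peaks more than one standard deviation above background are called 'phage proximal', and background-subtracted N-QTF maxima are then read out along those splines) could look roughly like the following NumPy sketch. The original is an unpublished custom MATLAB programme, so this Python rendering is our approximation: the thresholding rule follows the text, while the array names and the synthetic example are invented for illustration.

```python
import numpy as np

# Each row of red_profiles / green_profiles is the pixel-intensity profile
# along one line (spline) drawn perpendicular to a segmented cell outline.

def phage_proximal_nqtf(red_profiles, green_profiles,
                        red_bg, red_bg_sd, green_bg):
    """Return the max background-subtracted N-QTF (green) intensity for each
    spline whose red (SYTOX/phage) peak exceeds background + 1 SD."""
    red_peaks = red_profiles.max(axis=1)
    proximal = red_peaks > red_bg + red_bg_sd          # the 'phage proximal' call
    green_sub = np.clip(green_profiles - green_bg, 0, None)
    return green_sub.max(axis=1)[proximal]

# Synthetic demonstration: 200 splines of 50 pixels each; the first 50 splines
# carry an artificial phage peak in the red channel (noise omitted for clarity).
rng = np.random.default_rng(0)
red = np.full((200, 50), 100.0)
red[:50, 25] = 180.0
green = rng.normal(120, 10, size=(200, 50))
vals = phage_proximal_nqtf(red, green, red_bg=100.0, red_bg_sd=5.0, green_bg=120.0)
print(vals.shape)   # (50,): one value per phage-proximal spline
```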
Fluorescence microscopy with agarose pads Middlebrook 7H9 pads with 2% agarose were prepared by mixing one part 10× 7H9 concentrate (which contained 7H9 powder and glycerol at 10× concentrations), one part albumin, dextrose and catalase (ADC) and eight parts of low-melt 2.4% agarose, and mounting on MatTek dishes (No. 1.5 coverslip, 50 mm, 30 mm glass diameter, uncoated). M. smegmatis mc 2 155 strains were grown to OD 600 of ~1.0 (corresponding to 3.5 × 10 8 c.f.u. ml −1 ) in 7H9 + 1 mM CaCl 2 without Tween 80 at 37 °C with shaking (200 r.p.m.) and, where required, diluted in media to achieve the desired cell density on the agarose pad. To create the infected seeder cells, 100 µl of normalized bacterial cultures at OD 600 = 0.5 were infected with the Fionnbharth-mCherry reporter phage at room temperature for 10 min with phage stocks at 10 9 p.f.u. ml −1 , to a MOI of 100. Subsequently, infected cells were washed 3× with ice-cold phage buffer (10 mM Tris, pH 7.5, 10 mM MgSO 4 , 68.45 mM NaCl, 1 mM CaCl 2 ) to reduce the concentration of un-adsorbed free phage, followed by one wash with ice-cold 7H9 media + 1 mM CaCl 2 and without Tween 80. Seeder cells were diluted 1:1,000 with uninfected cells before being spotted on agarose pads. This ratio of uninfected to infected cells was optimized such that in randomly chosen microscopy fields (without previous knowledge of which cells in the field were infected), there was likely to be at least one infected cell (Fig. 4 ). Chilled cells (1 μl) were spotted onto opposite sides of an agarose pad (two strains were imaged on the same pad) and inverted onto the MatTek imaging dish. To prevent formation of small condensation droplets on the lid of the dish, the underside of the lid was soaked with a solution of 0.05% Triton-X-100 in 20% ethanol for 1 min and then allowed to dry. Phase-contrast and fluorescence images were collected every 12 min for 36 h using the ×20 objective. Fluorescence microscopy using microfluidic infection The CellASIC ONIX2 system from EMD Millipore with B04A plates was used for microfluidic imaging experiments (Figs. 2 , 3 and 5 ). Phages used in microfluidic infection experiments were stained with SYTOX Orange nucleic acid stain as described above 44 . M. smegmatis mc 2 155 strains were grown to OD 600 of ∼ 1 (corresponding to 3.5 × 10 8 c.f.u. ml −1 ) in 7H9 media with 1 mM CaCl 2 and without Tween 80 at 37 °C with shaking (250 r.p.m.) before being diluted tenfold and loaded into CellASIC B04A plates using the pressure-driven method according to the manufacturer protocol for bacterial cells. The slanted chamber of the plate immobilizes the cells but allows media to flow continuously. First, cells were equilibrated with a constant flow of 7H9 media with 1 mM CaCl 2 and without Tween 80 at a flow pressure of 2 psi for approximately 1 h. Second, cells were stained with a constant flow of the N-QTF probe at a concentration of 500 nM in 7H9 media with 1 mM CaCl 2 and without Tween 80 for 1 h. Next, phages suspended at 10 7 p.f.u. ml −1 in 7H9 media with 1 mM CaCl 2 , without Tween 80 and with 500 nM N-QTF probe were pulsed over the cells for 1 h. For phage infection experiments requiring very low MOI ( ≪ 1), SYTOX Orange-stained phage stocks were diluted to 10 5 p.f.u. ml −1 . For phage infection experiments requiring high MOI, SYTOX Orange-stained phage stocks were employed at 10 7 p.f.u. ml −1 . 
Finally, cells were grown under constant flow of 7H9 media with 1 mM CaCl 2 , without Tween 80 and with 500 nM N-QTF probe for the duration of the experiment. Microfluidic experiments typically lasted 24 h, after which time uninfected or phage-resistant cells outgrew the chamber. Phase-contrast and fluorescence images were collected every 10–12 min. M. smegmatis mc 2 155 cells were stained with fluorescent d -amino acids (FDAA) (Fig. 2c ) 72 . Negative-stain TEM M. smegmatis mc 2 155 strains were sub-cultured three times in 7H9 with 1 mM CaCl 2 and without Tween 80 at 37 °C with shaking (200 r.p.m.). Next, 100 ml cultures were grown to OD 600 of ~1 before being filtered through a 10 µm syringe filter (Acrodisc syringe filter, 10 µm, with Versapor membrane). Filtered cells were centrifuged and pelleted at 5,000 g for 10 min before phage infection, and cell densities were normalized via resuspension in 1 ml 7H9 media with 1 mM CaCl 2 and without Tween 80. Cells were infected with phage stocks at 10 11 p.f.u. to a final MOI of 3, 10 or 100, depending on the experiment. Infected cells were then incubated at 37 °C with shaking (200 r.p.m.) for 5 min before the Eppendorf tubes holding the cells were plunged into ice to pause phage development in preparation for imaging at the Harvard Medical School EM Facility. Five microlitres of sample were adsorbed for 1 min to carbon-coated grids (EMS, CF400-CU) that had been made hydrophilic by a 20 s exposure to a glow discharge (25 mA). Excess liquid was removed with filter paper (Whatman No. 1), the grid was then floated briefly on a drop of water (to wash away phosphate or salt), blotted again on filter paper and then stained with 0.75% uranyl formate (EMS, 22451) or 1% uranyl acetate (EMS, 22400) for 20–30 s. After removing the excess stain with filter paper, the grids were examined in a JEOL 1200EX transmission electron microscope or a Tecnai G² Spirit BioTWIN, and images were recorded with an AMT 2k CCD camera. TEM imaging experiments were performed twice and the micrographs best representing the overall experimental trend with the least technical artefacts were chosen for publication. To visualize the inner workings of phage-infected cells, thin sections of embedded cells were examined via TEM. M. smegmatis mc 2 155 and ∆lsr2 strains were sub-cultured three times in 7H9 with 1 mM CaCl 2 and without Tween 80 at 37 °C with shaking (200 r.p.m.), and were then normalized to OD 600 = 0.7 (2 × 10 8 c.f.u. ml −1 ) before being infected at MOI = 3. Samples were collected at the indicated timepoints and fixed with 2.5% glutaraldehyde, 1.25% paraformaldehyde and 0.03% picric acid in 0.1 M sodium cacodylate buffer (pH 7.4), followed by centrifugation at 5,000 g for 10 min. The pellet of cells was fixed for at least 2 h at room temperature in the above fixative, washed in 0.1 M cacodylate buffer and post-fixed with 1% osmium tetroxide (OsO 4 )/1.5% potassium ferrocyanide for 1 h. This was followed by washing twice in water, once in maleate buffer (MB), incubation in 1% uranyl acetate in MB for 1 h, two washes in water and subsequent dehydration in a graded alcohol series (10 min each in 50%, 70% and 90%, then 2 × 10 min in 100%). The samples were then put in propylene oxide for 1 h and infiltrated overnight in a 1:1 mixture of propylene oxide and TAAB Epon embedding resin (TAAB). The following day, the samples were embedded in TAAB Epon and polymerized at 60 °C for 48 h. 
Ultrathin sections (about 60 nm) were cut on a Reichert Ultracut-S microtome, picked up on copper grids, stained with lead citrate and examined in a JEOL 1200EX transmission electron microscope or a Tecnai G² Spirit BioTWIN; images were recorded with an AMT 2k CCD camera. Flow cytometry M. smegmatis mc 2 155 strains were sub-cultured three times in 7H9 with 1 mM CaCl 2 and without Tween 80 at 37 °C with shaking (200 r.p.m.). Next, 5 ml cultures were grown to OD 600 of ∼ 1 (~3.5 × 10 8 c.f.u. ml −1 ) before being filtered through a 10 µm syringe filter. Filtered cells were centrifuged and pelleted at 5,000 g for 10 min before phage infection, and cell densities were normalized to OD 600 = 0.1 via resuspension in 7H9 media with 1 mM CaCl 2 and without Tween 80. Cells were infected with SYTOX Orange-stained mycobacteriophages at 10 9 p.f.u. ml −1 , to MOI = 100, and incubated at room temperature in the dark for 5 min. The Eppendorf tubes holding the cells were then plunged into ice to pause phage development in preparation for flow cytometry. Cells were stained with the N-QTF probe at a concentration of 500 nM, kept on ice and protected from light before flow cytometry analysis. Cells were analysed by flow cytometry on a MACSQuant (VYB excitation: 488 nm, 561 nm; emission filters: 525/50, 615/20). Bacterial cells were gated on SSC-A/FSC-A and an unstained sample was used as a negative control to draw positive gates for the N-QTF and SYTOX Orange fluorochromes (Extended Data Fig. 5 ). More than 50,000 events were recorded. Data were analysed using FlowJo. Reporting summary Further information on research design is available in the Nature Portfolio Reporting Summary linked to this article. Data availability NCBI accession numbers for phages used in this study can be found in Supplementary Table 2 ; sequences and additional information can be found at phagesdb.org. Genome sequences for phage-resistant strains discovered and used for this study can be found at NCBI BioProject PRJNA862910 . Unprocessed imaging data described in this work cannot be deposited in a public repository due to file size limitations. To request access, contact the corresponding authors. Additional data supporting the findings in the current study are available from the corresponding authors upon reasonable request. All biological materials described in this study are available from G.F.H. at gfh@pitt.edu on reasonable request. Code availability Custom MATLAB code and other code used in this study are available at .
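The Methods above repeatedly convert between phage titre and cell density to reach a target MOI (for example, ~10 5 p.f.u. ml −1 onto cells concentrated to ~7 × 10 8 c.f.u. ml −1 in the adsorption assay). The bookkeeping is simple but easy to slip on; the helper below is our own illustration, not code from the study's repository, and only the numbers in the example call come from the text.

```python
# Titre/MOI bookkeeping used throughout the Methods (our convenience
# helper for illustration; this is not code from the study).

def moi(phage_pfu_per_ml: float, cells_cfu_per_ml: float,
        v_phage_ml: float = 1.0, v_cells_ml: float = 1.0) -> float:
    """Multiplicity of infection: total p.f.u. added per c.f.u. present."""
    return (phage_pfu_per_ml * v_phage_ml) / (cells_cfu_per_ml * v_cells_ml)

# Adsorption assay above: ~1e5 p.f.u. ml-1 onto cells concentrated 10-fold
# to ~7e8 c.f.u. ml-1 gives the quoted MOI of ~0.0001.
print(moi(1e5, 7e8))   # ~1.4e-4
```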
As antibacterial resistance continues to render obsolete the use of some antibiotics, some have turned to bacteria-killing viruses to treat acute infections as well as some chronic illnesses. Graham Hatfull, the Eberly Family Professor of Biotechnology in the Kenneth P. Dietrich School of Arts and Sciences at Pitt, has pioneered the use of these viruses—bacteriophages, phages for short—to treat infections in chronic diseases such as cystic fibrosis. Although the importance of resistance may have eluded the early discoverers of antibiotics, Hatfull is intent on understanding how bacteria become resistant to phages. His lab has just discovered how a specific mutation in a bacterium results in phage resistance. The results were published Feb. 23 in the journal Nature Microbiology. The new methodology and tools his team developed also gave them the opportunity to watch in unprecedented detail as a phage attacks a bacterium. As the use of phage therapy expands, these tools can help others better understand how different mutations protect bacteria against invasion by their phages. For this study, the team started with Mycobacterium smegmatis, a harmless relative of the bacteria responsible for tuberculosis, leprosy and other hard-to-treat, chronic diseases. They then isolated a mutant form of the bacterium that is resistant to infection by a phage called Fionnbharth. Infection of Mycobacterium smegmatis by a genetically engineered mutant of phage Fionnbharth. Three steps in the infection process can be seen: 1) A single phage particle binds to the bacterial cell and is seen as a red dot 0.42 seconds into the video. 2) Two seconds into the video, green fluorescence is observed where the phage has injected its DNA into the cell. The green fluorescence comes directly from the phage DNA (ignore the bright green dots at the very ends of the cell). Over the next few seconds, the green-labeled DNA forms a zone of phage replication (ZOPR) and spreads throughout the cell. 3) At 6.25 seconds, lysis occurs and the cell explodes. Total time elapsed is about three hours. Credit: Charles Dulberger To understand how the specific mutation in the lsr2 gene helps these resistant bacteria fight off a phage, the team first needed to understand how phages kill a bacterium without the relevant mutation. Carlos Guerrero-Bustamante, a fourth-year graduate student in Hatfull's lab, genetically engineered two special kinds of phages for this study. Some produced red fluorescence when they entered a bacterial cell. Others had segments of DNA that would stick to fluorescent molecules, so phage DNA would light up in an infected cell. Following the fluorescent beacons, "We could see where the phage DNA entered the cell," Guerrero-Bustamante said. The imaging methods they used were designed by Charles Dulberger, a collaborator and co-first author of the paper who was then at Harvard T.H. Chan School of Public Health. "We saw for the first time how the phages take that first step of binding to cells and injecting their DNA into the bacteria," said Hatfull, who is also a Howard Hughes Medical Institute Professor. "Then we applied those insights to ask, 'So, how's it different if we get rid of the Lsr2 protein?'" The link between Lsr2 and phage resistance had not been previously known, but with their new methods and tools, the team clearly saw the critical role it played. Typically, Lsr2 helps bacteria replicate their own DNA. 
When a phage attacks, however, the virus co-opts the protein, using it to replicate phage DNA and overwhelm the bacteria. When the lsr2 gene is missing or defective—as in the phage-resistant Mycobacterium smegmatis—the bacterium doesn't make the protein, and phages don't replicate enough to take over the bacterial cell. This was a surprise. "We didn't know Lsr2 had anything to do with bacteriophages," Hatfull said. These new tools can be used to uncover all manner of surprises written in the genes of phage-resistant bacteria. They may also help today's researchers and tomorrow's clinicians to better understand and take advantage of phages' abilities while avoiding the missteps that led to antibiotic resistance. "This paper focuses on just one bacterial protein," and on resistance to just one phage, Hatfull said, but its implications are wide. "There are lots of different phages and lots of other proteins."
10.1038/s41564-023-01333-x
Earth
New analysis technique suggests there is far less gas to be fracked in U.K. than thought
Patrick Whitelaw et al. Shale gas reserve evaluation by laboratory pyrolysis and gas holding capacity consistent with field data, Nature Communications (2019). DOI: 10.1038/s41467-019-11653-4. www.nature.com/articles/s41467-019-11653-4 Journal information: Nature Communications
http://dx.doi.org/10.1038/s41467-019-11653-4
https://phys.org/news/2019-08-analysis-technique-gas-fracked-uk.html
Abstract Exploration for shale gas occurs in onshore basins, with two approaches used to predict the maximum gas in place (GIP) in the absence of production data. The first estimates adsorbed plus free gas held within pore space, and the second measures gas yields from laboratory pyrolysis experiments on core samples. Here we show the use of sequential high-pressure water pyrolysis (HPWP) to replicate petroleum generation and expulsion in uplifted onshore basins. Compared to anhydrous pyrolysis, where oil expulsion is limited, gas yields are much lower, and the gas at high maturity is dry, consistent with actual shales. Gas yields from HPWP of UK Bowland Shales are comparable with those from degassed cores, with the ca. 1% porosity sufficient to accommodate the gas generated. Extrapolating our findings to the whole Bowland Shale, the maximum GIP equates to potentially economically recoverable reserves of less than 10 years of current UK gas consumption. Introduction Shale gas arises from the cracking of insoluble organic matter in source rocks (kerogen) and of any oil retained in the pores 1 , 2 , 3 . Shale gas produced in the USA is generally quite dry, with methane contents typically over 75% 3 , 4 , 5 , and shales need a vitrinite reflectance (VR) maturity of >1.4% Ro to produce dry gas 6 . To guide exploration and development where production has not commenced, it is essential that rigorous methodologies are established to estimate the maximum recoverable reserves. The UK is such a case, with the Carboniferous Bowland-Hodder Shale being the major gas source 7 , 8 , 9 , 10 , 11 . It has been estimated that the gas in place (GIP) for the entire Bowland Shale is large, with the Upper and Lower units containing 164–447 and 658–1834 trillion standard cubic feet (TCF), respectively 7 . However, this estimate was based on adsorbed and free gas estimates for US shales, and assumed that all Bowland Shale source rock with a maturity above 1.1% Ro had already generated gas, even though the producing US shales (Barnett, Marcellus and Fayetteville) have VR >1.4% Ro 6 . The large UK estimate may also be due to the assumption that all Carboniferous Shales of the Bowland basin are potential shale gas source rocks 7 . Rock-Eval pyrolysis is the standard approach for assessing source rock potential and quality, in which volatile hydrocarbons are measured as they evolve 12 , 13 . Although hydrocarbon gases are not measured, an empirical relationship based on the S1 and S2 parameters (free hydrocarbons and remaining hydrocarbon generation potential, respectively) has been developed to estimate shale gas yields 6 . Closed system pyrolysis uses micro-scale sealed vessels (MSSV) in which all volatiles are retained within the system 14 , 15 . The drawback with both techniques is that they do not replicate oil expulsion during maturation. In hydrous pyrolysis, albeit in a closed system, the oil generated is expelled into the water phase and is thus not in as close contact with the source rock, better replicating actual expulsion 16 . However, water and vapour are in equilibrium, with the pressure set by the temperature of the experiment. To better replicate petroleum systems, high pressure water pyrolysis (HPWP), in which there is no free vapour space in the reactor, can be used to understand source rock maturation, hydrocarbon generation and associated pressure effects 17 , 18 , 19 , 20 . We use sequential HPWP here to predict the maximum GIP using oil window and gas window mature UK Bowland Shales, with expelled oil being removed at each stage. 
Comparisons are drawn firstly with recent reports for degassed core samples 21, 22 and then with the adsorbed plus pore (free) gas estimated for the gas window shale. It must be remembered that some differences between the studies arise from the samples coming from different locations within the basin, with consequent differences in sediment provenance and in the stratigraphical, structural and tectonic histories of the different parts of the same basin 12. Moisture equilibration is essential since it affects both the free and adsorbed gas, and vast reductions in the amount of adsorbed methane with increasing humidity have been reported 23. Further, much of the variation in the reported porosities of shales (1–8%) arises from the extent to which shales are moisture-equilibrated 6, 24, 25. The implications of our findings for the entire Bowland Shale gas resource are considered on a basin-wide basis, and we show that the resource is actually ~10 times lower than previously thought. Results Gas and oil yields The methane and total hydrocarbon (C1–C5) gas yields from the five stages in sequential HPWP for the oil window mature Rempstone shale (0.71% Ro, containing a mixture of types II, III and IV kerogen, Supplementary Table 1, Supplementary Fig. 1) investigated at 800 bar and under anhydrous conditions are presented in Fig. 1a, b, respectively, together with the yields of oil expelled and the heavier oil/bitumen retained in the shale. The full gas compositions, vitrinite reflectance and Rock-Eval pyrolysis results for the matured shale samples from these experiments are listed in Supplementary Table 2, together with those for experiments at 300 bar. Maturities >2.3% Ro were attained to represent the high maturities of the gas window. Slightly higher Ro values were achieved at 300 bar due to the previously described retardation effect of higher pressure on maturation 20. Fig. 1 Total hydrocarbon gas (C1–C5), methane, expelled oil and retained oil/bitumen yields (mg/g TOC of the rock at the end of each stage) for the Rempstone shale. a Sequential HPWP (800 bar). b Anhydrous experiments. For stages 3–5 in the HPWP experiment, the results are also presented for the sample that was solvent extracted at the end of stage 2. The VR % (Ro) values below each histogram are for the start and end of each stage. The differences in the measured values from duplicate tests are generally within 6% for gas (C1–C5 hydrocarbon) yields, 4% for expelled and retained oil yields and 2% for gas dryness. In HPWP, oil generation peaks at a VR of 1.0% Ro (stage 1), and extremely dry gas generation at >2% Ro only commences when the residual oil level is reduced to ca. 5% of its maximum value at the end of stage 2 (Fig. 1a), corresponding to only 1% of the initial total organic carbon (TOC) of the Rempstone shale. However, extracting the residual oil after stage 2 (1.3% Ro), to effectively increase the extent of expulsion to over 90%, reduced the gas yield by nearly 50% from ca. 22 to 11 mg (g TOC)−1 and increased the dryness to over 60% (Fig. 1a), with the dry gas yield from stages 4 plus 5 at >2% Ro being similar (12 and 14 mg (g TOC)−1) for both the 800 bar unextracted and extracted Rempstone shale (Fig. 1a, Supplementary Table 2). The small quantities of retained oil present after stage 3, combined with the higher maturity, made the gas from stage 4 considerably drier. The gas yields obtained at 800 bar were very similar to the 300 bar yields for Rempstone (Fig. 2 and Supplementary Table 2), confirming that over the maturity range of 1.3–2.0% Ro, the retained oil levels dictate the amount of gas generated, with the dryness increasing with decreasing gas yield. In geological settings before uplift occurs, it is likely that nearly all the oil will be expelled, based on the evidence from US shales, for example the Marcellus, where relatively dry gas is obtained at >1.4% Ro 6. Fig. 2 Total hydrocarbon gas (C1–C5), methane, expelled oil and retained oil/bitumen yields (mg/g TOC of the rock at the end of each stage) for the HPWP experiment on Rempstone shale at 300 bar. The VR % (Ro) values below each histogram are for the start and end of each stage. The differences in the measured values from duplicate tests are generally within 6% for gas (C1–C5 hydrocarbon) yields, 4% for expelled and retained oil yields and 2% for gas dryness. Overall, the gas yields from the HPWP experiments are nearly three times lower than from anhydrous pyrolysis, with the gas under anhydrous conditions being considerably wetter at high maturities (>2% Ro), with a dryness of only 66 and 69% (stages 4 and 5, respectively, Fig. 1b). This vast difference arises because virtually no expulsion of oil occurs in anhydrous pyrolysis, with the retained oil/bitumen remaining constant after stage 2, compared with the 80% expulsion occurring in HPWP. Cracking of oil gives considerably wetter gas than direct generation from kerogen 20. Our total gas yield from anhydrous pyrolysis (137 mg (g TOC)−1) is comparable to that reported in a previous study (mean of 154 mg (g TOC)−1 for source rocks with hydrogen indexes (HIs) of ca. 400) 14. Figure 3 presents the gas and the expelled and retained oil/bitumen yields for the gas window (Grange Hill core, 1.95% Ro), where the HPWP experiment commenced at stage 3 due to its high starting maturity. The full gas compositions, vitrinite reflectance and Rock-Eval pyrolysis results for the matured shales are listed in Supplementary Table 3. Relatively dry gas was obtained from the first stage (stage 3) of the experiment, between 1.9 and 2.1% Ro, and the dryness then increases to over 90% for stages 4 and 5. Due to the shorter time spent at higher temperature in stage 3 for this maturity range, dry gas generation has been brought forward. The stage 3 gas yield for Grange Hill is much lower than for Rempstone (Figs. 1a, b and 2) due to the higher starting maturity. The low yield of gas from stage 5 (2.3–2.5% Ro) indicates that the end of the gas window has been reached. The dry gas yields at maturities greater than ca. 2% Ro are similar for both cores (10–14 mg (g TOC)−1), indicating that any variations in kerogen type do not impact significantly on gas yields at high maturities. Fig. 3 Total hydrocarbon gas (C1–C5), methane, expelled oil and retained oil/bitumen yields (mg/g TOC of rock at the end of each stage) for the sequential HPWP experiment on the Grange Hill core sample at 300 bar. The VR % (Ro) values below each histogram are for the start and end of each stage.
The differences in the measured values from duplicate tests are generally within 6% for gas (C1–C5 hydrocarbon) yields, 4% for expelled and retained oil yields and 2% for gas dryness. Comparison with degassed cores Assuming that all the gas generated remains in the shale, converting the HPWP gas yields for stages 3–5 for the Rempstone shale to a volumetric basis normalised to a TOC content of 2% (the mean for the whole Upper Bowland Shale) gives a total of 22–28 scf tonne−1, with 8–14 scf tonne−1 being generated over the range 1.3–2.0% Ro, the uncertainty arising from the profound effects that the relatively small amounts of retained oil/bitumen have on the gas yields. The desorbed gas contents (adsorbed gas measured from desorption experiments) obtained for Grange Hill and two neighbouring wells were in the range 20–50 scf tonne−1 at 1.9–2.3% Ro 22, generally increasing with maturity, with mean values normalised to a TOC of 2.0% being 25–28 scf tonne−1 for the Lower Bowland Shales investigated. A range of 10–50 scf tonne−1 has been reported for the Kirby Misperton-8 well in the Cleveland Basin covering a maturity range of 1.3–2.0% Ro 21, but mainly at the higher maturities, where normalising to a TOC of 2.0% gives a mean of 12 scf tonne−1. Overall, this level of consistency between the HPWP results and the degassed cores implies that most of the gas generated from 1.3% Ro onwards is retained in the shales. Porosity and adsorbed gas measurements The data for nitrogen (N2) sorption isotherms, mercury intrusion porosimetry (MIP) and X-ray computer tomography (XRCT) for the gas window Grange Hill core are presented in Figs. 4 and 5. Figure 4 shows that the Grange Hill initial to stage 5 samples have a type IV isotherm representing a meso- and macroporous pore network, with mesopore volume increasing with maturity during HPWP (Supplementary Table 4). However, at 50% relative humidity (RH), the mesopore volume observed by N2 adsorption isotherms decreases by 35–40% for Grange Hill initial and 39% for stage 5. Maturation in HPWP (stage 5) did not induce a significant change in the macropore volume (Fig. 5), and the increases in Brunauer-Emmett-Teller (BET) surface area and pore volumes during the gas window (Supplementary Tables 4 and 5) are broadly consistent with those reported in a previous study 26. MIP can only be conducted on vacuum-dried samples, but still shows that mesopores are dominant. Mesopore volumes from MIP are a factor of 6 greater than for N2 adsorption (Supplementary Tables 4 and 5); this can be attributed to a 'pore shielding' mechanism, where mercury is shielded from a large cavity by a narrower neck/window size pore in the mesopore range, and once intrusion occurs the large cavity volume is added to the mesopore volume. This is also evident in the N2 isotherms, which show (Fig. 4) that nitrogen condensate can only desorb from larger mesopores when a narrower neck empties, the hysteresis in the desorption branch indicating possible ink-bottle pores. MIP indicates that the dry porosity is only 1.1% for the initial shale (Supplementary Table 4, multiplying 0.0042 cm3 g−1 by the skeletal density of 2.689 g cm−3 from helium pycnometry). However, this low value could be partially attributable to the drilling mud present. After HPWP, the dry porosity increases to ca. 7%, but the moisture content of ca. 2% w/w, corresponding to ca. 5% by volume, means that the wet porosity will be close to 1%. Fig. 4 N2 sorption isotherms for dry initial and matured Grange Hill shale samples from sequential HPWP pyrolysis (solid lines are for adsorption and dotted lines for desorption). Fig. 5 XRCT pore visualisation for the Grange Hill shale sample from stage 5 of the sequential HPWP experiment. a All pores, showing the fissures induced. b Pores between 2.75 and 40 µm, excluding the larger fissures, for comparison with the MIP pore range. Macropores imaged by XRCT are shown in Fig. 5, with volume and size distributions listed in Supplementary Table 6. The large fissures induced by HPWP have created an additional 1.4% porosity for the moisture-equilibrated sample used for XRCT. Taking the XRCT pore volume for the initial shale and adding this to the meso- and micropore volume from N2 adsorption for the shale equilibrated at 50% RH gives a total porosity of little more than the 0.4% observed by XRCT. The low N2 adsorption meso- and micropore volumes could be influenced by the drilling mud present. However, even taking the dry porosity for the drilling mud extracted sample, which will be an over-estimate, gives a micro/mesopore porosity of ca. 0.6% and a total close to 1.0%. For the HPWP stage 5 sample, this analysis gives a porosity of 1.6% for the 50% RH data from N2 adsorption isotherms, which will be lower at 100% RH. Further, it is uncertain whether the HPWP treatment in itself increases micro/mesopore volume, given that a small increase was observed by XRCT for the 2.75–40 µm macropores, but this is not expected to be significant at high humidity. Either way, the evidence overall suggests that 1.0% is a reasonable estimate of the porosity at high humidity for the Grange Hill shale. The high-pressure methane adsorption isotherms obtained for the HPWP matured gas window Grange Hill shale, both dry and moisture equilibrated (50 and 100% RH), at 25, 60 and 100 °C (Fig. 6) all display type I isotherms, indicating micropore filling behaviour. Supplementary Table 7 lists the adsorption capacities at 100 and 300 bar, including monolayer capacities derived using the dual-site Langmuir equation. The stage 5 sample, dry at 25 °C, shows the largest methane adsorption, with a monolayer capacity (Qm) of 1.37 mg g−1. Overall, the results confirm that adsorption capacities increase with maturity 25, 26. Micropores are reduced considerably after equilibration with moisture at 50% RH, Qm dropping by 27% to 1.00 mg g−1 for the stage 5 sample. However, adsorption reduces further when taking into account both the temperature and the assumed humidity at the depth of this shale (100 °C and 100% RH, respectively), dropping by another 85% to a Qm of 0.15 mg g−1. Thus, the combined effect of humidity (dry to 100% RH) and of temperature going from ambient to 100 °C is to reduce the equilibrium methane adsorption capacity by a factor of 9, consistent with previous studies 23, 27. The amount of adsorbed methane is reduced further if present-day pressures of shales are below the ca. 350 bar where equilibrium is reached (Fig. 6). On the other hand, these estimates may be low if capillary condensation is neglected 28, but this only occurs to a significant extent for wet gas. Methane adsorption capacities reported for other shales range from 0.26 (Eagle Ford) to 1.50 mg g−1 (Barnett) for US shales (measured at 40–50 bar) 29 and between 1.00 and 4.08 mg g−1 for Qiongzhusi shale, China (measured at 140 bar) 30.
Not surprisingly, these estimates are considerably lower than for isolated type II kerogen 31, which had an adsorption capacity of 15 mg g−1. Fig. 6 High-pressure methane adsorption isotherms fitted to the dual-site Langmuir model (dashed lines) for 50% RH, 100% RH and dry samples at 25, 60 and 100 °C for matured Grange Hill shale samples from stage 5 of the sequential HPWP experiments. The data points are the mean from duplicate experiments and the error bars represent the difference between each pair of values obtained. Discussion To compare the GIP estimates from our pyrolysis experiments with the adsorbed plus free gas measurements for the Grange Hill core, a present-day temperature and hydrostatic pressure of 100 °C and 300 bar were assumed, matching many gas window Bowland Shales with ca. >2.0% Ro (Supplementary Fig. 2). Figure 7 compares the adsorbed and free gas estimates for this scenario with the HPWP gas yields, assuming a porosity of 1% for the water-equilibrated shale. The maximum HPWP yield of 37 scf tonne−1 for Rempstone across the whole gas window, adjusted to a TOC of 3.0% to match the value for the Grange Hill core, is comparable to the shale holding capacity, indicating that over-pressure will only occur either at higher TOCs or if the porosity is much less than 1%. Fig. 7 Comparison of gas generated at >1.3% Ro and shale holding capacity (free plus adsorbed gas) based on the results for Rempstone and Grange Hill shales normalised to 3% TOC (TOC after stage 5 of HPWP for Grange Hill). To extrapolate our findings to estimate the maximum GIP, the calculation procedure described in the Methods section was used. Our estimates have been calculated by apportioning the estimated mean net Upper Bowland Shale volume (32.9 TCF) used previously 7 between the thermal maturity ranges studied by HPWP for the gas window. Thus, we estimate that the shale volume in the gas generating window (>1.3% Ro) is probably only 21.5 ± 3 TCF, with 8 ± 2 TCF at >2.0% Ro. Note that this estimate of the Bowland Shale volumes in the particular maturity ranges takes no account of whether or not the rock formation is currently at depths >1500 m, which is the base for gas-shale production 7. The gas yield for the unextracted Rempstone shale from stages 3, 4 and 5 was assumed to be the upper bound of gas generation, and the corresponding yield for the extracted sample the lower bound (Supplementary Table 8). To provide a maximum estimate, we assume that the shale in the maturity range 1.3–2.0% Ro has generated the maximum stage 3 yield in HPWP and that the shale at >2% Ro has generated the total HPWP gas yield for stages 3–5. This gives an estimated total GIP of 28 ± 11 TCF, with 16 ± 6 TCF at maturities >2% Ro. This total is ca. 10 times lower than the previous mean estimate 7, a factor of 5 arising from the lower estimated gas yields and a further factor of 2 from tailoring the volume of shale to the maturity range over which relatively dry gas is actually generated. The Lower Bowland Shale is estimated to be four times larger by volume than the Upper Bowland, where we assume that its lower average TOC 7 is roughly offset by the higher overall maturity range arising from its overall greater depth. This takes the maximum GIP estimate to 140 ± 55 TCF. Given that UK gas consumption is currently ca. 2.8 TCF per annum 32, and assuming an economic recovery of 10%, which is unlikely for much of the Lower Bowland Shale due to its depth of over 3000 m, this represents a maximum of 14 ± 6 TCF, considerably below 10 years of supply at the current consumption. Clearly, more shales need to be investigated, covering different lithologies and smaller maturity increments, particularly in the range 1.3–2.0% Ro, to provide more precise information as to how much lower the actual GIP is than this maximum estimate. Methods Bowland Shale samples The Carboniferous, basal Namurian, Upper Bowland Shale Formation was deposited in parts of the East Midlands, North Wales and Northern England in a series of subsiding grabens and half-grabens 33, 34, 35, 36. The thickness of the entire Bowland-Hodder Shale varies from 3.5 km in parts of the East Midlands to ~0.1 km in parts of the Derbyshire Dome and Cheshire Basin (Fig. 18 in ref. 7), with the most prospective areas being the Bowland Basin (including the Fylde), Gainsborough Trough, Widmerpool Gulf and Cleveland Basin (North Yorkshire). The shale contains hemipelagic mudstones and mass-flow limestones, sandstones and rare volcanics passing laterally into platform/ramp carbonates 7, with these lithologies presumably having lower potential for gas generation than the hemipelagic mudstones. However, these non-shale lithologies form an essential component of a gas-shale source-reservoir rock, since the production of gas via fracking from shales requires that the total clay content is <35%. The proportion of high potential gas source shale relative to the other lithologies with low potential can be as high as 75% in the Lower Bowland Shale of the Fylde area (Lancashire), reducing to close to zero in the East Midlands Shelf, although in the nearby Widmerpool Gulf organic-rich hemipelagic shales occur 7. In the Upper Bowland Shale, shale is the dominant lithology. The estimates for the gas volumes in place assume the presence of 30% shale in the lower part and 50% in the upper part. Although data are sparse, this indicates that the Lower Bowland Shale will have a lower average TOC. However, we have taken the average TOC for the whole shale to be 2.0% 7 to estimate the GIP. In the Widmerpool Trough, the Rempstone-1 well is on the southern edge, and the Bowland Shale is underlain by the Widmerpool Formation and other Visean shales, limestones and siltstones. This oil window mature shale is from a borehole core (Rempstone-1 well) of Namurian (Pendleian) age obtained at a depth of between 665 and 667 m. Whereas the Grange Hill-1 3113 m sample is from the Lower Bowland Shale (Brigantian, Dinantian) with a provenance from the prodelta sources to the north east, the Rempstone sample comes from the Upper Bowland Shale (Pendleian, Namurian) with a provenance from the prodeltas to the north and south of the Widmerpool Trough on the Derbyshire and Midlands Highs. Drilling ceased within the Lower Bowland Shale in the Grange Hill-1 well, but the evidence from the nearby Becconsall-1z well and the Clitheroe and Lancaster Fells districts is that the Lower Bowland Shales are underlain by shales and limestones, as in the Rempstone-1 well 22. Cessation of rifting occurred across large parts of the UK during the late Visean and was followed by a period of regional thermal subsidence.
While shale deposition continued in the Widmerpool Trough until Kinderscoutian/Marsdenian times, culminating with the siliciclastic sandstones of the Millstone Grit Group, the Upper Bowland Shale in Grange Hill-1 is overlain by the Pendle Grit (part of the Millstone Grit Group). These sandstones represent the progradation of deltas across the Visean and early Namurian basins. Both the Grange Hill-1 and Rempstone-1 wells were inverted and eroded during the Variscan orogeny in the Late Carboniferous, prior to deposition of Permo-Triassic rocks. Both these cores have TOCs higher than the average of 2.0% for the Bowland Shale 22. Soxhlet extraction to determine bitumen/oil content of rocks Soxhlet extraction was used to determine the bitumen/oil content of the core samples and was carried out using a cellulose extraction thimble and a 250 ml round bottom flask. Prior to extraction, the cellulose extraction thimble was pre-extracted using 150 ml of a dichloromethane (DCM)/methanol mixture (93:7 volume:volume) for 24 h to remove any impurities present. The rock sample was ground into a fine powder, placed within the cleaned thimble, and extracted in the same manner as the thimble was cleaned. The extracted sample was then stored for analysis, and the solvent was evaporated using a rotary evaporator until the majority of the solvent was removed. The oil/bitumen remaining after evaporation was transferred to a pre-weighed vial using DCM and left to dry. The weight of the vial and extract was taken and the oil/bitumen weight calculated by difference after all the DCM had evaporated. Pyrolysis and product analysis Prior to pyrolysis, the non-extracted cores were crushed to 2–5 mm chips that were thoroughly mixed to obtain a homogeneous sample. Sequential pyrolysis tests were carried out under anhydrous (5–20 bar) and high-pressure water (300 and 800 bar) conditions in a 25 ml Hastelloy cylindrical pressure vessel rated to 1400 bar at 420 °C, connected to a pressure gauge and a rupture disc rated to 950 bar. Heat was applied by means of a fluidised sand bath, controlled by an external temperature controller. The sand bath (connected to a compressed air source) was pre-heated to the required experimental temperature and left to equilibrate before the start of each run. For all experiments, after the addition of sample and water (for runs with water added) to the reactor and assembly of the reactor, the reaction vessel system was flushed with nitrogen gas to replace air in the reactor head space, after which 2 bar of nitrogen was pumped into the pressure vessel system to produce an inert atmosphere during the pyrolysis runs. The 300 bar experiments at 350 and 380 °C were performed by initially filling the vessel with 15 ml of distilled water, after which the pressure vessel was lowered into the sand bath and allowed to attain vapour pressures of 175 and 235 bar at 350 and 380 °C, respectively, before the addition of excess distilled water via a compressed-air-driven liquid pump to increase the pressure to 300 bar. The 300 bar run at 420 °C was conducted by adding 10 ml of distilled water to the vessel; the expansion of the water gave the required pressure and the experiment was not further pressurised. The 800 bar experiments at all temperatures were performed similarly to the 300 bar runs at 350 and 380 °C, also filling the vessel initially with 15 ml of distilled water before increasing the pressure to 800 bar.
The anhydrous experiment was also performed in the same manner as the high-pressure water runs but without water; the 5–20 bar pressure observed was generated by the expansion of the 2 bar of nitrogen in the system during the run and by water generated from the shale during the experiment. After the required temperature and pressure for all conditions were attained, the experiments were allowed to run for the required time, after which the sand bath was switched off and left to cool to ambient temperature before product recovery 17, 18, 19, 20. The sequential experiments were conducted as described above and depicted in Fig. 8. The experiments were conducted starting with 19 g of the oil window mature Rempstone shale for all conditions. The starting rock (0.71% Ro and T max of 441 °C after the removal of suppression of VR 37 and T max 38, respectively) was first heated at 350 °C for 24 h; at the end of the run the experiment was stopped and allowed to cool to ambient temperature before the generated products (gas, expelled oil and pyrolysed rock) were recovered, and the pyrolysed rock was dried to remove water. After drying the rock, about 3 g was put aside for further analysis, and the rest re-heated. The process was repeated, heating the same rock sample successively at 380 °C for 24 h, 420 °C for 24 h, 420 °C for 48 h, and finally 420 °C for 120 h. Fig. 8 Schematic diagram showing the temperatures and times used for the 5 stages in the sequential pyrolysis experiments on the Rempstone shale. The sequential experiment for Grange Hill was started at stage 3 given the initial vitrinite reflectance of 1.95% Ro. For the gas mature Grange Hill core, with a starting vitrinite reflectance of 1.95% Ro, HPWP at 300 bar was conducted in the same manner as the Rempstone sequential pyrolysis, but starting at the third stage of the sequence. The core was heated successively at 420 °C for 24 h, 420 °C for 48 h, and finally 420 °C for 120 h. The removal of the expelled oil and the gas after each maturity stage enables the maturity interval to be identified over which dry shale gas will be generated. At the higher temperatures of 380 and 420 °C used to reach high maturities, the water is supercritical and could have greater extractive power, possibly leading to more oil being expelled than at 350 °C 19. After every pyrolysis stage, the gases were collected with the aid of a gas-tight syringe and transferred to a gas bag (after the total volume had been recorded), and immediately analysed for the C1–C5 hydrocarbon composition by gas chromatography on a Clarus 580 GC fitted with FID and TCD detectors operating at 200 °C. One hundred microlitres of gas sample was injected (split ratio 10:1) at 250 °C, with separation performed on an alumina PLOT fused silica 30 m × 0.32 mm × 10 µm column, with helium as the carrier gas. The oven temperature was programmed from 60 °C (13 min hold) to 180 °C (10 min hold) at 10 °C min−1. Individual gas yields were determined quantitatively in relation to methane (injected separately) as an external gas standard. The total yield of the hydrocarbon gases generated was calculated using the total volume of generated gas collected in relation to the aliquot volume of gas introduced to the GC, using relative response factors of individual C2–C5 gases to methane predetermined from a standard mixture of C1–C5 gases 19.
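To make the quantification scheme just described concrete, the sketch below mirrors the bookkeeping: peak areas are converted to masses against the methane external standard, corrected by relative response factors, and scaled from the 100 µl aliquot to the total recovered gas volume. The calibration factor and response factors are hypothetical placeholders (real values come from the separately injected methane standard and the C1–C5 mixture), and dryness is computed on a mass basis here for simplicity.

```python
# Sketch of the external-standard GC bookkeeping described above.
CAL_CH4 = 1.2e-7                             # hypothetical: mg CH4 per unit FID area
RRF = {"C1": 1.00, "C2": 0.95, "C3": 0.92,   # hypothetical response factors
       "C4": 0.90, "C5": 0.88}               # relative to methane

def gas_yields_mg(peak_areas, aliquot_ml, total_gas_ml):
    """Convert FID peak areas to mg of each gas, scaling from the
    injected aliquot up to the whole volume recovered from the vessel."""
    scale = total_gas_ml / aliquot_ml
    return {c: a * CAL_CH4 / RRF[c] * scale for c, a in peak_areas.items()}

areas = {"C1": 4.1e5, "C2": 9.0e4, "C3": 3.2e4, "C4": 1.1e4, "C5": 4.0e3}
y = gas_yields_mg(areas, aliquot_ml=0.1, total_gas_ml=120.0)  # 100 ul aliquot
dryness = 100.0 * y["C1"] / sum(y.values())  # % methane (mass basis here)
print({k: round(v, 1) for k, v in y.items()}, f"dryness = {dryness:.0f}%")
```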
The expelled oil floating on top of the water after the experiments was collected with a spatula and recovered by washing with cold DCM (for runs where expelled oil was generated), after which the water in the vessel was decanted and the pyrolysed rock oven dried overnight at 45 °C. The floating (expelled) oil on top of the water, together with the oil adhered to the side of the reactor wall (recovered by washing with DCM), were combined and referred to as expelled oil. About 1 g of the dried pyrolysed rock was crushed and Soxhlet extracted as described above to recover the oil retained in the rock (bitumen). Vitrinite reflectance (VR) Measurements were conducted on the solvent-extracted residues of the initial (non-extracted) and pyrolysed rocks mounted in epoxy resin, using standard methods 39. Prior to reflectance measurements, the samples were ground and polished using successively finer grades of silicon carbide and colloidal silica to produce a scratch-free polished surface. Measurements were made using a LEICA DM4500P microscope with a motorised fourfold turret for reflectance. The microscope was fitted with oil immersion objectives. The white light source was a 12 V 100 W halogen lamp with a LED illumination slider 29 × 11.5 mm in the incident light axis. Calibration was carried out using a 3.13% Ro zirconia standard and a blank (0% Ro), and was checked using a YAG standard (0.89% Ro) to ensure a linear calibration. Random VR (% Ro) measurements were carried out at 546 nm, and between 6 and 32 point counts were taken depending on the number of recognisable vitrinite particles available for measurement in each sample. Measurements and data were collected via the Hilgers Fossil Man system connected to the LEICA DM4500P microscope. Rock Eval pyrolysis and total organic carbon (TOC) Analyses were conducted on the initial and pyrolysed non-extracted and extracted rocks from the sequential experiments. Rock Eval pyrolysis used a Vinci Technologies Rock Eval 6 standard instrument, with about 60 mg of crushed powdered rock being heated using an initial oven programme of 300 °C for 3 min and then from 300 to 650 °C at the rate of 25 °C min−1 in an N2 atmosphere. The oxidation stage was achieved by heating at 300 °C for 1 min, then from 300 to 850 °C at 20 °C min−1, and holding at 850 °C for 5 min. Hydrocarbons released during the two-stage pyrolysis were measured using a flame ionisation detector (FID), and CO and CO2 were measured using an infra-red (IR) cell 20. Methane adsorption Isotherms were obtained using a Micromeritics High Pressure Volumetric Analyser (HPVA-100) at 25, 60 and 100 °C up to pressures of 100 bar on both moisture-equilibrated and dry shales. The crushed shale samples (2–5 mm), either with moisture present (equilibrated at 50 and 100% RH over 48 h) or vacuum dried for 48 h at 80 °C (dry), were loaded into the 10 cm3 sample cell (~10 g). Skeletal densities of the shale were calculated using helium pycnometry on the vacuum-dried shale, with the assumption that helium penetrates all accessible porosity. The free space for analysis was calculated by taking the free space of the empty cell calculated from helium expansion minus the volume of the shale. Monolayer capacities (Qm) were calculated using the dual-site Langmuir equation to predict adsorption beyond the experimental range, as it could not be reached through experimental means 40. N2 sorption isotherms The BET specific surface area and the micro, meso and macroporosity of the shale samples were analysed using a Micromeritics ASAP 2420 instrument.
Using N2 as the adsorbate at −196 °C, isotherms were acquired from 0.001 to 0.998 relative pressure. About 3 g of shale sample (2–5 mm) was placed into a glass tube with a filler rod. Dry samples were vacuum dried at 80 °C for 15 h prior to analysis. Wet samples (50% RH equilibrated) were frozen at −196 °C in liquid N2 for 30 min in the glass tube with filler rod prior to analysis, with the instrument and sample taken to vacuum manually with the frozen water held in the pores and on the surface of the samples. This method eliminates the free space procedure, as the isotherm is started immediately once the vacuum set-point is reached (0.013 mbar); a separate free space analysis was therefore carried out on blank tubes, similar to the method above for methane adsorption. Surface areas of the shale were calculated using the BET surface area equation from 0.05 to 0.25 relative pressure, giving positive BET C values 41. Micro- and mesopore volumes were determined using the Horvath-Kawazoe model, assuming slit pore geometry on a carbon/graphite surface. Mercury intrusion porosimetry (MIP) Macro- and mesopore volumes by MIP were measured with a Micromeritics Autopore IV mercury porosimeter. The shale (1.5 g, 2–5 mm) was vacuum dried for 48 h at 80 °C and placed within a 5 cm3 solid penetrometer, 0.366 IV. The pressure was increased stepwise from vacuum up to ~4137 bar, and the volume of mercury entering the shale pores was converted to pore volume and size. The radii of the penetrated pores at a given pressure were calculated using the Washburn equation for slit/angular shaped pores with a contact angle of 151.5° and a surface tension of 475.5 mN/m for mercury intrusion in shale 42, providing a pore size distribution from 231 µm to 3 nm. Humidity generation Controlled humidity was generated with an oversaturated salt solution placed into a pre-vacuumed desiccator. For 50% RH, 15 g of magnesium nitrate hexahydrate (Mg(NO3)2·6H2O) was dissolved in 10 ml of distilled water, and for 100% RH, 8 g of potassium nitrate (KNO3) was dissolved in 10 ml of distilled water 43. Samples were placed within the desiccator, which was subsequently sealed and evacuated for 3 min. The samples were then left to equilibrate for 48 h at 20 °C. X-ray computer tomography XRCT measurements were carried out on an Xradia Zeiss Versa XRM500 CT system with a maximum electron acceleration of 160 kV. Images were captured using a 2 × 2 camera binning mode over a 180° rotation using the parameters in Table 1. Pore size modelling was conducted using the Avizo version 9.0.1 programme. Sub-volumes were extracted using a 600 voxel count per axis, equivalent to 1.5 mm. Non-local means filtering was applied using a 21 pixel search window, a local neighbourhood of 5 pixels and a similarity value of 0.6. Segmentation was applied to identify pores occurring within the thresholds of 0–5800 for Grange Hill virgin extracted and 0–6500 for Grange Hill 300 bar 420 °C 120 h. Volume fraction analysis and labelling were applied to identify pore volume and distribution. Sieve analysis was applied to pores with a diameter between 2.75 and 40 µm, with volume fraction and labelling applied to identify pore size, volume and distribution within this range for comparison with the MIP pore range.
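For reference, the Washburn conversion used in the MIP analysis above can be written in a few lines, assuming the slit-pore form P = −2γ cos θ/w with the quoted contact angle and surface tension. Geometry conventions (slit width versus cylinder radius) differ between laboratories, so the output should be read as indicative.

```python
import math

GAMMA = 0.4755                # N/m, mercury surface tension used in the paper
THETA = math.radians(151.5)   # mercury/shale contact angle used in the paper

def pore_width_nm(pressure_bar):
    """Smallest pore width intruded by mercury at a given pressure,
    from the slit-pore Washburn relation w = -2*gamma*cos(theta)/P."""
    p_pa = pressure_bar * 1e5
    return -2.0 * GAMMA * math.cos(THETA) / p_pa * 1e9

# ~0.036 bar gives ~232,000 nm (~232 um, cf. the 231 um quoted above);
# the ~4137 bar instrument maximum gives pores of a few nm.
for p in (0.036, 1, 100, 4137):
    print(f"{p:>8} bar -> {pore_width_nm(p):12.1f} nm")
```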
Table 1 XRCT parameters. GIP calculation for the entire Bowland Shale To estimate the GIP for the entire Bowland Shale from the estimated maturity profile, the individual gas yields were converted from milligrams to volume using their different gas densities to obtain the total (C1–C5) gas volume (cubic feet), and the pyrolysed rock was converted to volume assuming a bulk shale density 7 of 2.6 g cm−3, similar to the Grange Hill core, as depicted in Supplementary Fig. 2. Our estimates have been calculated taking the estimated amount of shale in the three different thermal maturity ranges measured in the HPWP experiments, namely 1.3–2.0, 2.0–2.3 and >2.3% Ro. The present-day temperature maturity gradients from the petroleum system models 7 were used to assess the maturity range of the Bowland Shale. These were then used to split the percentage volumes of the shale reservoir into the various maturity ranges. A hydrostatic gradient was used to predict the pressure-depth histories, as in previous models 7. The pore pressure is given by the pore fluid density (water assumed), gravitational acceleration and the depth of burial at the present day. The advantage of using the same pressure assumptions for assessing the proportions of the Bowland Shale in the different maturity windows is that the maturity-depth gradients in the wells are the same as in the previous reports 7. The estimates of the area of the Bowland Basin at particular levels of maturity were made from the well maturity gradients and the present-day depth to the top of the Bowland Shale 7. As indicated, our estimates have been calculated by apportioning the Upper Bowland shale volume using the previously reported median result 7, with a volume of 9.31e11 m3 (32.9 TCF). The volume of Lower Bowland shale was assumed to be four times that of the Upper Bowland 7. The basin volume was sub-divided by maturity range using the following estimates: 35% (±10%) between 1.1 and 1.3% Ro; 40% (±15%) between 1.3 and 2.0% Ro; 5% (±2%) between 2.0 and 2.3% Ro; 15% (±5%) between 2.3 and 3% Ro; 5% (±2%) >3% Ro. Data availability The data underlying Figs. 1, 2, 3 & 7 are presented in the Supplementary Tables, and the source data supporting Figs. 4, 5 & 6 are available from the corresponding author upon reasonable request.
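To make the basin-scale arithmetic above concrete, a minimal sketch of the yield-to-GIP conversion and the years-of-supply figure follows. The density, TOC, shale volume, maturity fractions and recovery factor are taken from the text; treating the total gas as methane at its standard density (~0.678 kg per standard m3) and the choice of illustrative mid-range yields are simplifying assumptions of the sketch, so it reproduces only the order of magnitude of the published estimates.

```python
RHO_ROCK = 2.6        # tonnes per m3, bulk shale density used in the paper
RHO_CH4 = 0.678       # kg per standard m3 of methane (sketch assumption)
M3_PER_SCF = 0.0283168
SCF_PER_TCF = 1e12

def scf_per_tonne(yield_mg_per_g_toc, toc_frac=0.02):
    """Gas yield (mg per g TOC) -> standard cubic feet per tonne of rock.
    mg/g TOC x TOC fraction equals kg of gas per tonne of rock."""
    kg_per_tonne = yield_mg_per_g_toc * toc_frac
    return kg_per_tonne / RHO_CH4 / M3_PER_SCF

def gip_tcf(rock_m3, fraction, yield_mg_per_g_toc):
    tonnes = rock_m3 * fraction * RHO_ROCK
    return tonnes * scf_per_tonne(yield_mg_per_g_toc) / SCF_PER_TCF

UPPER = 9.31e11                          # net Upper Bowland volume, m3 (32.9 TCF)
gip_upper = (gip_tcf(UPPER, 0.40, 22.0)  # 1.3-2.0% Ro, stage-3 upper bound
             + gip_tcf(UPPER, 0.25, 24.0))  # >2.0% Ro (5+15+5%), stages 3-5
total = gip_upper * 5                    # Lower Bowland taken as 4x the Upper
years = total * 0.10 / 2.8               # 10% recovery, 2.8 TCF/yr consumption
print(f"Upper Bowland GIP ~{gip_upper:.0f} TCF, whole shale ~{total:.0f} TCF,"
      f" ~{years:.0f} years of UK supply")  # cf. 28 +/- 11 TCF and <10 years
```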
A team of researchers from the University of Nottingham, the British Geological Survey (BGS) and Advanced Geochemical Systems Ltd has found evidence suggesting that the amount of shale gas available for fracking in the U.K. is much smaller than previously thought. In their paper published in the journal Nature Communications, the group describes their new technique and what it showed. Just six years ago, the BGS announced that gas fields beneath the ground in parts of England and Scotland held approximately 1,300 trillion cubic feet of obtainable shale gas. Since that time, energy firms have instigated fracking projects that have extracted some of that gas. But others have complained that doing so has caused small earthquakes in areas near the extraction sites. Also, some environmentalists in the country have suggested that relying on fracked gas undermines efforts to convert the country to more sustainable resources. In this new effort, the researchers used what they describe as a new technique to estimate the amount of gas under the ground in the U.K. and found it to be much less than what the BGS found in 2013. They suggest there is enough for just seven to 10 years of extraction, not the 50 claimed by researchers on the earlier study. The researchers describe their new technique as based on a study of actual shale deposits, using gas adsorption data along with field data. They studied shale samples from two locations in the Bowland Shale and used them to calculate the amount of gas at the entire site. They note that the 2013 study included no field studies; the researchers on that project used data from energy companies. The researchers with the new effort further note that great strides have been made in learning how to measure gas below the surface over the past several years. The lead scientist at BGS, Mike Stephenson, who was not involved in the new effort, suggested to the press that much more study of U.K. gas fields is required to determine the true amount of shale gas.
10.1038/s41467-019-11653-4
Chemistry
A slingshot to shoot drugs onto the site of an infection
Ranallo, S. et al. "Antibody powered nucleic acid release using a DNA-based nanomachine." Nature Communications (2017). DOI: 10.1038/ncomms15150 Journal information: Nature Communications
http://dx.doi.org/10.1038/ncomms15150
https://phys.org/news/2017-05-slingshot-drugs-site-infection.html
Abstract A wide range of molecular devices with nanoscale dimensions have been recently designed to perform a variety of functions in response to specific molecular inputs. Only limited examples, however, utilize antibodies as regulatory inputs. In response to this, here we report the rational design of a modular DNA-based nanomachine that can reversibly load and release a molecular cargo on binding to a specific antibody. We show here that, by using three different antigens (including one relevant to HIV), it is possible to design different DNA nanomachines regulated by their targeting antibody in a rapid, versatile and highly specific manner. The antibody-powered DNA nanomachines we have developed here may thus be useful in applications like controlled drug-release, point-of-care diagnostics and in vivo imaging. Introduction One of the most exciting research paths in the field of nanotechnology and supramolecular chemistry is aimed at rationally designing and developing responsive molecular machines that, like naturally occurring proteins, can perform a specific function in response to a certain molecular input 1 , 2 , 3 , 4 , 5 . Several supramolecular nanodevices of increasing chemical complexity have been described in the recent years for applications ranging from controlled release of a therapeutic cargo 6 , signal transduction 7 , 8 and sensing 9 . With its highly predictable base-pairings, its low cost, ease of synthesis and biocompatibility, DNA has become the material of choice to design and engineer nanomechanical devices and machines that display specific structures and functions 10 , 11 , 12 , 13 , 14 . A wide range of DNA-based nanodevices have been reported that, in response to a specific molecular cue, can give a signal, release a cargo or perform a directional motion 15 , 16 , 17 , 18 , 19 , 20 , 21 , 22 . Despite their impressive performances, a limitation affects these DNA-based nanodevices: their activity, in fact, is usually triggered by a quite restricted class of molecular cues and inputs. These inputs range from environmental stimuli (like pH or temperature) 23 , 24 to chemical inputs that, in the majority of cases, are limited to DNA strands or, more seldom, to small molecules and proteins 25 , 26 , 27 . Several DNA-based sensors, able to signal the presence of a specific antibody, have been reported to date 28 , 29 , 30 . However, the demonstration of DNA nanomachines performing more complex functions (such as, for example, drug-release) and employing antibodies as regulatory inputs has witnessed only limited efforts 31 , 32 . Thus motivated, here we report an antibody-powered DNA-based nanomachine that can reversibly load and release a molecular cargo on binding to a specific antibody. Our strategy takes inspiration from transport proteins, highly evolved machines that are essential to the crucial mechanism of cell transport 33 , 34 . These proteins can load and release a specific molecular cargo through a conformational change mechanism that can be regulated by different inputs 35 . By mimicking this mechanism we designed a DNA-based nanomachine that is able to load a DNA strand in a highly specific and stable fashion and release it only in the presence of a specific antibody. Results Design of an antibody-powered DNA nanomachine Our strategy to rationally design an antibody-driven DNA-based nanomachine takes advantage of triplex forming DNA sequences that are designed to recognize a specific DNA strand (blue in Fig. 
1 ) through the formation of a clamp-like structure that involves both Watson–Crick (–) and Hoogsteen (·) interactions ( Fig. 1 ) 36 . This clamp-like structure is conjugated at the two ends with a pair of antigens. Antibody binding to the two antigens on the nanomachine causes a conformational change that induces the triplex-complex opening (see for analogy antibody triggered stem-loop opening) 28 and energetically disrupts the less stable triplex-forming Hoogsteen interactions (·) thus destabilizing the nanomachine/cargo complex. As the Watson–Crick interactions in such complex are not strong enough to retain the cargo, this latter is released from the nanomachine ( Fig. 1 ). Figure 1: Working principle of antibody-powered DNA-based nanomachine. A DNA strand (black) labelled with two antigens (green hexagons) can load a nucleic acid strand (blue) through a clamp-like triplex-forming mechanism. The binding of a bivalent macromolecule (here an antibody) to the two antigens causes a conformational change that reduces the stability of the triplex complex with the consequent release of the loaded strand. Full size image Selection of DNA cargo strand Instrumental for our strategy, to observe the antibody-induced DNA cargo release, is the need to find an optimal thermodynamic trade-off that requires to meet the following main conditions. First, a strong difference in stability between the triplex conformation (containing both Watson–Crick and Hoogsteen interactions) and a simple duplex conformation (only Watson–Crick base-pairings). Second, the duplex conformation, under the chosen experimental conditions (for example, temperature and concentration range), should be unstable enough to allow release of the cargo. Finally, the triplex conformation should not be too stable so that bidentate binding to the nanomachine by the antibody would be allowed. To achieve this, we have studied DNA cargos of different length (thus leading to complexes of different stabilities) and tested them with a triplex-forming DNA nanomachine (involving both Watson–Crick and Hoogsteen interactions) and a control DNA nanomachine lacking the triplex forming portion ( Fig. 2a ). As expected 36 , because of the additional Hoogsteen interactions, for all cargos tested the triplex-forming DNA nanomachine shows a higher affinity (and thus stability) compared to the control nanomachine able to only form a duplex complex ( Fig. 2b–e ). We find that a 12-nt DNA cargo leads to the strongest difference in affinity between triplex and duplex formation under our experimental conditions ( Fig. 2f,g ). Using this DNA cargo we show that, while the complex formed with the triplex-forming nanomachine is stable at temperatures below 50 °C ( T m =52.1±0.5 °C), the complex obtained with the control nanomachine (only duplex) is partially unstable at room temperature and leads to an almost complete denaturation at temperatures close to 40 °C ( T m =37.0±0.5 °C) ( Fig. 2h,i ). In our next experiments we have thus employed a 12-nt DNA strand as our molecular cargo. Figure 2: Designing the antibody-powered nanomachine. To find the optimal DNA cargo length to observe the antibody-induced release from the nanomachine, we have compared the binding affinity of a triplex-forming nanomachine with that of a control nanomachine able to only form a duplex complex ( a ) using cargo strands of different length (13 nt ( b ), 12 nt ( c ), 11 nt ( d ) and 10 nt ( e )). 
We have observed the strongest difference in affinity (here depicted as the difference of the relative occupancy) between the triplex-forming nanomachine and the control nanomachine with the 12-nt DNA cargo ( f , g ). ( h , i ) Using the 12-nt DNA cargo, we have also performed melting denaturation experiments showing that, while the triplex complex is stable up to 50 °C ( T m =52.1±0.5 °C), the nanomachine/cargo complex solely based on Watson–Crick interactions (control) shows a melting temperature of 37.0±0.5 °C. The experiments in this figure were performed using a DNA nanomachine (either triplex-forming or control) labelled with a fluorophore/quencher pair (FAM and BHQ-1) so that the binding of the DNA cargo can be easily followed through the decrease or increase, respectively, of the fluorescence signal. The binding curve experiments were performed in 50 mM Na 2 HPO 4 , 150 mM NaCl and 10 mM MgCl 2 at pH 6.8, 37 °C at a concentration of nanomachine of 3 nM and adding increasing concentrations of cargo strand. Melting curve experiments were performed using the same buffer solution at an equimolar concentration (10 nM) of nanomachine and 12-nt cargo. Full size image Characterization of antibody-powered DNA nanomachine As a first test bed for the optimization of an antibody-powered DNA-based nanomachine we have conjugated the DNA-based triplex-forming nanomachine with two copies of the small-molecule hapten digoxigenin (Dig) at the 5′ and 3′ ends ( Fig. 3a ) and we have used as triggering input the anti-digoxigenin antibody (anti-Dig) ( Fig. 3a ). To monitor the release of the cargo we have also labelled the DNA strand cargo with a fluorophore and quencher at the two extremities ( Fig. 3a ). Because binding of such optically labelled DNA strand cargo to the triplex-forming DNA-based nanomachine causes a conformational stretch that brings the fluorophore faraway from the quencher, we can easily follow its load/release from the DNA-based nanomachine. More specifically, we observe a strong signal increase on loading and a consequent signal decrease when this cargo strand is released from the nanomachine. Figure 3: Antibody-powered DNA-based nanomachine. ( a ) We first used digoxigenin (Dig) as antigen and anti-Dig antibodies as molecular triggers of our nanodevices. The nucleic acid cargo strand (orange) is labelled with a fluorophore/quencher pair to easily follow its load/release from the nanomachine. ( b , c ) Kinetic profiles show triplex complex formation and subsequent cargo release at different concentrations of anti-Dig antibody. ( d ) The approach is highly specific and works well also in 90% serum (orange bar). ( e ) We can achieve reversible load and release of the molecular cargo by cyclically adding anti-Dig antibody and free Dig in a solution containing both the nanomachine and the cargo strand. ( f – j ) Comparable efficiency and results can be achieved using a nanomachine that is labelled with two molecules of DNP at the two ends and thus triggered with anti-DNP antibodies. ( k ) The two nanomachines can orthogonally work in the same solution without crosstalk. ( l ) Moreover, the cargo strand displaced on antibody binding can activate a toehold strand-displacement reaction. The experiments shown in this and in the following figures were performed in 50 mM Na 2 HPO 4 , 150 mM NaCl and 10 mM MgCl 2 at pH 6.8, 37 °C at an equimolar (50 nM) concentration of nanomachine and cargo unless otherwise noted. 
Cycling experiments were performed by adding the concentrations of antibody indicated in e, j and a concentration of 300 nM of free Dig or DNP. The experimental values represent mean±s.d. of three separate measurements. Full size image Antibody binding to the nanomachine makes it possible to finely modulate the release of the cargo strand. By adding increasing concentrations of anti-Dig antibody to a solution containing the nanomachine/cargo complex, for example, we can release the DNA cargo in a finely controlled manner and achieve an almost complete release (that is, 90±2%) at a 100 nM anti-Dig antibody concentration (Fig. 3b,c). The antibody-induced release is rapid, and we achieve equilibration in <60 s. A native polyacrylamide gel electrophoresis (PAGE) experiment (Supplementary Fig. 1) further supports the release of the cargo strand on antibody binding. Of note, under our experimental conditions the nanomachine binds the cargo with high yield: in the absence of antibody more than 94% of the cargo is bound to the nanomachine (a value obtained from the affinity constant of the interaction between the 12-base cargo and the Dig-labelled clamp nanomachine, K 1/2 =0.20±0.05 nM) and negligible spontaneous leakage of the cargo strand is observed (Supplementary Fig. 2). Moreover, a control experiment using a nanomachine containing only a single Dig hapten shows that binding of the anti-Dig antibody does not lead to any release of the cargo strand (Supplementary Fig. 3). The fitted curve of % cargo release versus antibody concentration (Fig. 3c) appears to be bilinear rather than hyperbolic, suggesting that we are in the 'ligand-depletion' regime, as the affinity of the antibody for its antigen is well below the 50 nM concentration of the nanomachine employed in our experiment. Consistent with this, the fitted curve gives a K 1/2 (the antibody concentration at which the % of cargo release achieved is half the maximum cargo release) of 23±2 nM, which is within error of the 25 nM (half of the 50 nM nanomachine concentration) expected for a stoichiometric 1:1 nanomachine:antibody ratio. To further support this, we have performed antibody-induced cargo release experiments at different concentrations of nanomachine (ranging from 20 to 100 nM) and found that the resulting K 1/2 values were always within error of the values expected for a 1:1 stoichiometry (half of the nanomachine concentration employed; Supplementary Fig. 4). To confirm the proposed mechanism of our antibody-controlled nanomachine, we have measured the rate of cargo release in the presence and absence of the antibody. In the presence of antibody (100 nM), the rate of cargo release ( k Ab =0.036 s −1 ) is increased by ∼8-fold compared to that in the absence of antibody ( k triplex =0.0047 s −1 ). Of note, the rate of release in the presence of antibody is similar to the cargo release rate of a duplex control nanomachine ( k duplex =0.058 s −1 ) (Supplementary Fig. 5). Moreover, the antibody-induced cargo release rate is proportional to the concentration of antibody (Supplementary Fig. 6), suggesting that antibody binding represents the rate-limiting step of the cargo-release mechanism of the nanomachine. Finally, we also performed binding curves between the labelled cargo strand and the nanomachine in the absence and presence of the specific input antibody (anti-Dig antibody).
We found that, as expected, the binding of the antibody to the nanomachine causes a conformational change that affects its ability to form a triplex complex with the cargo strand. As a result, the observed affinity of the reaction leading to the cargo/nanomachine complex gets poorer in the presence of the antibody ( Supplementary Fig. 7 ). These results support the hypothesis that our nanomachine undergoes a conformational change upon binding to the antibody that affects the affinity (and thus release rate) for the cargo strand. Because the conformational change that causes the DNA cargo release is solely induced by the binding of the specific antibody, this effect is highly specific. We demonstrate that no release of the cargo is observed at saturating concentrations of different non-specific antibodies and proteins ( Fig. 3d , Supplementary Fig. 8 ). A control experiment, employing a DNA-based nanomachine labelled with a single copy of Dig, also provides a confirmation that antibody-induced cargo release requires bivalent binding of the antibody to the nanomachine ( Fig. 3d , control). Of note, the binding-induced conformational change that drives cargo release in this nanomachine renders it selective enough to be used in complex sample matrices. The nanomachine, for example, when deployed in 90% bovine blood serum (as a safe and convenient proxy for human samples) shows a cargo release efficiency comparable to that observed in pure buffer ( Fig. 3d , orange bar, Supplementary Fig. 9 ). The nanomachine also works in 100% bovine blood serum although, as expected due to the different pH which affects the stability of the triplex state, with a lower efficiency ( Supplementary Fig. 10 ). The nanomachine is also able to load and release the molecular cargo in a reversible way. We demonstrate this by cyclically adding the specific anti-Dig antibody and the free Dig in a solution containing an equimolar concentration of the nanomachine and DNA cargo ( Fig. 3e ). The concentration of free Dig (that is, 300 nM) needed to achieve antibody release from the nanomachine and loading of the cargo strand is not as high as expected in the case where a monovalent epitope (free Dig) competes with a bivalent epitope (nanomachine). We note, however, that the presence of the cargo strand strongly supports this competition thus presumably facilitating Dig-induced antibody release from the nanomachine. The design principle of our antibody-powered DNA nanomachine is highly generalizable and can be easily adapted to other antibodies via the expedient of changing the employed recognition element. To demonstrate this, we have fabricated a second DNA nanomachine construct conjugated with a different antigen (that is, dinitrophenol, DNP) and show that anti-DNP antibodies can trigger the release of a DNA strand cargo with an efficiency, specificity and response time comparable to those observed with the anti-Dig-powered DNA nanomachine ( Fig. 3f–j , Supplementary Figs 11 and 12 ). Because they specifically respond to their target antibody, different nanomachines can be used orthogonally in the same solution without crosstalk. To demonstrate this, we have employed two different DNA nanomachines responding to anti-Dig and anti-DNP antibodies, respectively ( Fig. 3k ) in the same solution. Each nanomachine can load and release a DNA strand cargo labelled with a different fluorophore (FAM and Quasar) so that their load/release can be followed separately. 
The addition of one of the two antibodies in a solution containing both nanomachines causes the release of the specific DNA cargo and only in the presence of both antibodies we observe the release of the two cargos ( Fig. 3k ). Activation of a strand-displacement reaction by antibody binding The cargo strand released by antibody binding can in principle be used to trigger other chemical or biological functions. In this work we have focused our attention on the toehold strand-displacement reaction, a process through which two DNA strands hybridize with each other displacing one (or more) prehybridized strands. Such reaction has been intensively employed for a wide range of possible applications that include controlled building of complex DNA nanostructures 25 , 37 , control of gene transcription 38 and biosensing. 39 To demonstrate toehold strand displacement reaction induced by the antibody released cargo we have designed a 24-nt cargo strand that can trigger a displacement reaction in a preformed target duplex complex. The cargo strand is composed of a 12-nt portion complementary to the nanomachine that also recognizes the toehold binding domain of the preformed target duplex ( Fig. 3l , orange portion) and of an additional 12-nt domain ( Fig. 3l , blue portion) that acts as invading strand during the displacement reaction. If the cargo is loaded on the nanomachine its binding to the preformed complex cannot occur and thus no displacement reaction is observed. On addition of the antibody the cargo is released and the strand displacement reaction can proceed ( Fig. 3l ). This effect is specific and no strand displacement is observed on addition of a non-specific antibody ( Supplementary Fig. 13 ). Modular antibody-powered DNA nanomachine A possible limitation of our approach is represented by the need to conjugate the antibody-powered DNA nanomachine with two antigens, a task that could prove challenging from a synthetic point of view. In response to this limitation we have designed a modular version of our nanomachine ( Fig. 4a ). To do this, we have added to the two ends of the same triplex-forming nanomachine used before two 18-nt DNA tails that can hybridize an antigen-conjugated complementary strand. Such modular DNA nanomachine is thus composed of: (i) a loading module that contains the recognition portion for the DNA strand cargo (black strand in Fig. 4a ) and (ii) the triggering module that contains the recognition elements for the specific antibody (orange strand in Fig. 4a ). The modular antibody-powered DNA nanomachine designed in this way shows a fast kinetic of release ( Fig. 4b ) and an efficiency that is comparable to that of the non-modular counterpart. Also in this case we demonstrate cargo release by native PAGE experiments ( Supplementary Fig. 14 ). We show that we can modulate the amount of released cargo by varying the concentration of the triggering antibody ( Fig. 4c ) and we achieve a high specificity and efficiency even in complex media (that is, 90% serum) ( Fig. 4d ). Finally, also with the modular nanomachine we observe a reversible load and release activity by cyclically adding the triggering antibody and the free antigen in a solution containing both the nanomachine and the cargo strand ( Fig. 4e ). The modular nature of this nanomachine allows an easier generalization to other, more complex, recognition elements (and thus triggering antibodies). To demonstrate this, we have used as our recognition element the DNP antigen ( Fig. 
4f–j ) and a short peptide (p17, 12 residues) that is recognized by HIV diagnostic antibodies ( Fig. 4k–o ). In both cases the effect of the antibody is rapid and specific, and we observe efficient loading and release of the molecular cargo even in 90% serum ( Figs 4i,n ). Moreover, the modularity of our approach makes it easy to design a nanomachine that behaves like an AND-logic gate, whose function is triggered only by the concomitant presence of two different antibodies. To demonstrate this, we fabricated a single nanomachine exhibiting two different recognition elements, Dig and DNP ( Fig. 4p ). The addition of increasing concentrations of either of the targeted antibodies in isolation does not lead to any DNA cargo release ( Fig. 4q ). As expected, however, cargo release is achieved when the second target antibody is added ( Fig. 4q ). Finally, the modular nature of our approach also allows the recognition element to be exchanged reversibly, so that the same nanomachine can be triggered by different antibodies simply by changing the recognition element. To do this, we built the nanomachine using a slightly shorter DNA strand conjugated with the recognition element ( Fig. 4r , orange strand). This makes it possible to displace the first recognition-element-conjugated DNA strand, using a standard DNA strand-displacement reaction, and to substitute it with a second strand conjugated with a different recognition element. As a proof of principle of this strategy, we first used a nanomachine containing Dig as the recognition element ( Fig. 4r ). In the presence of the anti-Dig antibody the DNA cargo is released as expected ( Fig. 4r ). The addition of a strand conjugated with DNP ( Fig. 4r , grey strand) displaces the Dig-conjugated strand together with the anti-Dig antibody and restores cargo loading. This nanomachine can then be triggered by the anti-DNP antibody, causing a new release of the DNA cargo ( Fig. 4r ). Figure 4: Modular antibody-powered DNA nanomachine. Modular nanomachines employing three different antigens: digoxigenin ( a – e ), dinitrophenol (DNP) ( f – j ) and a 12-residue epitope (p17 peptide) excised from the HIV-1 matrix protein ( k – o ). All these nanomachines are triggered by their specific target antibodies while exhibiting no significant response to high concentrations of non-specific targets. ( p ) The modular antibody-powered nanomachine can be adapted into an AND-logic gate that releases its cargo only in the simultaneous presence of two different antibodies. To demonstrate this, we modified a modular nanomachine with the recognition elements Dig and DNP. ( q ) Owing to the steric-hindrance mechanism that disrupts triplex-forming interactions, we observe cargo release only in the simultaneous presence of both anti-Dig and anti-DNP antibodies. ( r ) The modular antibody-powered nanomachine also allows the recognition element to be changed reversibly, on the fly, via displacement and substitution of the antigen-conjugated strand (orange and grey). In this way we achieve controlled release of the DNA cargo with two distinct antibodies in the same solution. The experiments reported here were performed in 50 mM Na 2 HPO 4 , 150 mM NaCl and 10 mM MgCl 2 at pH 6.8 and 37 °C, at equimolar (50 nM) concentrations of nanomachine, each antigen-conjugated strand and cargo. Cycling experiments were performed by adding 100 nM antibody and 300 nM Dig, DNP or p17 peptide.
The experimental values represent mean±s.d. of three separate measurements. Full size image Discussion Although several DNA-based platforms have been demonstrated for the detection of specific antibodies 28 , 29 , 30 , only a few examples have been reported to date in which a function of a DNA-based nanomachine is controlled by these important biomolecules 31 , 32 . Motivated by this, and taking inspiration from naturally occurring transport factors, proteins that bind a molecular cargo and release it only on an input-induced conformational change 33 , 34 , 35 , we have designed here a new class of DNA-based nanomachines that load and release a molecular cargo on binding of a specific target antibody. The system we propose is highly versatile and, in principle, generalizable to any antibody for which an antigen can be attached to a DNA anchoring strand. In support of this claim, we have demonstrated that our approach extends to three different triggering antibodies and that the effect remains specific and selective even in complex media (90% serum). We have also demonstrated that our nanomachine can reversibly load and release the cargo on cyclic addition of the specific antibody and of the free antigen, and that the modularity of our approach allows the design of nanomachines that respond to different antibodies orthogonally or whose recognition module can be substituted on the fly, as needed. While many examples have been reported in which the release of DNA strands is controlled by molecular cues (for example, pH 40 , proteins 41 and so on), the possibility of using antibodies as the triggering input to release a specific DNA strand might open new routes in the field of DNA nanotechnology. For example, because antibodies represent a wide class of clinical and diagnostic markers, the antibody-powered DNA nanomachines we have developed here may be useful in a range of applications, including point-of-care diagnostics, controlled drug release and in vivo imaging. Finally, we have demonstrated that this strategy can be used to activate a toehold strand-displacement reaction on antibody binding. Because the toehold strand-displacement process has been used to assemble dynamic and static DNA-based nanostructures 25 , 37 , it should in principle be straightforward to rationally design a DNA self-assembly process in which DNA nanostructures are assembled or disassembled using potentially any antibody as the triggering molecular input. This would allow the design and construction of novel DNA nanostructures whose diagnostic or drug-delivery function is triggered by specific diagnostic or clinically relevant antibodies. Methods Chemicals Sheep polyclonal anti-Dig antibodies were purchased from Roche Diagnostic Corporation (Germany); mouse monoclonal anti-DNP antibodies from Sigma-Aldrich (USA); murine monoclonal anti-HIV antibodies from Zeptometrix Corporation (USA); and rat monoclonal anti-FLAG antibodies from Novus Biologicals (UK). All the antibodies were aliquoted and stored at 4 °C for immediate use or at −20 °C for long-term storage. Bovine serum albumin (A4503), fetal bovine serum (F0804), digoxigenin (D9026) and 2,4-dinitrophenol (D198501) were purchased from Sigma-Aldrich (Italy).
Oligonucleotides and DNA-based nanomachines High-performance liquid chromatography-purified oligonucleotides were purchased from IBA (Gottingen, Germany) or Biosearch Technologies (Risskov, Denmark). The DNA strand cargos or the DNA nanomachines were modified with FAM (5-carboxyfluorescein) or Quasar670 and BHQ-1 (black hole quencher 1) or BHQ-2 (black hole quencher 2). The sequences and modification schemes are as follows: Triplex-forming DNA-based nanomachine: 5′-(FAM) TCTCTCCTTTCTCCTGTTTCTCCTCTTTCCTCTCT (BHQ1)-3′ Duplex-forming DNA-based nanomachine (control): 5′-(FAM) TCTCTCCTTTCTCCTGTTTCTTTTTTTTTTTTTTT (BHQ1)-3′ DNA cargo 13 nt: 5′-GAGAAAGGAGAGA-3′ DNA cargo 12 nt: 5′-AGAAAGGAGAGA-3′ DNA cargo 11 nt: 5′-GAAAGGAGAGA-3′ DNA cargo 10 nt: 5′-AAAGGAGAGA-3′ Anti-Dig-powered DNA-based nanomachine: 5′-(Dig) TCTCTCCTTTCTGTTTCTCTTTCCTCTCT (Dig)-3′ Anti-DNP-powered DNA-based nanomachine: 5′-(DNP) TCTCTCCTTTCTGTTTCTCTTTCCTCTCT (DNP)-3′ DNA cargo 12 nt: 5′-(FAM) AGAAAGGAGAGA (BHQ1)-3′ DNA cargo 11 nt: 5′-(FAM) GAAAGGAGAGA (BHQ1)-3′ DNA cargo 10 nt: 5′-(FAM) AAAGGAGAGA (BHQ1)-3′ Anti-Dig-powered DNA-based nanomachine (single-labelled control): 5′-(Dig) TCTCTCCTTTCTGTTTCTCTTTCCTCTCT-3′ Modular DNA-based nanomachine: 5′- ATGGCATTAACCTTGCT TCTCTCCTTTCTGTTCTCTTTCCTCTCT AGGTTCATCATCAACTAG -3′ Here the portion in italics represents tail 1, where the first antigen-conjugated strand hybridizes, and the portion in bold represents tail 2, where the second antigen-conjugated strand hybridizes. We also designed a nanomachine containing a frame inversion at the junction of one of its two tails, with the following sequence: Modular DNA-based nanomachine (frame inversion): 5′- CAAGAATAAAACGCCACTGT TCTCTCCTTTCTGTTCTCTTTCCTCTCT-3′–3′- GTCACCGCAAAATAAGAACA -5′ Here the portions in italics represent the two tails; they have the same sequence but are oriented head-to-head owing to the indicated (3′–3′) frame inversion. This nanomachine allows a single recognition-element-conjugated strand to bind both tails, thus lowering production costs. Recognition-element-conjugated oligonucleotides (DNA/PNA as denoted below) were used as received from the appropriate vendors. The sequences and modification schemes are as follows: Dig-labelled strand (tail 1): 5′-(Dig) AGCAAGGTTAATGCCAT-3′ Dig-labelled strand (tail 2): 5′-CTAGTTGATGATGAACCT (Dig)-3′ DNP-labelled strand (tail 1): 5′-(DNP) AGCAAGGTTAATGCCAT-3′ DNP-labelled strand (tail 2): 5′-CTAGTTGATGATGAACCT (DNP)-3′ In the sequences above, Dig was introduced onto the DNA via EDC/NHS coupling to an amine attached via a 5-carbon linker on the 5′ end or on the 3′ end. DNP was attached via a triethylene glycol (TEG) spacer arm on either the 5′ or the 3′ terminus of the appropriate oligonucleotide. When using p17 as the recognition element we employed a p17–PNA chimera, as this is more convenient to fabricate than the equivalent DNA–polypeptide chimera. The p17-conjugated sequence we employed is as follows: p17-labelled strand: N term − ELDRWEKIRLRP −CAGTGGCGTTTTATTCT-C term The sequence in italics represents the amino-acid (standard one-letter code) sequence of the polypeptide antigen. This peptide-labelled strand was used with the modular DNA-based nanomachine containing the frame inversion (see above). Antibody-induced toehold strand-displacement reaction strands To activate the toehold strand-displacement reaction with the released cargo ( Fig.
3l ) we used the anti-Dig-powered DNA-based nanomachine (see sequence above) and the following sequence as the cargo strand: Invading cargo strand: 5′- AGAAAGGAGAGA AAGGAAAGAGGA -3′ In this sequence the portion in bold (12 nucleotides) represents the domain recognized by the DNA nanomachine, while the portion in italics represents the strand-invading domain. The two strands forming the target duplex used for this experiment are labelled with Quasar570 and Quasar670 and have the following sequences: Strand 1: 5′-AAGGAAAGAGGAAGAAAA (Quasar570)-3′ Strand 2: 5′-(Quasar670) TTTTCTTCCTCTTTCCTTTCTCTCCTTTCT-3′ Substitution and displacement strands The following sequences and strands were used to reversibly change the recognition element on the fly via the displacement and substitution of the antigen-conjugated strand ( Fig. 4r ): DNA-based nanomachine: 5′-CTTCGAATGGCATTAACCTTGCTTCTCTCCTTTCTGTTCTCTTTCCTCTCTAGGTTCATCATCAACTAGCTTTCT-3′ Dig-labelled strand (tail 1): 5′-(Dig) AGCAAGGTTAATGCCAT-3′ Dig-labelled strand (tail 2): 5′-CTAGTTGATGATGAACCT (Dig)-3′ DNP-labelled displacement strand (tail 1): 5′-(DNP) AGCAAGGTTAATGCCATTCGAAG-3′ DNP-labelled displacement strand (tail 2): 5′-AGAAAGCTAGTTGATGATGAACCT (DNP)-3′ Fluorescence experiments Fluorescence experiments were conducted at pH 6.8 in 50 mM Na 2 HPO 4 buffer, 150 mM NaCl, 10 mM MgCl 2 at 37 °C in a 100 μl cuvette (total solution volume 100 μl). Equilibrium fluorescence measurements were obtained using a Cary Eclipse fluorimeter with excitation at 490 (±5) nm and acquisition at 517 (±5) nm (for DNA strands labelled with FAM), or with excitation at 647 (±5) nm and acquisition at 655 (±5) nm (for DNA strands labelled with Quasar670). Melting curves were obtained by preparing a 100 μl solution containing 10 nM of DNA-based nanomachine and 10 nM of DNA cargo strand and allowing 10 min for the reaction before temperature ramping. Temperature was ramped between 20 and 70 °C at 1 °C min −1 . Data were normalized on a scale from 0.01 (set as background signal) to 1. In Fig. 2h , we subtracted the normalized values of the control nanomachine from 1.01 to better compare triplex-forming nanomachine (signal-on) and control nanomachine (signal-off) results. Binding curves were obtained by preparing a 100 μl solution containing 50 nM of DNA-based nanomachine and 50 nM of DNA cargo strand and by sequentially adding increasing concentrations of the target antibody. Binding curves of the modular DNA-based nanomachines were obtained by preparing a 100 μl solution containing 50 nM of DNA-based nanomachine, 50 nM of DNA cargo strand and 50 nM of each antigen-conjugated strand and by sequentially increasing the concentration of the target antibody. For each concentration, the fluorescence signal was recorded every 10 min until it reached equilibrium. For experiments performed in serum we mixed the serum (90%) with a 10 × buffer (10%) (500 mM Na 2 HPO 4 , 1.5 M NaCl and 100 mM MgCl 2 at pH 6.8) so that the final ionic strength of the solution was similar to that used in the other experiments (50 mM Na 2 HPO 4 , 150 mM NaCl and 10 mM MgCl 2 , pH 6.8) at 37 °C. For strand-displacement experiments, we used a 10 nM concentration of the target duplex complex and followed the signal of the released Quasar570-labelled strand with excitation at 540 (±5) nm and acquisition at 566 (±5) nm.
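As an aside for readers who want to check the domain architecture described above, the complementarity relations between the invading cargo strand, the anti-Dig nanomachine and the target duplex can be verified directly from the listed sequences. The short Python sketch below does this; the helper function and variable names are ours, not part of the paper.

```python
# Minimal sketch: checking the domain architecture of the invading cargo
# strand against the sequences listed in this section.

def reverse_complement(seq: str) -> str:
    """Return the reverse complement of a plain ACGT DNA sequence."""
    pairs = {"A": "T", "T": "A", "G": "C", "C": "G"}
    return "".join(pairs[base] for base in reversed(seq))

# Sequences copied from the Methods (5'->3'; labels/modifications omitted).
nanomachine = "TCTCTCCTTTCTGTTTCTCTTTCCTCTCT"    # anti-Dig nanomachine
cargo = "AGAAAGGAGAGAAAGGAAAGAGGA"               # 24-nt invading cargo strand
strand2 = "TTTTCTTCCTCTTTCCTTTCTCTCCTTTCT"       # Quasar670-labelled strand

recognition, invading = cargo[:12], cargo[12:]   # bold / italic domains

# The 12-nt recognition domain pairs (antiparallel) with the nanomachine arm
# and with the 12-nt single-stranded toehold left on strand 2 of the duplex.
assert reverse_complement(recognition) in nanomachine
assert strand2.endswith(reverse_complement(recognition))  # the toehold

# The 12-nt invading domain pairs with the adjacent duplex region it invades.
assert reverse_complement(invading) in strand2
print("All domain complementarities check out.")
```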
For the binding curves, the observed fluorescence in the presence of different concentrations of antibody, F [antibody] , was fitted using the following four-parameter logistic equation (equation (1)) 42 :

$$F_{[\mathrm{antibody}]} = F_{\min} + \left(F_{\max} - F_{\min}\right)\frac{[\mathrm{Antibody}]^{n_{\mathrm{H}}}}{K_{1/2}^{\,n_{\mathrm{H}}} + [\mathrm{Antibody}]^{n_{\mathrm{H}}}} \qquad (1)$$

where F min and F max are the minimum and maximum fluorescence values, K 1/2 is the equilibrium antibody concentration at half-maximum signal, n H is the Hill coefficient and [Antibody] is the concentration of the specific antibody added. This model is not necessarily physically relevant, but it does a good (empirical) job of fitting effectively bi-linear binding curves such as those we obtain for most of our nanomachines, providing a convenient and accurate means of estimating K 1/2 . The signals obtained in Fig. 2 with the triplex-forming and control nanomachines were normalized on a 0–1 scale to allow more ready interpretation of the results. More specifically, the relative occupancy (defined as the fraction of nanomachine bound to the cargo) was plotted against the cargo concentration. To obtain the relative occupancy, we considered the maximum signal of the triplex-forming nanomachine as the signal of the unbound nanomachine (occupancy=0), while the minimum signal was considered as the signal of the completely bound nanomachine (occupancy=1). Conversely, for the duplex-control nanomachine ( Fig. 2a ) we considered the minimum signal as the signal of the unbound nanomachine (occupancy=0) and the maximum signal as the signal of the completely bound nanomachine (occupancy=1). In the other figures the % of cargo release was plotted against antibody concentration. In this case, the maximum cargo release (100%) was taken as the signal corresponding to the free cargo (in the absence of the nanomachine), while the minimum cargo release (0%) was taken as the signal of the completely bound cargo (in the presence of a saturating amount of nanomachine). Native PAGE experiments A native polyacrylamide gel (18%) was first incubated with running buffer (1 × TAE solution, pH 6.5) for 1 h at 37 °C. A volume of 30 μl of each DNA sample was mixed with 3.5 μl of glycerol, and the mixture was then loaded onto the gel for electrophoresis. The native PAGE was carried out in a Mini-PROTEAN Tetra cell electrophoresis unit (Bio-Rad) at 37 °C, using 1 × TAE buffer at pH 6.5 and a constant voltage of 50 V for 2 h 30 min (using a Bio-Rad PowerPac Basic power supply). After 30 min of staining in 1 × SYBR gold (Invitrogen) (dissolved in a 1 × TAE buffer at pH 8.0), the gel was scanned with a Gel Doc XR+ system (Bio-Rad). In these experiments we used the following modified cargo strand containing the usual recognition domain and a 16-nt hairpin tail (bold below) that allows dye intercalation: Cargo strand Gel: 5′- CTGCGTTTCGCAGTTT AGAAAGGAGAGA-3′ Data availability Data supporting the findings of this study are available within the article (and its Supplementary Information files) and from the corresponding author on reasonable request. Additional information How to cite this article: Ranallo, S. et al . Antibody-powered nucleic acid release using a DNA-based nanomachine. Nat. Commun. 8, 15150 doi: 10.1038/ncomms15150 (2017). Publisher’s note: Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
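For illustration, equation (1) above can be fitted with standard curve-fitting tools. The sketch below uses SciPy on synthetic data standing in for the equilibrium fluorescence readings; all numerical values are invented for the example and are not measurements from this work.

```python
# Sketch: fitting the four-parameter logistic (Hill-type) model of
# equation (1) to a binding curve. Data here are synthetic; in the paper the
# y-values would be equilibrium fluorescence at each antibody concentration.
import numpy as np
from scipy.optimize import curve_fit

def logistic4(conc, f_min, f_max, k_half, n_h):
    """F(c) = F_min + (F_max - F_min) * c^nH / (K_1/2^nH + c^nH)."""
    return f_min + (f_max - f_min) * conc**n_h / (k_half**n_h + conc**n_h)

# Invented example data: antibody concentration (nM) vs. fluorescence (a.u.).
conc = np.array([0.1, 0.3, 1.0, 3.0, 10.0, 30.0, 100.0, 300.0])
noise = np.random.default_rng(0).normal(0, 0.01, conc.size)
fluo = logistic4(conc, 1.0, 0.2, 8.0, 1.0) + noise

params, _ = curve_fit(logistic4, conc, fluo, p0=[1.0, 0.2, 10.0, 1.0])
f_min, f_max, k_half, n_h = params
print(f"K_1/2 ~ {k_half:.1f} nM, Hill coefficient ~ {n_h:.2f}")
```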
An international team of researchers from the University of Rome Tor Vergata and the University of Montreal has reported, in a paper published this week in Nature Communications, the design and synthesis of a nanoscale molecular slingshot made of DNA that is 20,000 times smaller than a human hair. This molecular slingshot could "shoot" and deliver drugs at precise locations in the human body once triggered by specific disease markers. The molecular slingshot is only a few nanometres long and is composed of a synthetic DNA strand that can load a drug and then effectively act as the rubber band of the slingshot. The two ends of this DNA "rubber band" contain two anchoring moieties that can specifically stick to a target antibody, a Y-shaped protein expressed by the body in response to different pathogens such as bacteria and viruses. When the anchoring moieties of the slingshot recognize and bind to the arms of the target antibody, the DNA "rubber band" is stretched and the loaded drug is released. "One impressive feature about this molecular slingshot," says Francesco Ricci, Associate Professor of Chemistry at the University of Rome Tor Vergata, "is that it can only be triggered by the specific antibody recognizing the anchoring tags of the DNA 'rubber band'. By simply changing these tags, one can thus program the slingshot to release a drug in response to a variety of specific antibodies. Since different antibodies are markers of different diseases, this could become a very specific weapon in the clinician's hands." "Another great property of our slingshot," adds Alexis Vallée-Bélisle, Assistant Professor in the Department of Chemistry at the University of Montreal, "is its high versatility. For example, until now we have demonstrated the working principle of the slingshot using three different trigger antibodies, including an HIV antibody, and employing nucleic acids as model drugs. But thanks to the high programmability of DNA chemistry, one can now design the DNA slingshot to 'shoot' a wide range of therapeutic molecules." "Designing this molecular slingshot was a great challenge," says Simona Ranallo, a postdoctoral researcher in Ricci's team and principal author of the new study. "It required a long series of experiments to find the optimal design, which keeps the drug loaded in the 'rubber band' in the absence of the antibody, without affecting too much its shooting efficiency once the antibody triggers the slingshot." The group of researchers is now eager to adapt the slingshot for the delivery of clinically relevant drugs, and to demonstrate its clinical efficacy. "We envision that similar molecular slingshots may be used in the near future to deliver drugs to specific locations in the body. This would drastically improve the efficiency of drugs as well as decrease their toxic secondary effects," concludes Ricci. The next step in the project is to target a specific disease and drug for which the therapeutic slingshot can be adapted for testing on cells in vitro, prior to testing in mice.
10.1038/ncomms15150
Medicine
Researchers discover how hormones define brain sex differences
Jessica Tollkuhn, Epigenomic organization and activation of brain sex differences, Nature (2022). DOI: 10.1038/s41586-022-04686-1. www.nature.com/articles/s41586-022-04686-1 Journal information: Nature
https://dx.doi.org/10.1038/s41586-022-04686-1
https://medicalxpress.com/news/2022-05-hormones-brain-sex-differences.html
Abstract Oestradiol establishes neural sex differences in many vertebrates 1 , 2 , 3 and modulates mood, behaviour and energy balance in adulthood 4 , 5 , 6 , 7 , 8 . In the canonical pathway, oestradiol exerts its effects through the transcription factor oestrogen receptor-α (ERα) 9 . Although ERα has been extensively characterized in breast cancer, the neuronal targets of ERα, and their involvement in brain sex differences, remain largely unknown. Here we generate a comprehensive map of genomic ERα-binding sites in a sexually dimorphic neural circuit that mediates social behaviours. We conclude that ERα orchestrates sexual differentiation of the mouse brain through two mechanisms: establishing two male-biased neuron types and activating a sustained male-biased gene expression program. Collectively, our findings reveal that sex differences in gene expression are defined by hormonal activation of neuronal steroid receptors. The molecular targets we identify may underlie the effects of oestradiol on brain development, behaviour and disease. Main In mammals, gonadal steroid hormones regulate sex differences in neural activity and behaviour. These hormones establish sex-typical neural circuitry during critical periods of development and activate the display of innate social behaviours in adulthood. Among these hormones, oestradiol is the principal regulator of brain sexual differentiation in mice. In males, the testes briefly activate at birth, generating a sharp rise in testosterone that subsides within hours 10 . Neural aromatase converts circulating testosterone to 17β-oestradiol, which acts through ERα in discrete neuronal populations to specify sex differences in cell number and connectivity 1 , 3 , 11 . Despite extensive characterization of the neural circuits controlling sex-typical behaviours 12 , 13 , the underlying genomic mechanisms by which steroid hormone receptors act in these circuits remain unknown. Recent advancements in low-input and single-cell chromatin profiling methods have provided transformative insights into how transcription factors (TFs) regulate gene expression in small numbers of cells 14 . We set out to use these methods to discover the neuronal genomic targets of ERα and how they coordinate brain sexual differentiation. Genomic targets of ERα in the brain To determine the genomic targets of ERα in the brain, we used an established hormone starvation and replacement paradigm that reproducibly elicits sex-typical behaviours 2 and replicates the medium conditions required to detect ERα genomic binding in cell lines 15 . At 4 h after treatment with oestradiol benzoate (E2) or vehicle control, we profiled ERα binding in three interconnected limbic brain regions in which ERα regulates sex-typical behaviours: the posterior bed nucleus of the stria terminalis (BNSTp), medial pre-optic area and posterior medial amygdala 11 , 12 , 16 (Fig. 1a ). We used the low-input TF profiling method CUT&RUN, which we first validated in MCF-7 breast cancer cells by comparing to a previous dataset for chromatin immunoprecipitation with sequencing (ChIP–seq) of ERα (Extended Data Fig. 1 ). We detected 1,930 E2-induced ERα-bound loci in the brain (Fig. 1b , Extended Data Fig. 2 and Supplementary Table 1 ). The most enriched TF-binding motif in these peaks was the oestrogen response element (ERE), the canonical binding site of oestrogen receptors (Extended Data Fig. 2c, d ). 
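To make the motif-enrichment result concrete: scans of this kind test whether peak sequences contain the ERE more often than background sequences do. Real analyses score full peak sets against position weight matrices; the toy Python sketch below only checks for the full ERE consensus, GGTCAnnnTGACC, in invented example sequences.

```python
# Toy sketch of the motif scan underlying ERE enrichment analyses.
import re

# The full ERE consensus GGTCAnnnTGACC is palindromic, so scanning one strand
# suffices for the full-consensus motif (degenerate half-sites would not be
# caught by this simple pattern).
ERE = re.compile(r"GGTCA[ACGT]{3}TGACC")

def has_ere(peak_seq: str) -> bool:
    """True if the peak sequence contains a full consensus ERE."""
    return ERE.search(peak_seq.upper()) is not None

peaks = ["TTGGTCAGCATGACCTT", "TTAAAGGAAGAAAGGA"]  # invented examples
print([has_ere(p) for p in peaks])                  # [True, False]
```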
Comparison of these ERα-binding sites to those previously detected in peripheral mouse tissues revealed that most are specific to the brain (Fig. 1c and Extended Data Fig. 2f ). Brain-specific ERα binding events were uniquely enriched for synaptic and neurodevelopmental disease Gene Ontology terms, including neurotransmitter receptors, ion channels, neurotrophin receptors and extracellular matrix genes (Fig. 1d , Extended Data Fig. 2h–k and Supplementary Table 1 ). We also found evidence supporting direct crosstalk between oestradiol and neuroprotection, as ERα directly binds loci for the neurotrophin receptors Ntrk2 (also known as Trkb ) and Ntrk3 (Extended Data Fig. 2k and Supplementary Table 1 ). Moreover, ERα targets the genes encoding androgen and progesterone receptors ( Ar and Pgr ; Supplementary Table 1 ). Fig. 1: Genomic targets of ERα in sexually dimorphic neuronal populations. a , Coronal sections containing sexually dimorphic brain areas used for ERα CUT&RUN. MPOA, medial pre-optic area; BNSTp, posterior bed nucleus of the stria terminalis; MeAp, posterior medial amygdala. b , Line plots (top) and heatmaps (bottom) of mean IgG and ERα CUT&RUN (C&R) CPM ±1 kb around E2-induced ERα CUT&RUN peaks (DiffBind edgeR, P adj < 0.1). The heatmaps are sorted by E2 ERα CUT&RUN signal. Colour scale is counts per million (CPM). Veh, vehicle. c , Cross-tissue ERα comparison, showing the proportion of ERα peaks detected specifically in brain. d , Top Gene Ontology biological process terms associated with genes nearest to brain-specific or shared (≥4 other tissues) ERα CUT&RUN peaks (clusterProfiler, P adj < 0.1). e , Combined sex E2 versus vehicle RNA-seq in BNSTp Esr1 + cells; light grey and red dots (DESeq2, P adj < 0.1), dark grey and red dots (DESeq2, P < 0.01), purple dots (validated by in situ hybridization (ISH)). FC, fold change. Positive FC is E2-upregulated, negative FC is E2-downregulated. f , Images (left panels) and quantitative analysis (right panels) of ISH for select genes induced by E2 in both sexes. Boxplot centre, median; box boundaries, first and third quartiles; whiskers, 1.5 × IQR from boundaries. Two-way analysis of variance: Brinp2 P = 0.0373, Rcn1 P = 0.0307, Enah P = 0.0003, Tle3 P = 0.0001; n = 4 per condition; scale bar, 200 µm. g , MA plot of E2-regulated ATAC–seq peaks in BNSTp Esr1 + cells; red dots are E2-open peaks (DiffBind edgeR, log 2 [FC] > 1, P adj < 0.05), grey dots are E2-close peaks (DiffBind edgeR, log 2 [FC] < −1, P adj < 0.05). h , Example ERα peaks at E2-induced genes. Top left number is the y-axis range in CPM. Shaded band indicates peak region. Source Data Full size image To determine the effects of ERα binding on gene expression and chromatin state, we focused on a single brain region, the BNSTp, given its central role in the regulation of sex-typical behaviours. The BNSTp receives olfactory input through the accessory olfactory bulb and projects to the medial pre-optic area, medial amygdala, hypothalamus and mesolimbic reward pathway 11 , 17 . We used our oestradiol treatment paradigm and performed translating ribosome affinity purification (TRAP), followed by RNA sequencing (RNA-seq), on the BNSTp from Esr1 Cre/+ ; Rpl22 HA/+ mice, enabling selective capture of ribosome-bound transcripts from Esr1 + cells. We identified 358 genes regulated by oestradiol, including genes known to be induced by E2 in breast cancer, such as Pgr and Nrip1 (Fig. 1e and Supplementary Table 2 ). 
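As a schematic of the differential-expression step (the study uses DESeq2, which fits negative-binomial generalized linear models to the counts), a heavily simplified stand-in can be written as a per-gene test on log-transformed counts with Benjamini–Hochberg correction. Everything below, including the count matrix, is invented for illustration and is not the paper's actual pipeline.

```python
# Simplified stand-in for calling E2-regulated genes from a count matrix.
import numpy as np
from scipy.stats import ttest_ind
from statsmodels.stats.multitest import multipletests

rng = np.random.default_rng(1)
counts = rng.poisson(50, size=(1000, 8))     # invented: genes x samples
counts[:30, 4:] *= 3                          # spike in 30 "E2-induced" genes
groups = np.array([0, 0, 0, 0, 1, 1, 1, 1])   # vehicle vs E2 replicates

# Log counts per million, then a per-gene two-sample test with BH correction.
log_cpm = np.log2(counts / counts.sum(axis=0) * 1e6 + 1)
pvals = ttest_ind(log_cpm[:, groups == 1], log_cpm[:, groups == 0], axis=1).pvalue
reject, padj, _, _ = multipletests(pvals, alpha=0.1, method="fdr_bh")
print(f"{reject.sum()} genes called E2-regulated at FDR < 0.1")
```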
We then validated several of these E2-regulated genes by in situ hybridization (Fig. 1f , Extended Data Fig. 3 and Extended Data Table 1 ). Genes that contribute to neuron wiring ( Brinp2 , Unc5b and Enah ) and synaptic plasticity ( Rcn1 and Irs2 ) were robustly induced by oestradiol in the BNSTp, illustrating how oestradiol signalling may sculpt sexual differentiation of BNSTp circuitry. To identify oestradiol-responsive chromatin regions, which may involve signalling pathways other than direct ERα binding 18 , we used our oestradiol treatment paradigm and performed assay for transposase-accessible chromatin with sequencing (ATAC–seq) on BNSTp Esr1 + cells collected from Esr1 Cre/+ ; Sun1 – GFP lx/+ mice. Across sexes, we detected 7,293 chromatin regions that increase accessibility with treatment (E2-open) as well as 123 regions that decrease accessibility (E2-close; Fig. 1g , Extended Data Fig. 4a–e and Supplementary Table 3 ). Motif enrichment analysis of these E2-open regions, which occurred primarily at distal enhancer elements (Extended Data Fig. 4c ), showed that 89% contain an ERE (Extended Data Fig. 4f ), consistent with the observation that nearly all ERα-binding sites overlapped an E2-open region (Extended Data Fig. 4g ). These results indicate that direct oestrogen receptor binding, rather than indirect signalling pathways, drives most E2-responsive chromatin regions in the BNSTp 19 . After examining the relationship between oestradiol-regulated chromatin loci and gene expression, we noted that E2-open regions localized at both E2-upregulated and E2-downregulated genes (Extended Data Fig. 5a ). E2-open regions at downregulated genes contained EREs yet lacked widespread ERα binding (Extended Data Fig. 5b, c ), suggesting that transient ERα recruitment may contribute to gene repression 20 . E2-upregulated genes with corresponding E2-responsive chromatin loci include Brinp2 , Rcn1 , Enah and Tle3 (Fig. 1h ); E2-downregulated genes include Astn2 , a regulator of synaptic trafficking, and Nr2f1 (Extended Data Figs. 3 and 5d ). Although most oestradiol regulation events were shared between sexes in our treatment paradigm, we noted certain sex-dependent effects. Pairwise comparison by sex revealed nearly 300 differential genes between females and males in our TRAP RNA-seq data (Supplementary Table 2 ). Moreover, we observed 306 genes with a differential response to oestradiol between sexes (Extended Data Fig. 5e, f and Supplementary Table 2 ). These sex-dependent, E2-responsive genes lacked enrichment of E2-responsive chromatin regions (Extended Data Fig. 5g ), which may indicate further oestradiol regulation at the translational level 21 . Likewise, across ERα CUT&RUN and ATAC–seq modalities, we observed negligible sex differences and sex-dependent, E2-responsive loci (Extended Data Fig. 5h–j and Supplementary Table 3 ), demonstrating that females and males mount a similar genomic response to exogenous oestradiol on removal of the hormonal milieu. Sex differences in gene regulation Across rodents and humans, the BNSTp of males is approximately 1.5–2 times larger than that of females 22 , 23 . In mice, this structural dimorphism arises from male-specific neonatal ERα activation, which promotes neuron survival 24 , 25 . Although BNSTp Esr1 + neurons are known to be GABAergic 16 , the identity of male-biased GABAergic neuron types remains unclear. 
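A note on the overlap statistics used in this section (for example, that nearly all ERα-binding sites overlapped an E2-open region): such checks reduce to genomic interval intersection, usually performed with bedtools or GenomicRanges. A minimal Python sketch with invented coordinates:

```python
# Minimal interval-intersection sketch for peak-overlap statements.
def overlaps(peak, regions):
    """True if a (chrom, start, end) peak intersects any listed region."""
    chrom, start, end = peak
    return any(c == chrom and start < e and s < end for c, s, e in regions)

era_peaks = [("chr11", 1000, 1400), ("chr4", 5200, 5600)]  # invented
e2_open = [("chr11", 900, 1500), ("chr4", 9000, 9400)]     # invented

frac = sum(overlaps(p, e2_open) for p in era_peaks) / len(era_peaks)
print(f"{frac:.0%} of ERα peaks overlap an E2-open region")
```

For genome-scale peak sets, sorted sweeps or interval trees replace this quadratic scan.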
To characterize these cells, we reanalysed a single-nucleus RNA-seq (snRNA-seq) dataset collected from the BNST of adult, gonadally intact females and males 26 . Seven BNSTp Esr1 + transcriptomic neuron types emerged from this analysis, and two of these, marked by Nfix (i1:Nfix) and Esr2 (i3:Esr2), are more abundant in males than in females (Fig. 2a, b and Extended Data Fig. 6a, b ). Although a male bias in Esr2 /ERβ-labelled cells is known 27 , Nfix expression has not been described previously in the BNSTp. Immunofluorescent staining confirmed that males have twice as many ERα + Nfix + neurons as females (Fig. 2c and Extended Data Fig. 6c ). Fig. 2: Sex differences in cell type abundance and gene regulation in BNSTp Esr1 + cells. a , Uniform manifold approximation and projection (UMAP) visualization of BNSTp Esr1 + snRNA-seq inhibitory neuron clusters, coloured by identity (left), sex (middle) and Esr1 expression (right). b , Proportion of BNSTp Esr1 + nuclei in each BNSTp Esr1 + inhibitory neuron cluster per sex. Proportions of i1:Nfix ( P adj = 0.002) and i3:Esr2 ( P adj = 0.002) neurons are higher in males than in females. Boxplot centre, median; box boundaries, first and third quartiles; whiskers, 1.5 × IQR from boundaries, n = 7, ** P adj < 0.01, one-sided, Wilcoxon rank-sum test, adjusted with the Benjamini–Hochberg procedure. c , BNSTp immunofluorescence (IF) staining for GFP (left micrographs) and Nfix (middle micrographs) in P14 female and male Esr1 Cre/+ ; Sun1 – GFP lx/+ animals (scale bar, 100 µm), with combined images (right micrographs) and their quantification (boxplots; right). Boxplot centre, median; box boundaries, first and third quartiles; whiskers, 1.5 × IQR from boundaries, n = 6, P = 0.0422, * P < 0.05, two-sided, unpaired t -test. d , Heatmap of median MetaNeighbor area under the receiver operating characteristic curve (AUROC) values for BNSTp Esr1 + clusters and cortical/hippocampal GABAergic neuron subclasses. The colour bar indicates the developmental origin of GABAergic subclasses. CGE, caudal ganglionic eminence; MGE, medial ganglionic eminence. e , Top: heatmap of MetaNeighbor AUROC values for BNSTp and MPOA Esr1 + clusters. Bottom: average expression of i1:Nfix marker genes across BNSTp and MPOA Esr1 + clusters. Dotted box indicates shared identity of i1:Nfix and i20:Gal.Moxd1 cells. n = 297 i20:Gal.Moxd1 cells, 2,459 i1:Nfix cells. Boxplot centre, median; box boundaries, first and third quartiles; whiskers, 1.5 × IQR from boundaries. f , Number of differentially expressed genes (DEGs) between females and males (DESeq2, P adj < 0.1) per BNST neuron snRNA-seq cluster. g , R 2 between percentage of TF gene expression and number of sex DEGs per cluster across snRNA-seq clusters. The inset shows correlation for the top-ranked TF gene, Esr1 . The error band represents the 95% confidence interval. h , Differential ATAC sites between gonadectomized (GDX), vehicle-treated females and males (top) and gonadally intact females and males (middle). Blue dots (edgeR, log 2 [FC] > 1, P adj < 0.05), red dots (edgeR, log 2 [FC] < −1, P adj < 0.05). Bottom: enrichment analysis of sex-biased ATAC peaks at sex DEGs. i , Top: k -means clustering (c1–c4) of differentially accessible ATAC peaks across four conditions (edgeR, P adj < 0.01). Bottom: dotplot showing the percentage of sites per cluster overlapping E2-open ATAC loci and motif enrichment analysis of peaks in each cluster (AME algorithm). ARE, androgen response element. j , Example ATAC peaks in k -means clusters 1 and 2.
Top left number is the y-axis range in CPM. Shaded band indicates peak region. Source Data Full size image To interpret the functional relevance of BNSTp Esr1 + neuron types, we compared their gene expression profiles to the mouse cortical and hippocampal single-cell RNA-seq atlas using MetaNeighbor 28 , 29 . i1:Nfix neurons uniquely matched the identity of Lamp5 + neurogliaform interneurons 30 , 31 (Fig. 2d and Extended Data Fig. 6d, e ) and also shared markers ( Moxd1 and Cplx3 ; Extended Data Fig. 6b, f, g ) with a male-biased neuron type (i20:Gal/Moxd1) in the sexually dimorphic nucleus of the preoptic area (SDN-POA) that is selectively activated during male-typical mating, inter-male aggression and parenting behaviours 32 . Beyond these two genes, i1:Nfix and i20:Gal/Moxd1 neuron types share a transcriptomic identity, consistent with observed Nfix immunofluorescence in both the BNSTp and SDN-POA (Fig. 2e and Extended Data Fig. 6h ). Together, these results define male-biased neurons in the BNSTp and reveal a common Lamp5 + neurogliaform identity between the BNSTp and SDN-POA. We next examined sex differences in gene expression and found extensive and robust (false discovery rate < 0.1) sex-biased expression across BNST neuron types (Fig. 2f , Extended Data Fig. 7a–d and Supplementary Table 4 ). Most sex differences were specific to individual types (for example, Dlg2 /PSD-93 and Kctd16 in i1:Nfix neurons), whereas select differences were detected in multiple populations (for example, Tiparp and Socs2 ; Extended Data Fig. 7b, c ). Relative to all other TF genes in the genome, Esr1 , along with coexpressed hormone receptors, Ar and progesterone receptor ( Pgr ), correlated best with sex-biased gene expression (Fig. 2g and Extended Data Fig. 7e, f ), indicating potential regulatory function. To identify chromatin regions controlling sex differences in BNSTp gene expression, we performed ATAC–seq on BNSTp Esr1 + cells collected from gonadally intact Esr1 Cre/+ ; Sun1 – GFP lx/+ mice. Approximately 18,000 regions differed in accessibility between sexes; moreover, these regions localized at sex-biased genes detected in Esr1 + neuron types (Fig. 2h , Extended Data Fig. 7g, h and Supplementary Table 5 ). By contrast, gonadectomy reduced the number of sex-biased regions to 71 (Fig. 2h and Supplementary Table 5 ). We compared chromatin accessibility across sexes and gonadal hormone status using k -means clustering (a minimal sketch of this step is given below) and discovered male-specific, but not female-specific, responses to gonadectomy (Fig. 2i and Extended Data Fig. 7i–k ). Notably, chromatin regions that close specifically in males on gonadectomy (cluster 1) primarily contained the androgen response element, whereas regions closing across both sexes (cluster 2) were enriched for the ERE and strongly overlapped E2-open regions (Fig. 2i, j ). Thus, in the BNSTp, oestradiol maintains chromatin in an active state across both sexes, whereas testosterone promotes chromatin activation and repression in males. Collectively, these data indicate that gonadal hormone receptors drive adult sex differences in gene expression, largely as a consequence of acute hormonal state. ERα drives neonatal chromatin state Sexual dimorphism in BNSTp wiring emerges throughout a 2-week window following birth, well after neural oestradiol has subsided in males.
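As referenced above, the k-means grouping of differential ATAC peaks across the four conditions (gonadally intact and gonadectomized females and males) can be sketched as follows. The matrix here is random stand-in data; the analysis in Fig. 2i clusters scaled counts at differentially accessible peaks.

```python
# Sketch: k-means grouping of differential ATAC peaks by their normalized
# accessibility across four conditions. Input matrix is invented.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(2)
X = rng.normal(size=(500, 4))  # invented: peaks x 4 conditions, z-scored

km = KMeans(n_clusters=4, n_init=10, random_state=0).fit(X)
for c in range(4):
    profile = X[km.labels_ == c].mean(axis=0)
    print(f"cluster c{c + 1}: mean accessibility profile {np.round(profile, 2)}")
```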
To determine the genomic targets of the neonatal surge, we performed ATAC–seq on BNSTp Esr1 + cells at postnatal day 4 (P4), which corresponds to the onset of male-biased BNSTp cell survival and axonogenesis 33 , 34 . We detected about 2,000 sex differences in chromatin loci at this time, and nearly all sex differences were dependent on neonatal oestradiol (NE; Fig. 3a , Extended Data Fig. 8a, b and Supplementary Table 6 ). NE-open regions were similarly induced by oestradiol in our adult dataset (Extended Data Fig. 8c, d ). To determine whether ERα drives male-typical chromatin opening, we performed ERα CUT&RUN on Esr1 + cells from females treated acutely with vehicle or oestradiol on the day of birth. Oestradiol rapidly recruited ERα to NE-open regions (Fig. 3a , Extended Data Fig. 8e–h and Supplementary Table 7 ). Our results demonstrate that ERα activation controls neonatal sex differences in the chromatin landscape. Fig. 3: Neonatal ERα genomic binding drives a sustained male-biased gene expression program. a , Heatmap of P4 BNST Esr1 + ATAC, P0 IgG CUT&RUN and P0 ERα CUT&RUN CPM ±1 kb around 1,605 NE-open and 403 NE-close ATAC peaks (edgeR, P adj < 0.1). ERα + , Sun1–GFP + nuclei; ERα − , Sun1–GFP − nuclei. b , UMAPs of adult (left) and neonatal (middle left) BNST Esr1 + snRNA-seq clusters; neonatal snRNA-seq clusters coloured by sex (middle right) and time point (right). c , Left: UMAPs of Nfix expression (top left), gene activity score (top right), motif chromVAR deviation score (bottom left) and CUT&RUN chromVAR deviation score (bottom right). Right: neonatal single-nucleus ATAC (snATAC) and adult BNSTp Nfix CUT&RUN tracks at the Nfix locus. Top left number is the y-axis range in CPM. Shaded band indicates peak region. Peak–RNA correlation indicates correlation coefficient for snATAC peaks correlated with Nfix expression. d , Heatmap of differential snATAC CPM between males (M) and females (F) at 1,605 NE-open sites, scaled across snRNA-seq clusters and grouped using k -means clustering. The barplot indicates the percentage of overlap for each k -means cluster with total and E2-induced BNSTp Nfix CUT&RUN peaks. e , Top: number of sex DEGs (MAST, P adj < 0.05) in P4 multiome clusters. Bottom: heatmaps indicating RNA log 2 [FC] of P4 sex DEGs (left) and Pearson’s correlation coefficient of NE-open (red) and NE-close (blue) ATAC peaks (right) linked to sex DEGs in each cluster. Genes without significant differential expression or correlation coefficients (not significant (NS)) are shown in white. f , Cyp19a1 /aromatase expression on P4. g , Left: NE-open ATAC peaks correlating with Lrp1b expression in Cyp19a1 − clusters, i2:Tac2 and i12:Esr1. Top left number is the y-axis range in CPM. Shaded band indicates peak region. Right, sex difference in Lrp1b expression in i2:Tac2 ( n = 260 female, 153 male, P adj = 2.13 × 10 −8 ), i4:Bnc2 ( n = 437 female, 373 male, P adj = 5.62 × 10 −37 ), i12:Esr1 ( n = 803 female, 507 male, P adj = 1.09 × 10 −12 ) cells. *** P adj < 0.001, MAST. h , Proportion of P4 sex DEGs detected as sex biased on P14. i , Top: i1:Nfix-specific, NE-open ATAC peaks at Fat1 and Scg2 loci on P4 and P14. Top left number is the y-axis range in CPM. Shaded band indicates peak region. Bottom: Sex difference in i1:Nfix Fat1 and Scg2 expression on P4 ( Fat1 , P adj = 1.28 × 10 −37 ; Scg2 , P adj = 1.54 × 10 −46 ; n = 887 female, 676 male) and P14 ( Fat1 , P adj = 1.13 × 10 −11 ; Scg2 , P adj = 1.52 × 10 −5 ; n = 554 female, 829 male). *** P adj < 0.001, MAST. 
Full size image Previous studies have proposed that adult sex differences in behaviour arise from permanent epigenomic modifications induced during the neonatal hormone surge 35 . Our datasets allowed us to examine whether chromatin regions regulated by neonatal hormone maintain sex-biased accessibility into adulthood. Only a small proportion of NE-regulated regions (about 10%) are maintained as sex biased in gonadally intact adults (Extended Data Fig. 9a ), implying substantial reprogramming of sex differences as a result of hormonal production during puberty (Fig. 2h ). Notably, although most NE-open loci did not maintain male-biased accessibility after puberty, they still localized at adult male-biased genes and clustered around adult male-biased ATAC peaks (Extended Data Fig. 9b–d ). These results suggest that certain male-biased genes undergo sequential regulation by ERα and AR in early life and adulthood, respectively. Sustained sex-biased gene expression Our identification of approximately 2,000 chromatin regions controlled by the neonatal hormone surge suggests that ERα drives extensive sex differences in the expression of genes that control brain sexual differentiation. To identify these genes, and assess the longevity of their expression, we performed single-nucleus multiome (RNA and ATAC) sequencing on female and male BNST Esr1 + cells collected at P4 and P14, after the closure of the neonatal critical period 3 (Fig. 3b and Extended Data Fig. 10a, b ). We profiled 14,836 cells and found that Esr1 + neuron identity is largely the same across P4, P14 and adulthood 36 (Fig. 3b and Extended Data Fig. 10c–f ). To identify TFs regulating Esr1 + neuron identity, we ranked TFs on their potential to control chromatin accessibility and their expression specificity across neuron types 37 (Extended Data Fig. 10g ). This approach uncovered canonical GABAergic identity TF genes a priori, including Lhx6 , Prox1 and Nkx2-1 , as well as regulators Zfhx3 and Nr4a2 (Extended Data Fig. 10g ). In addition, Nfix was predicted to regulate the identity of the male-biased i1:Nfix neuron type (Fig. 3c and Extended Data Fig. 10f, g ). Profiling Nfix binding in the adult BNSTp confirmed that the binding sites of this factor, including at the Nfix locus itself, are maintained in an active state primarily in i1:Nfix neurons (Fig. 3c , Extended Data Fig. 10h, i and Supplementary Table 8 ). Further examination of NE-responsive chromatin regions showed that NE-open regions vary as a function of neuron identity, with NE-open regions in i1:Nfix neurons preferentially containing Nfix binding events (Fig. 3d ). These data suggest that, in addition to specifying the chromatin landscape, identity TFs may dictate the cellular response to neonatal oestradiol by influencing ERα binding. Differential expression analysis across Esr1 + neuron types on P4 identified >400 sex-biased genes (Fig. 3e , Extended Data Fig. 11a and Supplementary Table 9 ). Performing RNA-seq on BNSTp Esr1 + cells collected from females treated at birth with vehicle or oestradiol showed that these sex differences largely arise as a consequence of the neonatal surge (Extended Data Fig. 11b–e and Supplementary Table 10 ). Notably, oestradiol-dependent sex differences in gene expression and chromatin state occurred in neurons lacking Cyp19a1 /aromatase expression (Fig. 3e–g ), indicative of non-cell-autonomous oestradiol signalling. 
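The multiome assays described above pair gene expression and chromatin accessibility in the same nuclei, which is what allows enhancers to be linked to genes by correlation (the "peak–RNA correlation" of Fig. 3c and the regulatory map described next). A minimal sketch with invented vectors; in practice, accessibility and expression are aggregated over groups of cells before correlating.

```python
# Minimal sketch of peak-gene linkage by correlation across cell groups.
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(3)
n_groups = 40                                  # e.g., pseudobulk cell groups
gene_expr = rng.normal(size=n_groups)          # expression of one gene
peak_acc = 0.8 * gene_expr + rng.normal(scale=0.5, size=n_groups)  # linked peak

r, p = pearsonr(peak_acc, gene_expr)
print(f"peak-gene correlation r = {r:.2f} (P = {p:.1e})")
```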
To link our chromatin and gene expression data, we constructed a gene regulatory map across Esr1 + neuron types consisting of sex-biased genes and NE-regulated enhancers with correlated accessibility (Fig. 3e and Extended Data Fig. 11f, g ). This map demonstrates both divergent responses across neuron types as well as neuron-type-specific enhancers for common sex-biased targets. Notably, we identified Arid1b , an autism spectrum disorder candidate gene, among genes regulated by distinct enhancers across neuron types (Extended Data Fig. 11g ). Further examination showed that about 40% of high-confidence (family-wise error rate ≤ 0.05) autism spectrum disorder candidate genes 38 , including Grin2b, Scn2a1 (also known as Scn2a ) and Slc6a1 , contained NE-open chromatin regions and ERα occupancy (Extended Data Fig. 8j and Supplementary Table 6 ). We also examined whether sex-biased genes, and their corresponding enhancers, are sustained across the neonatal critical period by comparing Esr1 + neurons between P4 and P14. Although the total number of sex-biased genes declined between P4 and P14, a subset persisted as sex biased throughout the neonatal critical window (Fig. 3h and Supplementary Table 10 ). In i1:Nfix neurons, about 20% of differentially expressed genes on P4 persisted as sex biased on P14. These genes regulate distinct components of neural circuit development, including neurite extension ( Klhl1 and Pak7 (also known as Pak5 )), axon pathfinding ( Epha3 and Nell2 ), neurotransmission ( Kcnab1 and Scg2 ) and synapse formation ( Il1rap and Tenm2 ; Fig. 3h and Extended Data Fig. 11h ). Together, these results show that neonatal ERα activation drives the epigenetic maintenance of a gene expression program that facilitates sexual differentiation of neuronal circuitry. Sustained sex differences require ERα The adult display of male mating and territoriality behaviours requires ERα expression in GABAergic neurons 16 . To determine whether ERα is also required for sustained sex differences in gene expression, we performed snRNA-seq on 38,962 BNST GABAergic neurons isolated from P14 conditional mutant males lacking ERα ( Vgat Cre ; Esr1 lx/lx ; Sun1 – GFP lx ), and littermate control females and males ( Vgat Cre ; Esr1 +/+ ; Sun1 – GFP lx ; Fig. 4a and Extended Data Fig. 12a–d ). GABAergic neurons in ERα-mutant males did not deviate from P14 control or adult BNST neuron types (Fig. 4a and Extended Data Fig. 12b ), indicating that ERα is dispensable for neuron identity. However, the abundance of male-biased i1:Nfix and i3:Esr2 neurons dropped to female levels in Vgat Cre ; Esr1 lx/lx males (Fig. 4b and Extended Data Fig. 12d ), suggesting that neonatal ERα activation is essential for their male-typical abundance. Fig. 4: ERα is required for sustained sex differences in gene expression. a , UMAPs of adult (top left) and P14 (top right) BNST Vgat + snRNA-seq clusters; P14 Vgat + snRNA clusters coloured by group (bottom left) and Esr1 + status (bottom right). b , Top: number of female versus male sex DEGs (MAST, P adj < 0.05) in P14 snRNA clusters (black bar). Number of female versus male sex DEGs detected in female versus male KO comparison (grey bar). Bottom: heatmap of mean expression of i1:Nfix sex DEGs, scaled across control males, control females and conditional ERα-KO males. c , Neonatal ERα activation drives a sustained male-typical gene expression program. 
Full size image Differential expression analysis between control females and control or conditional ERα-knockout (KO) males in each neuron type established that ERα is required for nearly all sexually dimorphic gene expression, with the exception of genes located on the Y chromosome or genes escaping X inactivation (Fig. 4b , Extended Data Fig. 12e and Supplementary Table 11 ). Notably, ERα-KO males exhibited feminized expression of sex-biased genes (Fig. 4c and Extended Data Fig. 12f ). Together, these findings demonstrate that the neonatal hormone surge drives a sustained male-typical gene expression program through activation of a master regulator TF, ERα (Fig. 4c ). Discussion Here we identify the genomic targets of ERα in the brain and demonstrate that BNSTp sexual differentiation is defined by both male-biased cell number and gene expression. We find that sexual dimorphism in the BNSTp equates to increased numbers of i1:Nfix and i3:Esr2 neurons in males. The transcriptomic identity of i1:Nfix neurons resembles that of cortical Lamp5 + neurogliaform interneurons, which provide regional inhibition through synaptic and ambient release of GABA 39 . As all BNSTp neurons and much of the POA are GABAergic, we predict that higher numbers of i1:Nfix inhibitory neurons enable stronger disinhibition of downstream projection sites. The net result is a gain of responses to social information, leading to male-typical levels of mounting or attacking 40 . Male-biased populations of inhibitory neurons also modulate sex-typical behaviours in Drosophila , but they do not rely on gonadal hormones to specify sex-biased enhancers 41 , 42 , 43 , 44 , 45 . In vertebrates, hormone receptor signalling may have evolved to coordinate gene regulation throughout a neural circuit as a strategy for controlling context-dependent behavioural states. Moreover, the association between hormone receptor target genes identified here and human neurological and neurodevelopmental conditions may explain the notable sex biases of these diseases. Our data show that the neonatal hormone surge activates ERα to drive a sustained male-biased gene expression program in the developing brain. We speculate that this program establishes male-typical neuronal connectivity across the neonatal critical period and potentially primes the response to hormone receptor activation at puberty. In the adult brain, gonadectomy ablated sex differences in chromatin accessibility, and under these conditions, Esr1 + neurons of both sexes exhibited a similar genomic response to exogenous oestradiol. Together, these findings suggest that although sex differences in developmental gonadal hormone signalling establish dimorphisms in BNSTp circuitry, the genome remains responsive to later alterations in the hormonal milieu. Likewise, manipulating hormonal status, circuit function or individual genes consistently demonstrates that both sexes retain the potential to engage in behaviours typical of the opposite sex 46 , 47 , 48 , 49 . This study implicates puberty as a further critical period for sexual differentiation of gene regulation and provides an archetype for studying hormone receptor action across life stages, brain regions and species. Methods Animals All animals were maintained on a 12-h light/12-h dark cycle and provided food and water ad libitum. All mouse experiments were performed under strict guidelines set forth by the CSHL Institutional Animal Care and Use Committee. All animals were randomly assigned to experimental groups. Esr1 Cre (ref.
50 ), Rpl22 HA (ref. 51 ), ROSA26 CAG-Sun1–sfGFP–Myc (ref. 52 ; abbreviated as Sun1 – GFP ), Vgat Cre (ref. 53 ) and C57Bl6/J wild-type mice were obtained from Jackson Labs. Esr1 lx mice were received from S. A. Khan 54 . Adult male and female mice were used between 8 and 12 weeks of age. For adult hormone treatment experiments, animals were euthanized for tissue collection 4 h after subcutaneous administration of 5 μg E2 (Sigma E8515) suspended in corn oil (Sigma C8267) or vehicle, 3 weeks post-gonadectomy. For neonatal CUT&RUN, ATAC–seq and RNA-seq experiments, animals were treated with 5 μg E2 or vehicle on P0 and collected 4 h later (ERα CUT&RUN) or 4 days later (ATAC–seq and nuclear RNA-seq). For neonatal multiome, snRNA-seq and IF quantification, animals were collected on P4 (multiome) or P14 (multiome, snRNA-seq and IF staining). Cell lines Cell lines include mHypoA clu-175 clone (Cedarlane Labs) and MCF-7 (ATCC). Cell lines were not tested for mycoplasma contamination. Cells were maintained in standard DMEM supplemented with 10% FBS and penicillin/streptomycin. Before CUT&RUN, MCF7 cells were grown in phenol-red-free DMEM medium containing 10% charcoal-stripped FBS and penicillin/streptomycin for 48 h and then treated with 20 nM 17-β-oestradiol or vehicle (0.002% ethanol) for 45 min. Adult RNA-seq and in situ hybridization Experiments were performed as previously described 55 . Briefly, the BNSTp was microdissected following rapid decapitation of deeply anaesthetized adult Esr1 Cre/+ ; Rpl22 HA/+ mice. Tissue homogenization, immunoprecipitation and RNA extraction were performed, and libraries were prepared from four biological replicate samples (each consisting of 8–9 pooled animals) using NuGEN Ovation RNA-Seq kits (7102 and 0344). Multiplexed libraries were sequenced with 76-bp single-end reads on the Illumina NextSeq. Validation by in situ hybridization staining and quantification was performed by an investigator blinded to experimental condition, as previously described 16 , 55 . Riboprobe sequences are listed in Extended Data Table 1 . Isolation of nuclei from adult mice for ATAC–seq Adult Esr1 Cre/+ ; Sun1 – GFP lx/+ mice (four pooled per condition) were deeply anaesthetized with ketamine/dexmedetomidine. Sections of 500 μm spanning the BNSTp were collected in an adult mouse brain matrix (Kent Scientific) on ice. The BNSTp was microdissected and collected in 1 ml of cold supplemented homogenization buffer (250 mM sucrose, 25 mM KCl, 5 mM MgCl 2 , 120 mM tricine-KOH, pH 7.8), containing 1 mM dithiothreitol, 0.15 mM spermine, 0.5 mM spermidine and 1× EDTA-free PIC (Sigma Aldrich 11873580001). The tissue was dounce homogenized 15 times in a 1-ml glass tissue grinder (Wheaton) with a loose pestle. Next, 0.3% IGEPAL CA-630 was added, and the suspension was homogenized five times with a tight pestle. The homogenate was filtered through a 40-μm strainer and then centrifuged at 500 g for 15 min at 4 °C. The pellet was resuspended in 0.5 ml homogenization buffer containing 1 mM dithiothreitol, 0.15 mM spermine, 0.5 mM spermidine and 1× EDTA-free PIC. A total of 30,000 GFP + nuclei were collected into cold ATAC-RSB (10 mM Tris-HCl pH 7.5, 10 mM NaCl, 3 mM MgCl 2 ) using the Sony SH800S Cell Sorter (purity mode) with a 100-μm sorting chip. After sorting, 0.1% Tween-20 was added, and the nuclei were centrifuged at 500 g for 5 min at 4 °C. The pellet of nuclei was directly resuspended in transposition reaction mix.
ATAC–seq library preparation Tn5 transposition was performed using the OMNI-ATAC protocol 56 . A 2.5 μl volume of Tn5 enzyme (Illumina 20034197) was used in the transposition reaction. Libraries were prepared with NEBNext High-Fidelity 2× PCR Master Mix (NEB M0541L), following the standard protocol. After the initial five cycles of amplification, another four cycles were added, on the basis of qPCR optimization. Following amplification, libraries were size selected (0.5×–1.8×) twice with AMPure XP beads (Beckman Coulter A63880) to remove residual primers and large genomic DNA. Individually barcoded libraries were multiplexed and sequenced with paired-end 76-bp reads on an Illumina NextSeq, using either the Mid or High Output Kit. Cell line CUT&RUN To collect cells for CUT&RUN, cells were washed twice with Hank’s buffered salt solution (HBSS) and incubated for 5 min with pre-warmed 0.5% trypsin–EDTA (10×) at 37 °C/5% CO 2 . Trypsin was inactivated with DMEM supplemented with 10% FBS and penicillin/streptomycin (mHypoA cells) or phenol-red-free DMEM supplemented with 10% charcoal-stripped FBS and penicillin/streptomycin (MCF-7 cells). After trypsinizing, cells were centrifuged at 500 g in a 15-ml conical tube and resuspended in fresh medium. CUT&RUN was performed as previously described 14 , with minor modifications. Cells were washed twice in wash buffer (20 mM HEPES, pH 7.5, 150 mM NaCl, 0.5 mM spermidine, 1× PIC, 0.02% digitonin). Cell concentration was measured on a Countess II FL Automated Cell Counter (Thermo Fisher). A total of 25,000 cells were used per sample. Cells were bound to 20 μl concanavalin A beads (Bangs Laboratories, BP531), washed twice in wash buffer, and incubated overnight with primary antibody (ERα: Santa Cruz sc-8002 or EMD Millipore Sigma 06-935, Nfix: Abcam ab101341) diluted 1:100 in antibody buffer (wash buffer containing 2 mM EDTA). The following day, cells were washed twice in wash buffer, and 700 ng ml −1 protein A-MNase (pA-MNase, prepared in-house) was added. After 1 h incubation at 4 °C, cells were washed twice in wash buffer and placed in a metal heat block on ice. pA-MNase digestion was initiated with 2 mM CaCl 2 . After 90 min, digestion was stopped by mixing 1:1 with 2× stop buffer (340 mM NaCl, 20 mM EDTA, 4 mM EGTA, 50 μg ml −1 RNase A, 50 μg ml −1 glycogen, 0.02% digitonin). Digested fragments were released by incubating at 37 °C for 10 min, followed by centrifuging at 16,000 g for 5 min at 4 °C. DNA was purified from the supernatant by phenol–chloroform extraction, as previously described 14 . Adult brain CUT&RUN Nuclei were isolated from microdissected POA, BNSTp and MeAp from gonadectomized C57Bl6/J mice, following anatomic designations 57 (Fig. 1a ), as described previously 52 . Following tissue douncing, brain homogenate was mixed with a 50% OptiPrep solution and underlaid with 4.8 ml of 30% then 40% OptiPrep solutions, in 38.5-ml Ultra-clear tubes (Beckman-Coulter C14292). Ultracentrifugation was performed with a Beckman SW-28 swinging-bucket rotor at 9,200 r.p.m. for 18 min at 4 °C. Following ultracentrifugation, an ≈1.5-ml suspension of nuclei was collected from the 30/40% OptiPrep interface by direct tube puncture with a 3-ml syringe connected to an 18-gauge needle. Nucleus concentration was measured on a Countess II FL Automated Cell Counter. For ERα CUT&RUN (1:100, EMD Millipore Sigma 06-935), 400,000 nuclei were isolated from BNST, MPOA and MeA of five animals. 
For Nfix CUT&RUN (1:100, Abcam ab101341), 200,000 nuclei were isolated from BNSTp of five animals. A total of 400,000 cortical nuclei were used for the CUT&RUN IgG control (1:100, Antibodies-Online ABIN101961). Before bead binding, 0.4% IGEPAL CA-630 was added to the nucleus suspension to increase affinity for concanavalin A magnetic beads. All subsequent steps were performed as described above, with a modified wash buffer (20 mM HEPES, pH 7.5, 150 mM NaCl, 0.1% BSA, 0.5 mM spermidine, 1× PIC). CUT&RUN library preparation Cell line CUT&RUN libraries were prepared using the SMARTer ThruPLEX DNA-seq Kit (Takara Bio R400676), with the following PCR conditions: 72 °C for 3 min, 85 °C for 2 min, 98 °C for 2 min, (98 °C for 20 s, 67 °C for 20 s, 72 °C for 30 s) × 4 cycles, (98 °C for 20 s, 72 °C for 15 s) × 14 cycles (MCF7) or 10 cycles (mHypoA). Brain CUT&RUN libraries were prepared using the same kit with 10 PCR cycles. All samples were size selected with AMPure XP beads (0.5×–1.7×) to remove residual adapters and large genomic DNA. Individually barcoded libraries were multiplexed and sequenced with paired-end 76-bp reads on an Illumina NextSeq, using either the Mid or High Output Kit. For the mHypoA experiment, samples were sequenced with paired-end 25-bp reads on an Illumina MiSeq. Nfix immunofluorescence staining Brains were dissected from perfused P14 Esr1 Cre/+ ; Sun1 – GFP lx/+ animals and cryosectioned at 40 μm before immunostaining with primary antibodies to GFP (1:1,000, Aves GFP-1020) and Nfix (1:1,000, Thermo Fisher PA5-30897), and secondary antibodies against chicken (1:300, Jackson Immuno 703-545-155) and rabbit (1:800, Jackson Immuno 711-165-152), as previously described 16 . A Zeiss Axioimager M2 System equipped with MBF Neurolucida Software was used to take 20× wide-field image stacks spanning the BNSTp (five sections, both sides). The number of Nfix + , GFP + and Nfix + GFP + cells was quantified using Fiji/ImageJ from the centre three optical slices by an investigator blinded to condition. Neonatal bulk ATAC–seq Female and male Esr1 Cre/+ ; Sun1 – GFP lx/+ mice were injected subcutaneously with 5 μg E2 or vehicle on P0 and collected on P4 (4–5 animals pooled per condition and per replicate). The BNSTp was microdissected, as described above, and collected in 300 μl of cold, supplemented homogenization buffer. Nuclei were extracted as described for the adult brain. After filtering through a 40-μm strainer, the nuclei were diluted 3:1 with 600 μl of cold, supplemented homogenization buffer and immediately used for sorting. A total of 30,000 GFP + nuclei were collected into cold ATAC-RSB buffer using the Sony SH800S Cell Sorter (purity mode) with a 100-μm sorting chip. After sorting, nuclei transposition and library preparation were performed, as described above. P0 ERα CUT&RUN Female Esr1 Cre/+ ; Sun1 – GFP lx/+ mice were injected subcutaneously with 5 μg E2 or vehicle on P0 and collected 4 h later (5 animals pooled per condition and per replicate). The BNSTp, MPOA and MeA were microdissected, and nuclei were extracted, as described for the neonatal bulk ATAC–seq experiment. After filtering through a 40-μm strainer, the nuclei were diluted 3:1 with 600 μl of cold, supplemented homogenization buffer. A 2 mM concentration of EDTA was added, and the sample was immediately used for sorting. A total of 150,000 GFP + nuclei were collected into cold CUT&RUN wash buffer using the Sony SH800S Cell Sorter (purity mode) with a 100-μm sorting chip. 
GFP − events were collected into cold CUT&RUN wash buffer, and 150,000 nuclei were subsequently counted on the Countess II FL Automated Cell Counter for ERα− and IgG negative-control CUT&RUN. All subsequent steps were performed as described for the adult brain CUT&RUN experiments. P0 CUT&RUN libraries were prepared with 10 PCR cycles. Neonatal single-nucleus multiome sequencing The BNST was microdissected fresh from P4 and P14 female and male Esr1 Cre/+ ; Sun1 – GFP lx/+ mice, as described above (4–5 animals pooled per condition). Nuclei were extracted and prepared for sorting, as performed for the neonatal bulk ATAC–seq experiment, with the inclusion of 1 U μl −1 Protector RNase inhibitor (Sigma) in the homogenization buffer. A total of 40,000–50,000 GFP + nuclei were collected into 1 ml of cold ATAC-RSB buffer, supplemented with 0.1% Tween-20, 0.01% digitonin, 2% sterile-filtered BSA (Sigma A9576) and 1 U μl −1 Protector RNase inhibitor. The nuclei were centrifuged in a swinging-bucket rotor at 500 g for 10 min at 4 °C. About 950 μl of supernatant was carefully removed, and 200 μl 10x Genomics dilute nuclei buffer was added to the side of the tube without disturbing the pellet. The nuclei were centrifuged again at 500 g for 10 min at 4 °C. About 240 μl of supernatant was carefully removed, and the nuclei were resuspended in the remaining volume (about 7 μl). Samples were immediately used for the 10x Genomics Single Cell Multiome ATAC + Gene Expression kit (1000285), following the manufacturer’s instructions. snRNA-seq and snATAC–seq libraries were sequenced on an Illumina NextSeq, using the High Output kit. Each sample was sequenced to a depth of about 40,000–80,000 mean reads per cell for the snATAC library and about 40,000–50,000 mean reads per cell for the snRNA library. P14 snRNA-seq The BNSTp was microdissected from P14 female and male Vgat Cre ; Esr1 +/+ ; Sun1 – GFP lx and male Vgat Cre ; Esr1 lx/lx ; Sun1 – GFP lx mice. Tissue samples from individual animals were immediately flash frozen in an ethanol dry-ice bath and stored at −80 °C until n = 3 animals were collected per group. On the day of the experiment, tissue samples were removed from −80 °C and maintained on dry ice. With the tissue still frozen, cold, supplemented homogenization buffer was added to the tube, and the tissue was immediately transferred to a glass homogenizer and mechanically dounced and filtered, as described for our other neonatal experiments. A total of 80,000–90,000 GFP + nuclei were collected into 100 μl of cold ATAC-RSB buffer, supplemented with 1% sterile-filtered BSA (Sigma A9576), and 1 U μl −1 Protector RNase inhibitor, in a 0.5-ml DNA lo-bind tube (Eppendorf) pre-coated with 30% BSA. After collection, nuclei were pelleted with two rounds of gentle centrifugation (200 g for 1 min) in a swinging-bucket centrifuge at 4 °C. After the second round, the supernatant was carefully removed, leaving about 40 μl in the tube. The nuclei were gently resuspended in this remaining volume and immediately used for the 10x Genomics Single Cell 3′ Gene Expression kit v3 (1000424), following the manufacturer’s instructions. Each biological sample was split into two 10× lanes, producing 6 libraries that were pooled and sequenced on an Illumina NextSeq 2000 to a depth of about 45,000–60,000 mean reads per cell. Neonatal nuclear RNA-seq Female Esr1 Cre/+ ; Sun1 – GFP lx/+ mice were injected subcutaneously with 5 μg E2 or vehicle on P0. 
Four days later, animals were rapidly decapitated, and 400-μm sections were collected in cold homogenization buffer using a microtome (Thermo Scientific Microm HM 650V). The BNST was microdissected (4 animals pooled per condition) and collected in 1 ml of cold, supplemented homogenization buffer containing 0.4 U ml−1 RNAseOUT (Thermo Fisher, 10777019). Nuclei were isolated as described for neonatal bulk ATAC–seq. A total of 12,000 GFP+ nuclei were collected into cold Buffer RLT Plus supplemented 1:100 with β-mercaptoethanol (Qiagen, 74034) using the Sony SH800S Cell Sorter (purity mode) with a 100-μm sorting chip. Nuclei lysates were stored at −80 °C until all replicates were collected. Nuclei samples for all replicates were thawed on ice, and RNA was isolated using the Qiagen RNeasy Plus Micro Kit (74034). Strand-specific RNA-seq libraries were prepared using the Ovation SoLo RNA-seq system (Tecan Genomics, 0501-32), following the manufacturer's guidelines. Individually barcoded libraries were multiplexed and sequenced with single-end 76-bp reads on an Illumina NextSeq, using the Mid Output Kit. Bioinformatics and data analysis CUT&RUN data processing Paired-end reads were trimmed to remove Illumina adapters and low-quality basecalls (cutadapt -q 30) 58. Trimmed reads were aligned to mm10 using Bowtie2 (ref. 59) with the following flags: --dovetail --very-sensitive-local --no-unal --no-mixed --no-discordant --phred33. Duplicate reads were removed using Picard MarkDuplicates (REMOVE_DUPLICATES = true). Reads were filtered by mapping quality 60 (samtools view -q 40) and fragment length 61 (deepTools alignmentSieve --maxFragmentLength 120). Reads aligning to the mitochondrial chromosome and incomplete assemblies were also removed using SAMtools. After filtering, peaks were called on individual replicate BAM files using MACS2 callpeak (--min-length 25 -q 0.01) 62. To identify consensus Nfix peaks across samples, MACS2 callpeak was performed on BAM files merged across biological replicates (n = 2) and subsequently intersected across treatment and sex. TF peaks that overlapped peaks called in the IgG control were removed using bedtools intersect (-v) 63 before downstream analysis. CUT&RUN data analysis CUT&RUN differential peak calling was performed with DiffBind v2.10.0 (ref. 64). A count matrix was created from individual replicate BAM and MACS2 narrowPeak files (n = 2 per condition). Consensus peaks were recentred to ±100 bp around the point of highest read density (summits = 100). Contrasts between sex and treatment were established (categories = c(DBA_TREATMENT, DBA_CONDITION)), and edgeR 65 was used for differential peak calling. Differential ERα peaks with P adj < 0.1 were used for downstream analysis. For Nfix, differential peaks with P adj < 0.1 and abs(log2[FC]) > 1 were used for downstream analysis. Differential peak calling for the MCF-7 CUT&RUN experiment was performed with DESeq2 (P adj < 0.1) in DiffBind. Differential peak calling for the P0 ERα CUT&RUN experiment was performed with DESeq2 (P adj < 0.01) in DiffBind. To identify sex-dependent, oestradiol-responsive peaks for adult brain ERα CUT&RUN, the DiffBind consensus peakset count matrix was used as input to edgeR, and an interaction between sex and treatment was tested with glmQLFTest. Brain E2-induced ERα CUT&RUN peaks were annotated to NCBI RefSeq mm10 genes using ChIPseeker 66. DeepTools plotHeatmap was used to plot ERα CUT&RUN (Fig. 
1b), representing CPM-normalized bigwig files pooled across replicate and sex per condition, at E2-induced ERα peaks. Heatmaps of individual ERα CUT&RUN replicates are shown in Extended Data Fig. 2. CUT&RUNTools 67 was used to plot ERα CUT&RUN fragment ends surrounding ESR1 motifs (JASPAR MA0112.3) in E2-induced ERα ChIP–seq peaks. BETA (basic mode, -d 500000) 68 was used to determine whether ERα peaks were significantly overrepresented at E2-regulated RNA-seq genes (P < 0.01), as well as sex-dependent E2-regulated genes (P < 0.01), compared to non-differential, expressed genes. Motif enrichment analysis of ERα peaks was performed with AME 69 using the 2020 JASPAR core non-redundant vertebrate database. Motif enrichment analysis was performed using a control file consisting of shuffled primary sequences that preserves the frequency of k-mers (--control --shuffle--). The following seven ERα ChIP–seq files were lifted over to mm10 using UCSC liftOver and intersected with E2-induced ERα peaks to identify brain-specific and shared (≥4 intersections) ERα-binding sites: uterus (intersection of GEO: GSE36455 (uterus 1) 70 and GEO: GSE49993 (uterus 2) 71), liver (intersection of GEO: GSE49993 (liver 1) 71 and GEO: GSE52351 (liver 2) 72), aorta 72 (GEO: GSE52351), efferent ductules 73 (Supplementary Information) and mammary gland 74 (GEO: GSE130032). ClusterProfiler 75 was used to identify associations between brain-specific and shared ERα peak-annotated genes and Gene Ontology (GO) biological process terms (enrichGO, ont = 'BP', P adj < 0.1). For Disease Ontology (DO) and HUGO Gene Nomenclature Committee (HGNC) gene family enrichment, brain-specific ERα peak-associated gene symbols were converted from mouse to human using bioMart 76 and then analysed with DOSE 77 (enrichDO, P adj < 0.1) and enricher (P adj < 0.1). Log-odds ESR1 and ESR2 motif scores in brain-specific and shared ERα peaks were calculated with FIMO 78, using default parameters. MCF7 ERα CUT&RUN data were compared to MCF7 ERα ChIP–seq data from ref. 79 (GEO: GSE59530). Single-end ChIP–seq fastq files for two vehicle-treated and two 17β-oestradiol (E2)-treated IP and input samples were accessed from the Sequence Read Archive and processed identically to ERα CUT&RUN data, with the exception of fragment size filtering. Differential ERα ChIP–seq peak calling was performed using DiffBind DESeq2 (P adj < 0.01). DeepTools was used to plot CPM-normalized ERα CUT&RUN signal at E2-induced ERα ChIP–seq binding sites. DREME 80 and AME were used to compare de novo and enriched motifs between E2-induced MCF7 ERα CUT&RUN and ChIP–seq peaks. Adult RNA-seq data processing and analysis Reads were adapter trimmed and quality filtered (q > 30), and then mapped to the mm10 reference genome using STAR 81. The number of reads mapping to the exons of each gene was counted with featureCounts 82, using the NCBI RefSeq mm10 gene annotation. Differential gene expression analysis was performed using DESeq2 (ref. 83) with the following designs: effect of treatment (design = ~ batch + hormone), effect of sex (design = ~ batch + sex), two-way comparison of treatment and sex (design = ~ batch + hormone_sex), four-way comparison (design = ~ 0 + hormone_sex) and sex–treatment interaction (design = ~ batch + sex + hormone + sex:hormone). ATAC–seq data processing ATAC–seq data were processed using the ENCODE ATAC–seq pipeline with default parameters. 
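The CUT&RUN and ATAC–seq differential comparisons in this and the following subsections share one DiffBind-to-edgeR pattern. The sketch below is an illustrative reconstruction rather than the authors' exact script; 'samples.csv' is a hypothetical sample sheet listing the replicate BAM and MACS2 narrowPeak files with Treatment/Condition columns.

# Illustrative DiffBind -> edgeR differential peak calling (not the
# authors' exact code); samples.csv is a hypothetical sample sheet.
library(DiffBind)

dba_obj <- dba(sampleSheet = "samples.csv")
dba_obj <- dba.count(dba_obj, summits = 250)   # +/-250 bp for ATAC; 100 bp for CUT&RUN
dba_obj <- dba.contrast(dba_obj, categories = c(DBA_TREATMENT, DBA_CONDITION))
dba_obj <- dba.analyze(dba_obj, method = DBA_EDGER)
res <- dba.report(dba_obj, method = DBA_EDGER, th = 0.05)  # FDR-thresholded peaks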
To generate CPM-normalized bigwig tracks, quality-filtered, Tn5-shifted BAM files were converted to CPM-normalized bigwig files using DeepTools bamCoverage (--binSize 1 --normalizeUsing CPM). Adult GDX treatment ATAC–seq data analysis ATAC–seq differential peak calling was performed with DiffBind v2.10.0. A DiffBind dba object was created from individual replicate BAM and MACS2 narrowPeak files (n = 3 per condition). A count matrix was created with dba.count, and consensus peaks were recentred to ±250 bp around the point of highest read density (summits = 250). Contrasts between sex and treatment were established (categories = c(DBA_TREATMENT, DBA_CONDITION)), and edgeR was used for differential peak calling. Differential peaks with an FDR < 0.05 and abs(log2[FC]) > 1 or abs(log2[FC]) > 0 were used for downstream analysis. DeepTools computeMatrix and plotHeatmap were used to plot mean ATAC CPM at E2-open ATAC peaks. To identify sex-dependent, oestradiol-responsive peaks, the DiffBind consensus peakset count matrix was used as input to edgeR, and an interaction between sex and treatment was tested with glmQLFTest. E2-open ATAC peaks and total vehicle or E2 ATAC peaks (intersected across replicate and sex for each treatment condition) were annotated to NCBI RefSeq mm10 genes using ChIPseeker. ClusterProfiler was used to calculate the enrichment of GO biological process terms. DO and HGNC gene family enrichment was performed on E2-open ATAC peak-associated genes, as described above for the ERα CUT&RUN analysis. BETA (basic mode, -d 500000) 68 was used to determine whether E2-open ATAC peaks were significantly overrepresented at E2-regulated RNA-seq genes (P < 0.01), as well as sex-dependent E2-regulated genes (P < 0.01), compared to non-differential, expressed genes. Motif enrichment analysis of E2-open ATAC peaks was performed with AME, using the 2020 JASPAR core non-redundant vertebrate database. FIMO was used to determine the percentage of E2-open ATAC peaks containing the enriched motifs shown in Extended Data Fig. 4h, i. Adult gonadally intact ATAC–seq analysis ATAC–seq differential peak calling and comparison between gonadally intact (abbreviated as intact) and GDX ATAC samples were performed with DiffBind v2.10.0 and edgeR. A DiffBind dba object was created from individual replicate BAM and MACS2 narrowPeak files for the four groups: female intact (n = 2), male intact (n = 2), female GDX vehicle treated (n = 3), male GDX vehicle treated (n = 3). A count matrix was created with dba.count, and consensus peaks were recentred to ±250 bp around the point of highest read density (summits = 250). The consensus peakset count matrix was subsequently used as input to edgeR. Differential peaks (abs(log2[FC]) > 1, P adj < 0.05) were calculated between the female intact and male intact groups and between the female and male GDX vehicle-treated groups using glmQLFTest. BETA was used to assess the statistical association between gonadally intact, sex-biased ATAC peaks and sex DEGs called in BNSTp Esr1+ snRNA-seq clusters (top 500 genes per cluster, ranked by P adj). Sex DEGs ranked by ATAC regulatory potential score 68, a metric that reflects the number of sex-biased peaks and the distance of those peaks to the TSS, are shown in Extended Data Fig. 7g. HGNC gene family enrichment was performed on sex DEGs, using a background of expressed genes in any of the seven BNSTp Esr1+ clusters. 
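For the interaction and group contrasts above, the consensus count matrix is handed to edgeR directly. A minimal sketch, assuming 'counts' (peaks × samples) plus 'sex' and 'treatment' factors extracted from the DiffBind object; the coefficient name presumes factor levels F/M and vehicle/E2 and will differ for other level names.

# Sketch of the edgeR quasi-likelihood test for a sex-by-treatment
# interaction on the DiffBind consensus peak counts; 'counts', 'sex'
# and 'treatment' are assumed inputs.
library(edgeR)

design <- model.matrix(~ sex + treatment + sex:treatment)
y <- DGEList(counts = counts)
y <- calcNormFactors(y)
y <- estimateDisp(y, design)
fit <- glmQLFit(y, design)
qlf <- glmQLFTest(fit, coef = "sexM:treatmentE2")  # interaction coefficient
topTags(qlf)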
To identify differential peaks across the four conditions, an ANOVA-like design was created in edgeR by specifying multiple coefficients in glmQLFTest (coefficient = 2:4). A matrix of normalized counts in these differential peaks (P adj < 0.01) was clustered using k-means clustering (kmeans function in R), with k = 4 and iter.max = 50. For each k-means cluster, the cluster centroid was computed, and outlier peaks in each cluster were excluded on the basis of having low Pearson's correlation with the cluster centroid (R < 0.8). Depth-normalized ATAC CPM values in these peak clusters are shown in Fig. 2i (mean across biological replicates per group) and Extended Data Fig. 7 (individual biological replicates). Peak cluster overlap with E2-open ATAC loci (abs(log2[FC]) > 0, P adj < 0.05) was computed with bedtools intersect (-wa). For each peak cluster, motif enrichment analysis was performed by first generating a background peak list (matched for GC content and accessibility) from the consensus ATAC peak matrix using chromVAR (addGCBias, getBackgroundPeaks) 84, and then calculating enrichment with AME using the background peak list as the control (--control background peaks). In Fig. 2i, the JASPAR 2020 AR motif (MA0007.3) is labelled as ARE, and the ESR2 motif (MA0258.2) is labelled as ERE. Adult snRNA-seq and single-cell RNA-seq analysis Mouse BNST snRNA-seq data containing 76,693 neurons across 7 adult female and 8 adult male biological replicates 26 were accessed from GEO: GSE126836 and loaded into a Seurat object 85. Mouse MPOA single-cell RNA-seq data containing 31,299 cells across 3 adult female and 3 adult male biological replicates 32 were accessed from GEO: GSE113576 and loaded into a Seurat object. Cluster identity, replicate and sex were added as metadata features to each Seurat object. Pseudo-bulk RNA-seq analysis was performed to identify sex differences in gene expression in the BNST snRNA-seq dataset. Briefly, the Seurat object was converted to a SingleCellExperiment object (as.SingleCellExperiment). Genes were filtered by expression (genes with >1 count in ≥5 nuclei). NCBI-predicted genes were removed. For each cluster, nuclei annotated to the cluster were subsetted from the main Seurat object. Biological replicates containing ≤20 nuclei in the subsetted cluster were excluded. Gene counts were summed for each biological replicate in each cluster. Differential gene expression analysis across sex in each cluster was performed on the filtered, aggregated count matrix using DESeq2 (design = ~ sex) with alpha = 0.1. The BNSTp_Cplx3 cluster was excluded, as none of the replicates in this cluster contained more than 20 nuclei. Clusters containing ≥25% nuclei with ≥1 Esr1 count in the main Seurat object were classified as Esr1+ (i1:Nfix, i2:Tac2, i3:Esr2, i4:Bnc2, i5:Haus4, i6:Epsti1, i7:Nxph2, i8:Zeb2, i9:Th, i10:Synpo2, i11:C1ql3, i12:Esr1, i13:Avp, i14:Gli3). To identify TFs whose expression correlates with sex DEG number per cluster (Fig. 2g), a linear regression model with percentage of TF expression as the predictor variable and sex DEG number per cluster as the response variable was generated using the lm function in R stats (formula = DEG number ~ percentage of TF expression). This model was tested for all TFs in the SCENIC 86 mm10 database. All TFs were then ranked by R2 to identify those most predictive of sex DEG number, and the ranked R2 values are shown in Fig. 2g. To visualize BNSTp Esr1+ snRNA-seq data (Fig. 
2a), BNSTp Esr1+ clusters were subsetted from the main Seurat object. Gene counts were normalized and log transformed (LogNormalize), and the top 2,000 variable features were identified using FindVariableFeatures (selection.method = vst). Gene counts were scaled, and linear dimensionality reduction was performed by principal component analysis (runPCA, npcs = 10). BNSTp Esr1+ clusters were visualized with UMAP (runUMAP, dims = 10). To generate the heatmaps in Extended Data Fig. 7a, pseudo-bulk counts for each biological replicate included in the analysis were normalized and transformed with variance-stabilizing transformation (DESeq2 vst), subsetted for sex-biased genes in each cluster, and z-scaled across pseudo-bulk replicates. To examine differential abundance of BNSTp Esr1+ clusters between sexes (Fig. 2b), the proportion of total nuclei in each BNSTp Esr1+ cluster was calculated for each biological replicate. After calculating the proportions of nuclei, sample MALE6 was excluded as an outlier for having no detection (0 nuclei) of the i1:Nfix and i2:Tac2 clusters and overrepresentation of the i5:Haus4 cluster. The one-sided Wilcoxon rank-sum test (wilcox.test in R stats) was used to test for male-biased abundance of nuclei across biological replicates in each cluster. P values were adjusted for multiple hypothesis testing using the Benjamini–Hochberg procedure (method = fdr). To identify marker genes enriched in the i1:Nfix cluster relative to the remaining six BNSTp Esr1+ clusters (Extended Data Fig. 6b), differential gene expression analysis was performed using DESeq2 with design = ~ cluster_id (betaPrior = TRUE), alpha = 0.01, lfcThreshold = 2, altHypothesis = greater. To identify the enrichment of Lamp5+ subclass markers in BNSTp and MPOA Esr1+ clusters (Extended Data Fig. 6e), a Seurat object was created from the Allen Brain Atlas Cell Types dataset. Gene counts per cell were normalized and log transformed (LogNormalize), and subclass-level marker genes were calculated with the Wilcoxon rank-sum test (FindAllMarkers, test.use = wilcox, min.diff.pct = 0.2). The mean expression of Lamp5+ subclass markers (avg_log[FC] > 0.75, P adj < 0.05, <40% in non-Lamp5+ subclasses) was calculated in BNSTp and MPOA Esr1+ clusters and visualized using pheatmap. To generate the UMAP plots shown in Extended Data Fig. 6g, BNSTp Esr1+ clusters were integrated with MPOA/BNST Esr1-expressing clusters (e3: Cartpt_Isl1, i18: Gal_Tac2, i20: Gal_Moxd1, i28: Gaba_Six6, i29: Gaba_Igsf1, i38: Kiss1_Th) using Seurat. Anchors were identified between cells from the two datasets using FindIntegrationAnchors. An integrated expression matrix was generated using IntegrateData (dims = 1:10). The resulting integrated matrix was used for downstream PCA and UMAP visualization (dims = 1:10). MetaNeighbor analysis MetaNeighbor 28 was used to quantify the degree of similarity between BNSTp Esr1+ clusters and MPOA Esr1+ clusters, and between BNSTp Esr1+ clusters and cortical/hippocampal GABAergic neuron subclasses from the Allen Brain Atlas Cell Types database 29. Briefly, the BNST and MPOA Seurat objects were subsetted for Esr1+ clusters, and then transformed and merged into one SingleCellExperiment object. For the BNSTp and cortex comparison, BNSTp Esr1+ clusters were merged into a SingleCellExperiment with cortical/hippocampal GABAergic clusters. 
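Before the MetaNeighbor results, the per-cluster differential-abundance test described above is compact enough to sketch in R; 'meta' is a hypothetical data frame with one row per nucleus and columns replicate, cluster and sex.

# Sketch of the one-sided Wilcoxon test for male-biased cluster
# abundance; 'meta' (nucleus-level metadata) is an assumed input.
props <- prop.table(table(meta$replicate, meta$cluster), margin = 1)
sex <- tapply(as.character(meta$sex), meta$replicate, unique)[rownames(props)]

pvals <- apply(props, 2, function(p)
  wilcox.test(p[sex == "M"], p[sex == "F"], alternative = "greater")$p.value)
p.adjust(pvals, method = "fdr")  # Benjamini-Hochberg correction

The 'greater' alternative encodes the male-biased hypothesis tested in Fig. 2b; swapping the two groups tests female bias.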
Unsupervised MetaNeighbor analysis was performed between BNST and MPOA clusters, and between BNST and cortical/hippocampal clusters, using highly variable genes identified across datasets (called with the variableGenes function). The median AUROC value per cortical/hippocampal GABAergic subclass across Allen Brain Atlas datasets for each BNSTp Esr1+ cluster is shown in Fig. 2d. Neonatal bulk ATAC–seq analysis Differential peak calling on the neonatal bulk ATAC–seq experiment was performed with DiffBind v2.10.0 and edgeR. A count matrix was created from individual replicate BAM and MACS2 narrowPeak files (n = 3 per condition). Consensus peaks were recentred to ±250 bp around the point of highest read density (summits = 250), and the consensus peakset count matrix was subsequently used as input to edgeR. Differential peaks across the three treatment groups (NV, neonatal vehicle; NE, neonatal E2: NV female, NV male, NE female) were calculated by specifying multiple coefficients in glmQLFTest (coefficient = 4:5). To identify accessibility patterns across differential peaks (P adj < 0.05), a matrix of normalized counts in differential peaks was hierarchically clustered using pheatmap, and the resulting dendrogram tree was cut with k = 6 to achieve 6 peak clusters (Extended Data Fig. 8a). The two largest clusters were identified as having higher accessibility in NV males and NE females compared to NV females (cluster 3, labelled as NE open), or lower accessibility in NV males and NE females compared to NV females (cluster 5, labelled as NE close). Motif enrichment analysis of NE-open peaks was performed with AME using the 2020 JASPAR core non-redundant vertebrate database. GO biological process, DO and HGNC gene family enrichment analyses were performed as described above for the adult GDX treatment ATAC–seq data analysis. Neonatal single-nucleus multiome data processing and analysis Raw sequencing data were processed using the Cell Ranger ARC pipeline (v2.0.0) with the cellranger-arc mm10 reference. Default parameters were used to align reads, count unique fragments or transcripts, and filter high-quality nuclei. Individual HDF5 files for each sample containing RNA counts and ATAC fragments per cell barcode were loaded into Seurat (Read10X_h5). Nuclei with lower-end ATAC and RNA QC metrics (<1,000 ATAC fragments, <500 RNA counts, nucleosomal signal > 3, TSS enrichment < 2) were removed. DoubletFinder 87 was then used to remove predicted doublets from each sample (nExp = 9% of nuclei per sample). Following doublet removal, nuclei surpassing upper-end ATAC and RNA QC metrics (>60,000 ATAC fragments, >20,000 RNA counts, >6,000 genes detected) were removed. After filtering, Seurat objects for each sample were subsetted for the RNA assay and merged. Gene counts were normalized and log transformed (LogNormalize), and the top 2,000 variable features were identified using FindVariableFeatures (selection.method = 'vst'). Gene counts were scaled, regressing out the following variables: number of RNA counts, number of RNA genes, percentage of mitochondrial counts and biological sex. Linear dimensionality reduction was performed by principal component analysis (runPCA, npcs = 25). A k-nearest-neighbours graph was constructed on the basis of Euclidean distance in PCA space and refined (FindNeighbors, npcs = 25), and then the nuclei were clustered using the Louvain algorithm (FindClusters, resolution = 0.8). snRNA clusters were visualized with UMAP (runUMAP, dims = 25). 
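The clustering pass just described maps onto the standard Seurat calls; a minimal sketch, assuming a merged object 'seu' with 'percent.mt' and 'sex' metadata columns (hypothetical names).

# Standard Seurat clustering workflow matching the parameters in the text.
library(Seurat)

seu <- NormalizeData(seu, normalization.method = "LogNormalize")
seu <- FindVariableFeatures(seu, selection.method = "vst", nfeatures = 2000)
seu <- ScaleData(seu, vars.to.regress = c("nCount_RNA", "nFeature_RNA",
                                          "percent.mt", "sex"))
seu <- RunPCA(seu, npcs = 25)
seu <- FindNeighbors(seu, dims = 1:25)
seu <- FindClusters(seu, resolution = 0.8)  # Louvain by default
seu <- RunUMAP(seu, dims = 1:25)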
To reduce the granularity of clustering, a phylogenetic tree of cluster identities was generated from a distance matrix constructed in PCA space (BuildClusterTree) and visualized as a dendrogram (PlotClusterTree). DEGs between clusters in terminal nodes of the phylogenetic tree were calculated (FindMarkers, test.use = 'wilcox', P adj < 0.05), and clusters were merged if they had fewer than 10 DEGs with the following parameters: >0.5 avg_log[FC], <10% expression in negative nuclei, and >25% expression in positive nuclei. The final de novo snRNA-seq clusters are shown in Extended Data Fig. 10c. Inhibitory neuron clusters (Slc32a1/Gad2+) from the neonatal multiome dataset were subsequently assigned to adult BNST Esr1+ cluster labels using Seurat. Adult BNST Esr1+ clusters (as defined above) were subsetted from the adult snRNA-seq object and randomly downsampled to 5,000 nuclei. Normalization, data scaling and linear dimensionality reduction were performed with the same parameters as for neonatal and adult Esr1+ inhibitory neuron clusters. Anchor cells between adult (reference) and neonatal (query) datasets were first identified using FindTransferAnchors. Reference cluster labels, as well as the corresponding UMAP structure, were subsequently transferred to the neonatal dataset using MapQuery. Prediction scores, which measure anchor consistency across the neighbourhood structure of reference and query datasets as previously described 85, were used to quantify the confidence of label transfer from adult to neonatal nuclei. Extended Data Fig. 10d shows the prediction scores per reference cluster and time point of nuclei mapped onto adult reference cluster labels, as well as the percentage of nuclei from each de novo cluster mapped onto each adult reference cluster (prediction score > 0.5). To further validate the quality of label transfer between adult and neonatal datasets, we computed DEGs between neonatal clusters post label transfer (FindMarkers, test.use = 'wilcox', P adj < 0.05, min.diff.pct = 0.1, avg_log[FC] > 0.5) and calculated their background-subtracted, average expression (AddModuleScore) in neonatal and adult BNST Esr1+ nuclei (visualized in Extended Data Fig. 10e). To generate pseudo-bulk, normalized ATAC bigwig tracks for each snATAC cluster, we first re-processed the Cell Ranger ARC output BAM file for each sample using SAMtools (-q 30 -f 2) and removed duplicate reads per cell barcode using Picard MarkDuplicates (BARCODE_TAG=CB REMOVE_DUPLICATES = true). Sinto was used to split ATAC alignments for each cluster into individual BAM files using cell barcodes extracted from the Seurat object. CPM-normalized bigwig files were computed for each pseudo-bulk BAM file using DeepTools bamCoverage (--binSize 1 --normalizeUsing CPM). To analyse the neonatal multiome snATAC data, we used ArchR 88. Separate Arrow files were created for each multiome sample, and then merged into a single ArchR project. Gene activity scores per nucleus were calculated at the time of Arrow file creation (addGeneScoreMat = TRUE). Metadata (cluster label, sex, time and QC metrics) were transferred from the previously generated Seurat object to the ArchR project by cell barcode matching. Dimensionality reduction was performed on the snATAC data using ArchR's iterative Latent Semantic Indexing approach (addIterativeLSI). Per-nucleus imputation weights were added using MAGIC 89 in ArchR (addImputeWeights) to denoise sparse ATAC data for UMAP visualization. 
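The adult-to-neonatal label transfer above follows Seurat's reference-mapping interface; a minimal sketch, assuming prepared 'adult' (reference) and 'neonatal' (query) objects, a hypothetical 'cluster_id' metadata column, and a reference UMAP built with return.model = TRUE.

# Sketch of Seurat label transfer; object and column names are assumptions.
anchors <- FindTransferAnchors(reference = adult, query = neonatal, dims = 1:25)
neonatal <- MapQuery(anchorset = anchors, query = neonatal, reference = adult,
                     refdata = list(cluster = "cluster_id"),
                     reference.reduction = "pca",
                     reduction.model = "umap")  # requires RunUMAP(..., return.model = TRUE)
summary(neonatal$predicted.cluster.score)       # per-nucleus transfer confidence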
Cluster-aware ATAC peak calling was performed using ArchR's iterative overlap peak merging approach (addReproduciblePeakSet, groupBy = 'cluster'). Following peak calling, CISBP human motif annotations were added for each peak (addPeakAnnotation), and chromVAR deviation scores (addDeviationsMatrix) were calculated for each motif. In addition, chromVAR was used to calculate per-nucleus deviation scores for consensus BNSTp Nfix CUT&RUN peaks. To perform neuron identity regulator analysis (Extended Data Fig. 10g), the correlation between TF RNA expression and motif deviation score was calculated for all TFs in the CISBP motif database (correlateMatrices). TFs with a correlation coefficient >0.5 and a maximum TF RNA log2[FC] value between each cluster in the top 50% were classified as neuron identity regulators (coloured pink in Extended Data Fig. 10g). For visualization of gene activity and CISBP motif deviation scores (Fig. 3c and Extended Data Fig. 10g), scores were imputed (imputeMatrix), transferred to the original Seurat object by cell barcode matching, and visualized using FeaturePlot. Signac 90 was used to generate and store peak-by-cell count matrices for each sample. snATAC markers for each cluster were calculated (FindAllMarkers, test.use = 'LR', vars.to.regress = 'nCount_ATAC', min.pct = 0.1, min.diff.pct = 0.05, logfc.threshold = 0.15). Pseudo-bulk snATAC cluster CPM was computed for each marker peak using DeepTools multiBigwigSummary and visualized with pheatmap (Extended Data Fig. 10f). Motif enrichment analysis of snATAC marker peaks for each cluster was performed using FindMotifs. The top three enriched motifs per snATAC cluster are shown in Extended Data Fig. 10f. To identify sex-biased enrichment of NE-open loci across P4 snATAC clusters (Fig. 3d), we first filtered out low-abundance P4 snATAC clusters (<400 nuclei), and then computed the difference in ATAC CPM between males and females at NE-open loci in each cluster. Differential ATAC CPM values were scaled across clusters, then grouped using k-means clustering (k = 12, iter.max = 50) and visualized with pheatmap (Fig. 3d). To call sex DEGs (P adj < 0.05) in each cluster and time point, we used MAST 91 in Seurat (FindMarkers, test.use = 'MAST', min.pct = 0.05, logfc.threshold = 0.2, latent.vars = c('nFeature_RNA', 'nCount_RNA')). To link NE-regulated loci to sex DEGs at P4 and P14 (Fig. 3e and Extended Data Fig. 11h), we computed the Pearson correlation coefficient between sex DEG expression and NE-regulated peak accessibility for each cluster (LinkPeaks, min.distance = 2,000, distance = 1,000,000, min.cells = 2% of cluster size). Sex DEG log2[FC] values and NE-regulated ATAC site correlation coefficients were hierarchically clustered and visualized using ComplexHeatmap 92. P14 snRNA-seq data processing and analysis Raw sequencing data were processed using the Cell Ranger pipeline (v6.0.0) with the refdata-gex-mm10-2020-A reference. Default parameters were used to align reads, count unique transcripts and filter high-quality nuclei. Individual HDF5 files for each sample were loaded into Seurat. Nuclei with lower-end RNA QC metrics (<1,000 counts) were removed. DoubletFinder 87 was then used to remove predicted doublets from each sample (nExp = 9% of nuclei per sample). Following doublet removal, nuclei surpassing upper-end RNA QC metrics (>20,000 counts, >6,000 genes detected) were removed. After filtering, Seurat objects were merged. 
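The QC gates and doublet removal just described can be sketched with the thresholds given in the text; the DoubletFinder pN/pK values below are illustrative placeholders, since the sweep parameters are not stated, and 'seu' is assumed to be normalized, scaled and run through RunPCA already.

# Sketch of nucleus QC and doublet removal; thresholds follow the text,
# pN/pK are placeholders (the paper does not state them).
library(Seurat)
library(DoubletFinder)

seu <- subset(seu, subset = nCount_RNA > 1000)              # lower-end QC
nExp <- round(0.09 * ncol(seu))                             # 9% expected doublets
seu <- doubletFinder_v3(seu, PCs = 1:25, pN = 0.25, pK = 0.09, nExp = nExp)
df_col <- grep("^DF.classifications", colnames(seu[[]]), value = TRUE)
seu <- seu[, seu[[df_col, drop = TRUE]] == "Singlet"]       # drop predicted doublets
seu <- subset(seu, subset = nCount_RNA < 20000 & nFeature_RNA < 6000)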
Gene counts were normalized and scaled, as described for the single-nucleus multiome data processing. The P14 snRNA-seq dataset was assigned to adult BNST inhibitory cluster labels using Seurat. Adult BNST inhibitory clusters were subsetted from the adult snRNA-seq object and randomly downsampled to 10,000 nuclei. Normalization, data scaling and linear dimensionality reduction were performed with the same parameters for the P14 and adult inhibitory neuron clusters. Label transfer was then performed as described for the single-nucleus multiome data processing. Extended Data Fig. 12b shows the prediction scores of P14 nuclei mapped onto adult reference cluster labels. To validate the quality of label transfer between adult and P14 datasets, we computed DEGs between P14 clusters post label transfer, as described above, and calculated their background-subtracted, average expression (AddModuleScore) in P14 and adult BNST inhibitory clusters (shown in Extended Data Fig. 12c). Sex DEGs between control females and control or conditional ERα KO males were calculated for each P14 cluster, as described above for the multiome analysis. Cluster abundance for each group was computed and is plotted in Extended Data Fig. 12d. Neonatal bulk nuclear RNA-seq data processing and analysis Reads were trimmed to remove Illumina adapters and low-quality basecalls (cutadapt -q 30), and then mapped to the mm10 reference genome using STAR. Technical duplicate reads (identical start and end positions with the same strand orientation and identical molecular identifiers) were removed using the nudup.py python package. The number of reads mapping to each gene (including introns) on each strand (-s 1) was calculated with featureCounts 82, using the mm10.refGene.gtf file. Differential gene expression analysis was performed using DESeq2 (design = ~ treatment) after prefiltering genes by expression (rowMeans ≥ 5). Reporting summary Further information on research design is available in the Nature Research Reporting Summary linked to this paper. Data availability All sequencing data generated in this study have been deposited in GEO (GSE144718). The following publicly available datasets were also analysed: MCF7 ERα ChIP–seq (GSE59530), mouse liver ERα ChIP–seq (GSE49993), mouse liver ERα ChIP–seq (GSE52351), mouse uterus ERα ChIP–seq (GSE36455), mouse uterus ERα ChIP–seq (GSE49993), mouse aorta ERα ChIP–seq (GSE52351), mouse mammary gland ERα ChIP–seq (GSE130032), BNST snRNA-seq (GSE126836), MPOA single-cell RNA-seq (GSE113576) and the Allen Brain Institute Cell Types Database. Source data are provided with this paper. Code availability Custom scripts can be found at .
Sex hormones play an important role in shaping an animal's behavior, and their influence starts early. Early-life hormonal surges help shape the developing brain, establishing circuitry that will influence behavior for a lifetime. Hundreds of genes in the brain fall under the control of estrogen. Fluctuating levels of the hormone cause shifts in mood, energy balance, and behavior throughout life, in addition to sculpting developing neural circuits early on. These effects occur when activated estrogen receptors sit directly on a cell's DNA to turn genes on or off. Cold Spring Harbor Laboratory Assistant Professor Jessica Tollkuhn, graduate student Bruno Gegenhuber, and their colleagues have been mapping exactly where estrogen receptors latch onto DNA inside mouse brain cells. They've looked at both males and females and compared the brains of adults to the still-developing brains of young pups. In the journal Nature, they report on the hormone receptor's targets in the brain and show that estrogen sets up physical differences in the brains of males and females during development. Tollkuhn explains that estrogen is present in the brains of both males and females: some neurons make it themselves out of testosterone. In male mice, estrogen generated through a surge of testosterone that is released soon after birth shapes developing circuitry. As a result, certain brain regions are larger and contain more cells in males than they do in females, a difference that affects a range of behaviors in adulthood, including mating, parenting, and aggression. "There's this critical period when the brain is developing and wiring up that it has to get this input in order to make these permanent changes in the brain wiring. This is a transient surge, but it seems to have extremely long-lasting effects on brain development." Tollkuhn's team examined where estrogen receptors landed after this hormonal surge, focusing on a brain region called the BNST, which is larger in males than females in both mice and humans. They found a host of genes that were under estrogen's control, including many involved in neurodevelopment and neuronal signaling. And although estrogen itself remains in the brain for only a few hours, it seems that the hormone-controlled genes remain active for weeks. Now that they know which genes estrogen targets in the brain, Tollkuhn's team plans to explore exactly how those genes mediate the hormone's diverse effects on brain development, behavior, and disease.
10.1038/s41586-022-04686-1
Physics
Break in temporal symmetry produces molecules that can encode information
Y. Marques et al. Chiral magnetic chemical bonds in molecular states of impurities in Weyl semimetals, Scientific Reports (2019). DOI: 10.1038/s41598-019-44842-8 Journal information: Scientific Reports
http://dx.doi.org/10.1038/s41598-019-44842-8
https://phys.org/news/2019-08-temporal-symmetry-molecules-encode.html
Abstract We demonstrate that the chirality of electron scattering in Weyl semimetals leads to the formation of magnetic chemical bonds for molecular states of a pair of impurities. The effect is associated with the presence of time-reversal symmetry breaking terms in the Hamiltonian, which drive a crossover from s- to p-wave scattering. The profiles of the corresponding molecular orbitals and their spin polarizations are defined by the relative orientation of the lines connecting two Weyl nodes and two impurities. The magnetic character of the molecular orbitals and their tunability open the way for using doped Weyl semimetals for spintronics and the realization of qubits. Introduction Recent years have witnessed an unprecedented penetration of the ideas of high-energy physics into the domain of condensed matter. In particular, much attention is now attracted to condensed-matter realizations of three-dimensional (3D) massless quasi-relativistic particles known as Dirac or Weyl fermions 1. The experimental observation of Dirac fermions in such materials as Na3Bi 2, 3 and Cd3As2 4, 5 made possible the study of 3D analogs of graphene physics in a robust, topologically protected material possessing both inversion ( \( {\mathcal I} \) ) and time-reversal ( \({\mathscr{T}}\) ) symmetries 6. In Weyl semimetals, where one of these symmetries is broken, a Dirac node, the point where conduction and valence bands touch, splits into a pair of Weyl nodes with opposite chiralities. Such nodes are predicted to give rise to a plethora of interesting phenomena, including the formation of Fermi arcs, unusual Hall effects, and the chiral anomaly, among others 6, 7, 8, 9, 10, 11, 12. The material platform for the realization of Weyl fermions is provided by such compounds as tantalum arsenide (TaAs) 13, 14, 15, 16, 17, niobium arsenide (NbAs) 18, and tantalum phosphide (TaP) 19. One aspect of Weyl semimetals that has recently received particular attention is their peculiar impurity physics 20, 21, 22, 23, 24, 25. For instance, in the case of a single Kondo impurity, Zheng et al. 24 observed beating patterns in the local density of states strongly dependent upon the \( {\mathcal I} \) and \({\mathscr{T}}\) symmetries. Additionally, some of us found an unusual antibonding-type ground state for a diatomic molecule immersed in a Dirac host, which corresponds to a Weyl semimetal with both of the above symmetries preserved 25. Working beyond these regimes, we here consider a \({\mathscr{T}}\)-breaking Weyl semimetal and propose the formation of molecules due to an unprecedented chemical-bond mechanism, which we reveal to be of chiral-magnetic nature. In the present work, we clarify the role played by the chirality of Weyl quasiparticles in impurity scattering processes by investigating the local density of states. The latter can be experimentally addressed by means of scanning tunneling microscopy (STM). We show that long-range Friedel-like oscillations 26 contribute to the formation of molecular states of a pair of distant impurities embedded in a 3D relativistic semimetal. We demonstrate that the scenario of impurity scattering is radically different in Dirac and Weyl semimetals and show that in the latter case magnetic molecular states can be formed. Their particular type is defined by the relative orientation of the lines connecting two Weyl nodes and two impurities. 
We report a crossover from s- to p-type atomic orbitals for individual impurities and the related formation of spin-polarized σ- and π-type molecular orbitals for an impurity pair. Model We set \(\hbar = 1\) throughout the calculations and represent the total Hamiltonian as the sum of three terms: $$\mathcal{H} = \mathcal{H}_0 + \mathcal{H}_{\rm d} + \mathcal{H}_{\mathcal{V}}.$$ (1) The low-energy Hamiltonian of the host may be represented as $$\mathcal{H}_0 = \sum_{\mathbf{k}} \psi^{\dagger}(\mathbf{k})\,(H_+ \oplus H_-)\,\psi(\mathbf{k}),$$ (2) where \(\psi(\mathbf{k}) = (c_{\mathbf{k}+\uparrow}, c_{\mathbf{k}+\downarrow}, c_{\mathbf{k}-\uparrow}, c_{\mathbf{k}-\downarrow})^T\) is the four-spinor operator whose components \(c_{\mathbf{k}\chi\sigma}^{\dagger}\) (\(c_{\mathbf{k}\chi\sigma}\)) stand for the creation (annihilation) operators of an electron with wave vector \(\mathbf{k}\) and spin \(\sigma\), and $$H_{\chi}(\mathbf{k}) = \chi v_F\,\boldsymbol{\sigma} \cdot (\mathbf{k} - \chi \mathbf{Q}),$$ (3) where \(\mathbf{k} = (k_x, k_y, k_z)\) is the three-dimensional wave vector, \(\boldsymbol{\sigma}\) stands for the vector of Pauli matrices, the index \(\chi = \pm 1\) corresponds to the chirality of the Weyl nodes and \(v_F\) is the Fermi velocity. For \(\mathbf{Q} = 0\), \({\mathscr{T}}\) symmetry is conserved and the pair of Weyl nodes is degenerate, which corresponds to the case of a standard Dirac semimetal. If \({\mathscr{T}}\) symmetry is broken (\(\mathbf{Q} \neq 0\)), the two Weyl nodes are displaced with respect to each other towards two different points of the Brillouin zone located at \(\pm\mathbf{Q}\), but remain energetically degenerate, as depicted in Fig. 1. Figure 1 Panel (a) Sketch of the proposed setup. Two impurities are embedded in a 3D semimetal of Dirac or Weyl type. The density of electrons forming molecular orbitals can be probed by an STM tip. Panels (b,c) show the low-energy band structure for Dirac and \({\mathscr{T}}\)-breaking Weyl semimetals with two Weyl nodes located at \(\pm Q_i\), i = x, y, z. The blue color of the lower cones indicates the filling of the valence bands, the black dotted line is the Fermi energy set at \(\varepsilon_F = 0\) and the red dotted line corresponds to the single-particle energy of the impurities. The impurities are modeled by the Hamiltonian $$\mathcal{H}_{\rm d} = \sum_{j\sigma} \varepsilon_{j\sigma} d_{j\sigma}^{\dagger} d_{j\sigma} + \sum_j U_j n_{j\uparrow} n_{j\downarrow},$$ (4) with \(\varepsilon_{j\sigma}\) being the single-particle energy and \(U_j\) the on-site Coulomb repulsion, whereas \(n_{j\sigma} = d_{j\sigma}^{\dagger} d_{j\sigma}\) counts the electrons with spin projection \(\sigma\) at site j, with \(d_{j\sigma}^{\dagger}\) and \(d_{j\sigma}\) being the corresponding creation and annihilation operators. The hybridization between the host and the impurities is described by the term $$\mathcal{H}_{\mathcal{V}} = \sum_{j\mathbf{k}} \hat{d}_j^{\dagger} \hat{V}_{j\mathbf{k}} \psi(\mathbf{k}) + \mathrm{H.c.},$$ (5) wherein \(\hat{d}_j^{\dagger} = (d_{j\uparrow}^{\dagger}, d_{j\downarrow}^{\dagger})\) and $$\hat{V}_{j\mathbf{k}} = \begin{pmatrix} V_{j\mathbf{k}} & 0 & V_{j\mathbf{k}} & 0 \\ 0 & V_{j\mathbf{k}} & 0 & V_{j\mathbf{k}} \end{pmatrix},$$ (6) where \(V_{j\mathbf{k}} = \frac{v_0}{\sqrt{N}} e^{i\mathbf{k}\cdot\mathbf{R}_j}\), with \(v_0\) being the hybridization amplitude between electrons of the host and localized states of the impurities positioned at \(\mathbf{R}_j\) (j = 1, 2), and N the normalization factor yielding the total number of conduction states. 
As in earlier work by some of us 27, we assume for the sake of simplicity that the hopping term \(V_{j\mathbf{k}}\) neglects the exponential decay of the Bloch states right above the Weyl semimetal surface that overlap with those from the STM-tip apex. Under this assumption, STM-tip measurements of the local density of states would merely be attenuated with respect to the values extracted from our simulations, so that no generality is lost, as ensured by Plihal et al. 28, who analyzed this density for the single-impurity problem. Local Density of States (LDOS) The electronic properties of the considered system are determined by the LDOS of the host, which can be experimentally accessed by means of an STM tip. It can be calculated using the standard equation-of-motion (EOM) procedure 29, 30 as $$\rho(\varepsilon, \mathbf{r}_m) = -\frac{1}{\pi} \sum_{\sigma} \mathrm{Im}\{\tilde{\mathscr{G}}_{\sigma}(\varepsilon, \mathbf{r}_m)\} = \rho_0(\varepsilon) + \sum_{jj'} \delta\rho_{jj'}(\varepsilon, \mathbf{r}_m),$$ (7) where \(\tilde{\mathscr{G}}_{\sigma}(\varepsilon, \mathbf{r}_m)\) is the time-Fourier transform of the retarded Green's function $$\mathscr{G}_{\sigma}(t, \mathbf{r}_m) = -i\theta(t) \langle \{\psi_{\sigma}(t, \mathbf{r}_m), \psi_{\sigma}^{\dagger}(0, \mathbf{r}_m)\} \rangle_{\mathcal{H}},$$ (8) where \(\theta(t)\) denotes the Heaviside function, \(\psi_{\sigma}(t, \mathbf{r}_m)\) is the field operator of the host electrons written in terms of the continuous variable \(\mathbf{r}_m\), the brackets \(\langle \cdots \rangle_{\mathcal{H}}\) denote the ensemble average with respect to the full Hamiltonian, \(\{\cdots\}\) is the anticommutator of operators in the Heisenberg picture, and \(\rho_0(\varepsilon) = \frac{6\varepsilon^2}{D^3}\) is the pristine host DOS, with D the energy cutoff corresponding to the half-bandwidth. The term $$\delta\rho_{jj'}(\varepsilon) = -\frac{1}{\pi v_0^2} \sum_{\chi\chi'} \sum_{\sigma} \mathrm{Im}[\Sigma_{\sigma}^{\chi}(\mathbf{r}_{mj})\, \tilde{\mathscr{G}}_{j\sigma|j'\sigma}(\varepsilon)\, \Sigma_{\sigma}^{\chi'}(\mathbf{r}_{j'm})]$$ (9) encodes the Friedel-like oscillations describing the scattering of the conduction electrons by the impurities, where the terms \(j' = j\) and \(j' \neq j\) give rise to intra- and inter-impurity scattering processes, respectively, which are ruled by the spatially dependent self-energy $$\Sigma_{\sigma}^{\chi}(\mathbf{r}_{mj}) = -\frac{3 v_0^2 \pi v_F}{2 D^3 |\mathbf{r}_{mj}|}\, e^{-i|\mathbf{r}_{mj}|\frac{\varepsilon}{v_F}}\, e^{-i\chi\mathbf{Q}\cdot\mathbf{r}_{mj}} \left(\varepsilon \pm \chi\sigma\varepsilon \pm i\frac{\chi\sigma v_F}{|\mathbf{r}_{mj}|}\right),$$ (10) with \(\mathbf{r}_{mj} = \mathbf{r}_m - \mathbf{R}_j\), where the ± signs correspond to the vector direction (positive for \(\mathbf{r}_{mj}\), negative for \(\mathbf{r}_{jm}\)). Equation (10), and consequently the LDOS, is spatially anisotropic due to the mutual orientation of \(\mathbf{Q}\) and \(\mathbf{r}_{mj}\) entering the plane wave \(e^{-i\chi\mathbf{Q}\cdot\mathbf{r}_{mj}}\). This leads to the main finding of this work, which is ruled by the mixing of the chirality (χ) and spin (σ) quantum numbers, as one can perceive from the crossed terms \(\pm\chi\sigma\varepsilon \pm i\frac{\chi\sigma v_F}{|\mathbf{r}_{mj}|}\). 
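To make the role of these crossed terms explicit, one can sum Eq. (10) over the two chiralities; suppressing the common prefactor and taking the upper sign choice, this is only a compact restatement of the anisotropy argument developed below, not an additional result: $$\sum_{\chi=\pm 1} e^{-i\chi\mathbf{Q}\cdot\mathbf{r}} \left[\varepsilon + \chi\sigma\left(\varepsilon + \frac{i v_F}{|\mathbf{r}|}\right)\right] = 2\varepsilon\cos(\mathbf{Q}\cdot\mathbf{r}) - 2i\sigma\left(\varepsilon + \frac{i v_F}{|\mathbf{r}|}\right)\sin(\mathbf{Q}\cdot\mathbf{r}),$$ so the σ-dependent contribution survives only when \(\sin(\mathbf{Q}\cdot\mathbf{r}_{jj'}) \neq 0\), i.e., when the Weyl-node separation has a finite component along the inter-impurity axis.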
Such a set of terms thus opens the possibility of spin-polarized molecular states, which are chirality-dependent, as we will see below. To show their emergence, we must account for the system anisotropy through the condition \(e^{-i\chi\mathbf{Q}\cdot\mathbf{r}_{jj'}} \neq 1\), with the relative distance between the impurities given by \(\mathbf{r}_{jj'} = \mathbf{R}_j - \mathbf{R}_{j'}\). As a consequence, the spin degeneracy of the quasiparticle energy correction of impurity j due to j' (and vice versa), namely \(\Sigma_{\sigma}(\mathbf{r}_{jj'}) = \sum_{\chi} \Sigma_{\sigma}^{\chi}(\mathbf{r}_{jj'})\), becomes lifted. In this way, \(\Sigma_{\uparrow}(\mathbf{r}_{jj'}) \neq \Sigma_{\downarrow}(\mathbf{r}_{jj'})\) and spin-polarized molecular states are allowed. In the absence of anisotropy (\(e^{-i\chi\mathbf{Q}\cdot\mathbf{r}_{jj'}} = 1\)), the degeneracy \(\Sigma_{\uparrow}(\mathbf{r}_{jj'}) = \Sigma_{\downarrow}(\mathbf{r}_{jj'})\) holds and paramagnetic molecular states prevail. We stress that this effect is entirely distinct from that caused by an external Zeeman field, which naturally breaks spin degeneracy regardless of the orientation \(\mathbf{Q}\cdot\mathbf{r}_{jj'}\). The chiral magnetic effect addressed here thus cannot be reproduced by an applied magnetic field, and neither can the proposed chiral magnetic chemical bond. We guide the reader to the supplementary material, where a concise derivation of Eq. (10) can be readily followed. \(\tilde{\mathscr{G}}_{j\sigma|j'\sigma}(\varepsilon)\) is the time-Fourier transform of the Green's function of the impurities $$\mathscr{G}_{j\sigma|j'\sigma}(t) = -i\theta(t) \langle \{d_{j\sigma}(t), d_{j'\sigma}^{\dagger}(0)\} \rangle_{\mathcal{H}}.$$ (11) Application of the EOM method to \(\tilde{\mathscr{G}}_{j\sigma|j'\sigma}(\varepsilon)\), together with the Hubbard-I decoupling scheme 31, yields $$\tilde{\mathscr{G}}_{j\sigma|j\sigma}(\varepsilon) = \frac{\lambda_j^{\bar{\sigma}}}{g_{j\sigma|j\sigma}^{-1}(\varepsilon) - \lambda_j^{\bar{\sigma}} \Sigma_{\sigma}(\mathbf{r}_{jj'})\, g_{j'\sigma|j'\sigma}(\varepsilon)\, \lambda_{j'}^{\bar{\sigma}} \Sigma_{\sigma}(\mathbf{r}_{j'j})},$$ (12) where \(\bar{\sigma} = -\sigma\), \(j \neq j'\), \(\lambda_j^{\bar{\sigma}} = 1 + \frac{U_j}{g_{j\sigma|j\sigma}^{-1}(\varepsilon) - U_j} \langle n_{j\bar{\sigma}} \rangle\) is the spectral weight, \(g_{j\sigma|j\sigma}(\varepsilon) = \frac{1}{\varepsilon - \varepsilon_{j\sigma} - \Sigma_0}\) is the noninteracting single-impurity Green's function, $$\langle n_{j\bar{\sigma}} \rangle = -\frac{1}{\pi} \int_{-\infty}^{+\infty} n_F(\varepsilon)\, \mathrm{Im}(\tilde{\mathscr{G}}_{j\bar{\sigma}|j\bar{\sigma}}(\varepsilon))\, d\varepsilon$$ (13) is the occupation number of an impurity, with \(n_F(\varepsilon)\) the Fermi–Dirac distribution, $$\Sigma_0 = \frac{3 v_0^2}{D^2} \left(\frac{\varepsilon^2}{D} \ln\left|\frac{D+\varepsilon}{D-\varepsilon}\right| - 2\varepsilon - i\frac{\varepsilon^2}{D}\right)$$ (14) is the local self-energy and 
$$\tilde{\mathscr{G}}_{j\sigma|j'\sigma}(\varepsilon) = g_{j\sigma|j\sigma}(\varepsilon)\, \lambda_j^{\bar{\sigma}} \Sigma_{\sigma}(\mathbf{r}_{jj'})\, \tilde{\mathscr{G}}_{j'\sigma|j'\sigma}(\varepsilon).$$ (15) Results and Discussion In order to understand the formation of the molecular states of a pair of impurities inside a Weyl semimetal, we start by analyzing the case of a single impurity. As model parameters, we adopt the impurity energy \(\varepsilon_{j\sigma} = -0.07D\), hybridization amplitude \(v_0 = -0.14D\), on-site Coulomb repulsion \(U_j = 0.14D\), \(\hbar v_F \approx 3\ \mathrm{eV\,\unicode{x212B}}\), \(D \approx 0.2\ \mathrm{eV}\) and temperature T = 0 K. Concerning the latter, we clarify that no generality is lost, since finite T merely introduces thermal broadening into the Fermi–Dirac distribution \(n_F(\varepsilon)\), and hence into the LDOS via Eq. (13) for the impurity occupation number \(\langle n_{j\bar{\sigma}} \rangle\); moreover, as some of us have shown 27, phonon modes become effective as T increases and restore the molecular ground state to bonding type. Thus, in the current work the molecules show an antibonding ground state 25, owing to the T = 0 K condition. As one can see in Fig. 2, the 2D map of the LDOS, which can be probed by an STM tip over the system surface, presents a crossover from s- to p-type atomic orbitals as Q is increased and one moves from the Dirac (Q = 0) towards the Weyl regime (Q ≠ 0). This happens due to the presence of the terms depending on \(v_F \chi \mathbf{Q}\) in the original Hamiltonian. Note that the p-orbital is elongated along the direction of Q. Figure 2 2D LDOS maps for the case of a single impurity taken at fixed energy \(\varepsilon\). The crossover from s- to p-type orbitals associated with moving from the Dirac (Q = 0) to the Weyl (Q ≠ 0) regime is clearly seen. Now we can analyze the molecular state corresponding to a pair of impurities inside a Weyl semimetal with broken \({\mathscr{T}}\) symmetry. We consider two cases of the mutual orientation of the vector Q and the vector \(\mathbf{r}_{12} = \mathbf{R}_1 - \mathbf{R}_2\) connecting the two impurities: (i) perpendicular orientation, \(\mathbf{Q}\cdot\mathbf{r}_{12} = 0\), and (ii) parallel orientation, \(\mathbf{Q}\cdot\mathbf{r}_{12} = |\mathbf{Q}||\mathbf{r}_{12}|\). As we will demonstrate, the former case corresponds to the formation of spin-degenerate molecular orbitals, while the latter gives rise to chiral magnetic chemical bonds. In all plots of \(\rho(\varepsilon, \mathbf{r}_m)\) and \(\delta\rho_{jj'}(\varepsilon, \mathbf{r}_m)\) versus ε/D that we present, the STM tip is pinned at the site \(\mathbf{r}_m = (1,1,1)\ \mathrm{nm}\) right above the Weyl semimetal surface and the impurities are buried in the bulk at \(\mathbf{R}_{1,2} = (0, \mp 1, 0)\ \mathrm{nm}\). Let us start from the case \(\mathbf{Q}\cdot\mathbf{r}_{12} = 0\). For an individual impurity, a single energy resonance appears within the valence band in \(\rho(\varepsilon, \mathbf{r}_m)\). Naturally, in the two-impurity system a pair of peaks corresponding to bonding and antibonding states appears, as shown in Fig. 3b and d for the cases of Dirac (Q = 0) and \({\mathscr{T}}\)-breaking Weyl (Q ≠ 0) hosts. 
Note that the coupling between the impurities is fully mediated by Friedel-like oscillations of the electronic density of the host, mathematically described by the self-energy \(\lambda_{j}^{\bar{\sigma}}\Sigma_{\sigma}(\mathbf{r}_{j\bar{j}})\,g_{\bar{j}\sigma|\bar{j}\sigma}(\varepsilon)\,\lambda_{\bar{j}}^{\bar{\sigma}}\Sigma_{\sigma}(\mathbf{r}_{\bar{j}j})\) entering the denominator of \(\tilde{\mathscr{G}}_{j\sigma|j\sigma}(\varepsilon)\) given by Eq. (12).

Figure 3 LDOS for a pair of impurities. Panels (a,b,e,f) correspond to the case of a Dirac semimetal (Q = 0); panels (c,d,g,h) to the case of a \(\mathscr{T}\)-breaking Weyl semimetal with \(\mathbf{Q}\cdot \mathbf{r}_{12}=0\). Panels (a and c) display the diagonal (\(\delta\rho_{jj}\)) and off-diagonal (\(\delta\rho_{j\bar{j}}\)) contributions to the LDOS. Note that \(\delta\rho_{j\bar{j}}\) reveals both dips and peaks, corresponding to anti-resonances and resonances respectively, while \(\delta\rho_{jj}\) reveals peaks only. The total LDOS is presented in panels (b and d). We set \(\mathbf{r}_{m}=(1,1,1)\,\mathrm{nm}\); the energy is counted from the Fermi level set at \(\varepsilon_{F}=0\); for the Weyl host \(Q_{x}=0.02\). The total LDOS on the \(\mathbf{r}_{m}=(x,y,1)\,\mathrm{nm}\) surface for the energies of the bonding and antibonding states (\(\varepsilon=-0.067D\) and \(\varepsilon=-0.059D\)) is shown in panels (e,f,g,h). Panels (i and j) illustrate how the molecular orbitals of panels (e–h) are formed from the atomic orbitals of Fig. 2.

The 2D map of the molecular orbitals on the host surface is presented in panels (e–h) of Fig. 3. Panels (e and f) correspond to the case of a Dirac host, previously considered by some of us in ref. 25. One clearly sees the emergence of bonding and antibonding molecular orbitals with σ-type symmetry, resulting from the interference between the two s-wave atomic orbitals of the individual impurities, as illustrated in panel (i). Panels (g and h) correspond to the case of a Weyl semimetal with \(\mathbf{Q}\cdot \mathbf{r}_{j\bar{j}}=0\), for which the individual impurities reveal p-type atomic orbitals stretched in the direction perpendicular to the line connecting the impurities. Note that for this case the bonding and antibonding molecular orbitals have clear π-type symmetry. These orbitals remain spin degenerate, as follows from Eq. (10), which leads to \(\Sigma_{\sigma}(\mathbf{r}_{jj'})=\sum_{\chi}\Sigma_{\sigma}^{\chi}(\mathbf{r}_{jj'})\) independent of the spin degree of freedom, i.e., \(\Sigma_{\uparrow}(\mathbf{r}_{jj'})=\Sigma_{\downarrow}(\mathbf{r}_{jj'})\).

The case of parallel orientation of the vectors Q and \(\mathbf{r}_{12}\) is illustrated by Fig. 4. Note that in this case, according to Eq. (10), the presence of the terms \(e^{i\chi \mathbf{Q}\cdot \mathbf{r}_{j\bar{j}}}\) with χ = ±1 in the expression for the self-energy \(\Sigma_{\sigma}(\mathbf{r}_{j\bar{j}})=\sum_{\chi}\Sigma_{\sigma}^{\chi}(\mathbf{r}_{j\bar{j}})\) lifts the spin degeneracy and gives rise to the formation of chiral magnetic chemical bonds. Interestingly, this spin dependence cannot be considered fully equivalent to one induced by an effective external magnetic field, since the sequence of the peaks in the LDOS presented in Fig. 4(b) does not correspond to the usual alternation of spin-up and spin-down states, but instead consists of two inner spin-down states flanked by two outer spin-up states, as can be clearly seen in Fig. 4(c).
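To illustrate how the Friedel-like inter-impurity self-energy in Eq. (12) splits each impurity resonance into a bonding/antibonding pair, the sketch below plugs an assumed damped-oscillatory coupling into the two-impurity Green's function. The amplitude, wave number, and functional form of sigma_inter are illustrative placeholders (the actual expression follows from Eq. (10) and its supplementary derivation), the spin/chirality structure responsible for the magnetic bond is not modeled, and the sketch reuses D, eps_grid, g_bare, and lam from the previous block.

```python
from scipy.signal import find_peaks

def sigma_inter(eps, r, kF=2.0, amp=0.02):
    """Toy Friedel-like inter-impurity coupling: a damped oscillation
    in the impurity separation r. A placeholder, NOT Eq. (10) itself."""
    return amp * D * np.exp(1j * kF * r) / (kF * r)

def G_two_imp(eps, n_bar, r):
    """Diagonal two-impurity Green's function, Eq. (12), for two
    identical impurities (same lambda and g on both sites)."""
    g = g_bare(eps)
    l = lam(eps, n_bar)
    s = sigma_inter(eps, r)
    return l / (1.0 / g - l * s * g * l * s)

# Each single-impurity resonance splits into a bonding/antibonding pair
# once the off-diagonal coupling is switched on.
rho_pair = -np.imag(G_two_imp(eps_grid, 0.5, r=2.0)) / np.pi
peaks, _ = find_peaks(rho_pair)
print("resonances at eps/D:", eps_grid[peaks])
```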
The profiles of the spin-resolved orbitals corresponding to the bonding and antibonding states are shown in panels (d–g) of Fig. 4. These orbitals exhibit σ-type symmetry and are formed due to the interference between two frontal p-wave orbitals, as sketched in Fig. 4(h).

Figure 4 LDOS for the \(\mathscr{T}\)-breaking Weyl semimetal for the case when the vectors Q and \(\mathbf{r}_{12}\) are parallel (we took \(\mathbf{Q}=0.02\,\hat{j}\)). Panel (a) displays the diagonal (\(\delta\rho_{jj}\)) and off-diagonal (\(\delta\rho_{j\bar{j}}\)) contributions to the LDOS. The total LDOS is presented in panel (b). Panel (c) shows the spin-resolved density of states. The map of the total LDOS on the \(\mathbf{r}_{m}=(x,y,1)\,\mathrm{nm}\) surface for the energies corresponding to the four spin-resolved molecular states is presented in panels (d–g). Panel (h) illustrates how the molecular orbitals of panels (d–g) are formed from the atomic orbitals of Fig. 2.

To shed more light on the splitting between the spin-polarized components of the LDOS, we investigate the impurity magnetization characterized by the polarization degree \(p=(\langle n_{j\uparrow}\rangle-\langle n_{j\downarrow}\rangle)/(\langle n_{j\uparrow}\rangle+\langle n_{j\downarrow}\rangle)\), where the occupation numbers are defined by Eq. (13). The dependence of the magnetization on \(Q_{y}\), the separation between the Weyl nodes in the direction parallel to the line connecting the two impurities, is shown in Fig. 5(a). One clearly sees pronounced periodic behavior, stemming from the oscillations of the factor \(e^{i\chi \mathbf{Q}\cdot \mathbf{r}_{j\bar{j}}}\) in the expression for the spin-resolved self-energy in Eq. (10). The LDOS corresponding to the spin-degenerate case and to the maximal positive and negative magnetizations is shown in Fig. 5(d). Note that for the spin-degenerate situation corresponding to \(Q_{y}=0.157\), the shape of the molecular orbitals presented in Fig. 5(b,c) can be represented as a linear combination of the orbitals presented in Fig. 4(d–g).

Figure 5 Panel (a) Total magnetization of the impurities as a function of the parameter \(Q_{y}\) describing the shift of the Weyl nodes in reciprocal space in the direction parallel to the line connecting the impurities. Panels (b,c) Maps of the total LDOS on the \(\mathbf{r}_{m}=(x,y,1)\,\mathrm{nm}\) surface for the energies corresponding to the bonding and antibonding states in the valence band for the spin-degenerate case \(Q_{y}=0.157\). Panel (d) Spin-resolved LDOS at the point \(\mathbf{r}_{m}=(1,1,1)\,\mathrm{nm}\) for the spin-degenerate case (\(Q_{y}=0.157\)) and for maximal spin-up (\(Q_{y}=0.015\)) and spin-down (\(Q_{y}=0.142\)) polarizations. The LDOS for \(Q_{y}=0.157\) is rescaled by a factor of 5. The insets highlight the marked sectors. Panel (e) illustrates how the molecular orbitals of panels (b and c) are formed from the atomic orbitals of Fig. 2.
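The polarization degree p follows directly from the T = 0 limit of Eq. (13), where the Fermi-Dirac distribution reduces to a step function at \(\varepsilon_{F}=0\). The helpers below are a minimal sketch of that evaluation; the spin-resolved Green's functions G_up and G_dn would come from the spin-dependent version of Eq. (12), which is not reproduced here, so the final line is left as a hypothetical usage.

```python
def occupation(eps_grid, G_vals):
    """Eq. (13) at T = 0: n_F(eps) is a step, so integrate -Im(G)/pi
    over occupied states only (Fermi level at eps = 0)."""
    occ = eps_grid < 0.0
    return -np.trapz(np.imag(G_vals[occ]), eps_grid[occ]) / np.pi

def polarization(n_up, n_dn):
    """Polarization degree p = (<n_up> - <n_dn>) / (<n_up> + <n_dn>)."""
    return (n_up - n_dn) / (n_up + n_dn)

# Hypothetical usage: with spin-resolved Green's functions G_up, G_dn
# evaluated for a given Q_y, sweeping Q_y and re-evaluating p would
# trace out the periodic magnetization curve of Fig. 5(a).
# p = polarization(occupation(eps_grid, G_up), occupation(eps_grid, G_dn))
```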
Conclusions

We analyzed the structure of the molecular orbitals corresponding to a pair of impurities placed within a Weyl semimetal, focusing on the role played by the \(\mathscr{T}\)-symmetry breaking. For this purpose the corresponding LDOS was evaluated. It was demonstrated that the terms in the self-energy stemming from the chirality-dependent minimal coupling drive a crossover from the spin-degenerate σ-type molecular orbitals characteristic of a Dirac host to either spin-degenerate π-type orbitals or spin-polarized σ-type orbitals in a Weyl host. The type of chemical bonding in the latter case can be controlled by varying the position of the impurities relative to the vector Q describing the shift of the Weyl nodes. The magnetic character of the molecular orbitals and their tunability open the way for using doped Weyl semimetals for spintronics and the realization of qubits.

Methods

The findings of this research depicted in the figures of the Results and Discussion section, which concern Eq. (7) for the system LDOS, were obtained by performing self-consistent evaluations of the impurity occupation numbers defined by Eq. (13). These calculations, and the corresponding figures, were carried out with numerical packages in Python 3.7.

Data Availability

The authors declare that the data supporting the findings of this study are available within the paper (and its supplementary information files).
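The self-consistent evaluation described in the Methods amounts to a fixed-point iteration between Eq. (12) for the Green's function and Eq. (13) for the occupation number. The loop below is a minimal sketch of such a scheme, reusing eps_grid, G_two_imp, and occupation from the previous blocks; the linear-mixing step, tolerance, and starting guess are conventional mean-field choices of ours, not details taken from the paper.

```python
def self_consistent_occupation(r, tol=1e-6, max_iter=200):
    """Iterate <n> -> occupation(G(<n>)) until convergence, coupling
    Eq. (12) for the Green's function to Eq. (13) for the occupation."""
    n_bar = 0.5  # paramagnetic half-filling starting guess
    for it in range(max_iter):
        G = G_two_imp(eps_grid, n_bar, r)
        n_new = occupation(eps_grid, G)
        if abs(n_new - n_bar) < tol:
            return n_new, it
        n_bar = 0.5 * (n_bar + n_new)  # linear mixing for stability
    return n_bar, max_iter

n_sc, iterations = self_consistent_occupation(r=2.0)
print(f"self-consistent <n> = {n_sc:.4f} after {iterations} iterations")
```

Linear mixing is the simplest stabilizer for such loops; the converged occupations then feed the LDOS used for plots like those in Figs. 3–5.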
In a study published in Scientific Reports, a group of researchers affiliated with São Paulo State University (UNESP) in Brazil describes an important theoretical finding that may contribute to the development of quantum computing and spintronics (spin electronics), an emerging technology that uses electron spin or angular momentum, rather than electron charge, to build faster, more efficient devices. The study was supported by the São Paulo Research Foundation (FAPESP). Its principal investigator was Antonio Carlos Seridonio, a professor in UNESP's Department of Physics and Chemistry at Ilha Solteira, São Paulo State. His graduate students Yuri Marques, Willian Mizobata and Renan Oliveira also participated. The researchers observed that molecules with the capacity to encode information are produced in systems called Weyl semimetals when time-reversal symmetry is broken. These systems can be considered three-dimensional versions of graphene and are associated with very peculiar kinds of objects called Weyl fermions. These are massless, quasi-relativistic, chiral particles: quasi-relativistic because they move similarly to photons (the fundamental "particles" of light) and behave as if they were relativistic, contracting space and dilating time. The term "chiral" applies to an object that cannot be superimposed onto its mirror image. A sphere is achiral, but our left and right hands are chiral. In the case of Weyl fermions, chirality makes them behave as magnetic monopoles, unlike all magnetic objects in the everyday world, which behave as dipoles. Weyl fermions were proposed in 1929 by the German mathematician, physicist and philosopher Hermann Weyl (1885-1955) as a possible solution to Dirac's equation. Formulated by the British theoretical physicist Paul Dirac (1902-1984), this equation combines principles of quantum mechanics and special relativity to describe the behavior of electrons, quarks and other objects. Weyl fermions are hypothetical entities and have never been observed freely in nature, but studies performed in 2015 showed that they can be the basis for explaining certain phenomena. Like Majorana fermions, which also solve Dirac's equation, Weyl fermions manifest themselves as quasiparticles in condensed-matter systems. This field, in which high-energy physics and condensed-matter physics converge, has mobilized major research efforts, not only because of the opportunities it offers for the development of basic science but also because the peculiarities of these quasiparticles may one day be used in quantum computing to encode information. The new study conducted at UNESP Ilha Solteira advanced in that direction. "Our theoretical study focused on molecules made up of widely separated atoms. These molecules wouldn't be viable outside the Weyl context because the distance between atoms prevents them from forming covalent bonds and hence from sharing electrons. We demonstrated that the chirality of electron scattering in Weyl semimetals leads to the formation of magnetic chemical bonds," Seridonio said. Examples of Weyl semimetals include tantalum arsenide (TaAs), niobium arsenide (NbAs) and tantalum phosphide (TaP). "In these materials, Weyl fermions play a role analogous to that of electrons in graphene. However, graphene is a quasi-2-D system, whereas these materials are fully 3-D," Seridonio said.
The theoretical study showed that in these systems Weyl fermions emerge when Dirac fermions split, Dirac fermions being the category that comprises all material particles of the so-called Standard Model, with the possible exception of neutrinos. These splits occur at points where the conduction band (the band in which free electrons circulate) touches the valence band (the band formed by the atoms' outermost electrons). "A break in symmetry makes this point, the Dirac node, split into a pair of Weyl nodes with opposite chiralities. In our study, we broke the time-reversal symmetry," Seridonio said. Time-reversal symmetry essentially means that a system remains the same if the flow of time is reversed. "When this symmetry is broken, the resulting molecule has spin-polarized orbitals." In ordinary molecular systems, spin-up and spin-down electrons are evenly distributed in the electron cloud. This is not the case in Weyl systems. "The result is a molecule in which the spin-up and spin-down electron clouds are spatially different. This peculiarity can be used to encode information because the molecule can be mapped onto a binary state, the bit, which is the basic unit of information," Seridonio said.
10.1038/s41598-019-44842-8